Eric Carter, Senior Director of Marketing, on the rate at which data is growing and Hedvig's scalable solution to meet enterprise needs. Slides are from Santa Clara University's Annual Conference on Massive Storage Systems and Technology 2015.
Slides shown in the Hedvig booth at VMworld 2016. They highlight scale-out, software-defined storage - both hyperscale and hyperconverged - for large VMware vSphere environments.
Learn about Hedvig's Docker Datacenter plugin and how it simplifies the world of software-defined storage by creating persistent virtual disks.
http://www.hedviginc.com/docker-resources
Red Hat and Verizon teamed up to take attendees of Red Hat Storage Day New York on 1/19/16 through a tour of containerized storage and why it's important to the future of storage.
Implementation of Dense Storage Utilizing HDDs with SSDs and PCIe Flash Acc... Red_Hat_Storage
At Red Hat Storage Day New York on 1/19/16, Red Hat partner Seagate presented on how to implement dense storage using HDDs with SSDs and PCIe flash accelerator cards.
Software-defined storage (SDS) refers to a software controller that manages and virtualizes physical storage in order to control how data is stored.
Attendees of Red Hat Storage Day New York on 1/19/16 heard from Red Hat's Ross Turk why software-defined storage matters and how it can help solve data challenges at the petabyte scale and beyond.
Cloudian and Rubrik - Hybrid Cloud based Disaster Recovery Cloudian
Cloudian and Rubrik outline the benefits of using a modern hybrid cloud based approach for your VMware backups. While everyone is promising instant restores and shorter backup windows, Rubrik's solution with Cloudian additionally adds policy-based management, reducing complexity. Tiering of data to Cloudian S3 Object Storage ensures that only the hottest backups are consuming SSD storage. Both solutions are scale-out, so adding additional capacity is a breeze.
Key Architecture and Performance Principles to Optimize Data Management Jana Lass
These slides are from a webinar that featured a discussion on ways companies can address shifts in big data infrastructure and design the appropriate data management architecture for optimal performance and scale. It covers lessons gleaned from real customer challenges and implementations, offering attendees practical advice on the kinds of design decisions that can optimize the protection and management of modern data platforms, from Hadoop to NoSQL databases.
Tegile Systems is a leading provider of next-generation flash-driven storage arrays. With our patented metadata acceleration technology, Tegile arrays deliver inline data de-duplication and compression without any performance hit. This enables enterprises to deliver optimal storage capacity and performance for workloads such as databases, server virtualization and virtual desktops, while dramatically reducing costs. Tegile is backed by premier venture capital firms August Capital and Meritech and strategic investors HGST and SanDisk. Follow us on Twitter @tegile
Practical information on how to Optimize Virtual Machines for High Performance by Boyan Krosnov, Chief Product Officer at StorPool Storage
Presentation delivered at OpenNebula TechDay Sofia on 25 February 2016
All your BIM are belong to us - Revit and FME for Enterprise Data Management Safe Software
This session illustrates the use of FME and Revit to create data management workflows for AEC firms. Topics include user-driven reports, BIM validation workflows, data extraction and collection routines, Revit to GIS workflows, and more.
See more presentations from the FME User Conference 2014 at: www.safe.com/fmeuc
In this session, we'll discuss new volume types in Red Hat Gluster Storage. We will talk about erasure codes and storage tiers, and how they can work together. Future directions will also be touched on, including rule-based classifiers and data transformations.
You will learn about:
How erasure codes lower the cost of storage (a quick overhead sketch follows this list).
How to configure and manage an erasure coded volume.
How to tune Gluster and Linux to optimize erasure code performance.
Using erasure codes for archival workloads.
How to utilize an SSD inexpensively as a storage tier.
Gluster's erasure code and storage tiering design.
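To make the cost point concrete (this arithmetic is mine, not from the session, and the 4+2 and 8+3 layouts are purely illustrative choices): a k+m erasure-coded volume stores k data fragments plus m parity fragments, so each usable terabyte costs (k+m)/k terabytes of raw capacity, compared with 3x for three-way replication.

```python
# Back-of-the-envelope raw-capacity cost per usable TB: replication vs erasure coding.
def raw_per_usable(data_fragments, parity_fragments):
    """Raw-to-usable ratio for a k+m erasure-coded layout."""
    return (data_fragments + parity_fragments) / data_fragments

print(f"3-way replication: {3.0:.2f}x raw per usable TB, tolerates 2 node failures")
print(f"4+2 erasure code:  {raw_per_usable(4, 2):.2f}x, tolerates 2 failures")   # 1.50x
print(f"8+3 erasure code:  {raw_per_usable(8, 3):.2f}x, tolerates 3 failures")   # 1.38x
```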
Gluster for Geeks: Performance Tuning Tips & Tricks GlusterFS
In this Gluster for Geeks technical webinar, Jacob Shucart, Senior Systems Engineer, will provide useful tips and tricks to make a Gluster cluster meet your performance requirements. He will review considerations for all different phases including planning, configuration, implementation, tuning, and benchmarking.
Topics covered will include:
• Protocols (CIFS, NFS, GlusterFS)
• Hardware configuration
• Tuning parameters
• Performance benchmarks
Building Distributed Systems without Docker, Using Docker Plumbing Projects -... Patrick Chanezon
Docker provides an integrated and opinionated toolset to build, ship and run distributed applications. Over the past year, the Docker codebase has been refactored extensively to extract infrastructure plumbing components that can be used independently, following the UNIX philosophy of small tools doing one thing well: runC, containerd, swarmkit, hyperkit, vpnkit, datakit and the newly introduced InfraKit.
This talk will give an overview of these tools and how you can use them to build your own distributed systems without Docker.
Patrick Chanezon & David Chung, Docker & Phil Estes, IBM
"Who Moved my Data? - Why tracking changes and sources of data is critical to...Cask Data
Speaker: Russ Savage, from Cask
Big Data Applications Meetup, 09/14/2016
Palo Alto, CA
More info here: http://www.meetup.com/BigDataApps/
Link to talk: https://youtu.be/4j78g3WvC4Y
About the talk:
As data lake sizes grow, and more users begin exploring and including that data in their everyday analysis, keeping track of the sources for data becomes critical. Understanding how a dataset was generated and who is using it allows users and companies to ensure their analysis is leveraging the most accurate and up to date information. In this talk, we will explore the different techniques available to keep track of your data in your data lake and demonstrate how we at Cask approached and attempted to mitigate this issue.
Turning object storage into vm storage wim_provoost
Object Storage is today the standard for building scale-out storage. But due to technical hurdles it is impossible to run Virtual Machines directly from an Object Store. Open vStorage is the layer between the hypervisor and Object Store and turns the Object Store into a high-performance, distributed, VM-centric storage platform.
Enterprises that are interested in modernizing their data center have a few important things to consider. Does the company have 5 people or fewer? Is it important to scale compute and storage as needed? Based on several considerations, Hedvig Inc. can support your enterprise in achieving either a hyperscale or hyperconverged solution. Check out this infographic to learn more about hyperscale and hyperconverged solutions.
VMworld 2013: Virtualization and Converged Infrastructure Solutions VMworld
VMworld 2013
Brent Allen, HP
Trey Layton, VCE
Lucas Nguyen, VMware
John Power, IBM
Andy Rhodes, Dell
Jeff Schneider, Lenovo
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The truth is that your customers don’t care about backup. They care about recovery.
In this webinar, backup specialist Christophe Bertrand of Arcserve will talk about simplifying backup and discuss the key things you need to focus on when providing disaster recovery services to today's businesses.
Join Christophe and the N-able team as we cover:
• The impact of downtime on an average business and how to minimize data loss
• Taking the complexity out of backup and talking about what matters
• How you can measure data protection and meet your SLA
These slides - based on the webinar - shed light on how business stakeholders make the most of information from their big data environments and the requirements those stakeholders have to turn big data into business impact.
Using recent big data end-user research from leading IT analyst firm Enterprise Management Associates (EMA), data from Vertica's recent benchmarks on SQL on Hadoop, and firsthand customer experiences, viewers will learn:
- Use cases where end users around the world are using big data in their organizations
- How maturity with big data strategies impacts why and how business stakeholders use information from their big data environments
- How Vertica empowers the use of information from big data environments
Boost Performance with Scala – Learn From Those Who’ve Done It! Cécile Poyet
Scalding is a Scala DSL for Cascading. Running on Hadoop, it's a concise, functional, and very efficient way to build big data applications. One significant benefit of Scalding is that it allows easy porting of Scalding apps from MapReduce to newer, faster execution fabrics.
In this webinar, Cyrille Chépélov, of Transparency Rights Management, will share how his organization boosted the performance of their Scalding apps by over 50% by moving away from MapReduce to Cascading 3.0 on Apache Tez. Dhruv Kumar, Hortonworks Partner Solution Engineer, will then explain how you can interact with data on HDP using Scala and leverage Scala as a programming language to develop Big Data applications.
Insurance companies face challenges similar to those of other disrupted market segments, such as changing customer expectations and hence the need to differentiate themselves anew as a brand in a challenging market environment. At the same time, the industry operates under very strict regulatory pressure. Generali Switzerland, like many market leaders in every industry, has understood the power of data to reimagine its markets, customers, products, and business model, and managed this change by building its Connection Platform within one year.
Christian Nicoll, Director of Platform Engineering & Operations at Generali Switzerland guides us through their journey of setting up an event-driven architecture to support their digital transformation project.
Attend this online talk and learn more about:
-How Generali managed to assemble various parts into one platform
-The architecture of the Generali Connection Platform, including Confluent, Kafka, and Attunity.
-Their challenges, best practices, and lessons learned
-Generali’s plans of expanding and scaling the Connection Platform
-Additional Use Cases in regulated markets like retail banking
Some interesting case studies of how we helped our clients adopt DevOps. The cases cover various fields within the DevOps space: CI/CD, monitoring, and cloud migration.
Red Hat Summit 2015: Red Hat Storage Breakfast session Red_Hat_Storage
See the presentation shared during a special breakfast session during Red Hat Summit 2015. Learn about our mission, what areas and communities are seeing strong growth, and much more.
SnappyFlow is a full stack observability platform providing DevOps and SRE practitioners and Application Developers with a comprehensive solution bringing together metrics, logs, traces, profiling, alerts, and synthetics in a unified platform.
Extending open source and hybrid cloud to drive OT transformation - Future Oi...John Archer
A look at ESG concerns and the agility needed to address pressures to transform energy organizations through decarbonization. Presented at the Future Oil and Gas conference, November 2021.
2015 02 12 talend hortonworks webinar challenges to hadoop adoption Hortonworks
Hadoop is no longer optional. Companies of all sizes are in various phases of their own Big Data journey. Whether you are just starting to explore the platform or have multiple clusters up and running, everyone is presented with a similar challenge - developing their internal skillset. Hadoop specialists are hard to find. Hand coding is too prone to error when it comes to storing, integrating or analyzing your data. However, it doesn’t need to be this difficult.
In this recorded webinar, Talend and Hortonworks help you learn how to unify all your data in Hadoop, with no specialized Big Data skills.
Find the recording here. www.talend.com/resources/webinars/challenges-to-hadoop-adoption-if-you-can-dream-it-you-can-build-it
This webinar covers how Hadoop opens a new world of analytic applications, how to bridge the skills gap with our Big Data solutions, and a real-world, simple technical demo.
This Hadoop Hive Tutorial will unravel the complete Introduction to Hive, Hive Architecture, Hive Commands, Hive Fundamentals & HiveQL. In addition to this, even fundamental concepts of BIG Data & Hadoop are extensively covered.
At the end, you'll have a strong knowledge regarding Hadoop Hive Basics.
PPT Agenda
✓ Introduction to BIG Data & Hadoop
✓ What is Hive?
✓ Hive Data Flows
✓ Hive Programming
----------
What is Apache Hive?
Apache Hive is a data warehousing infrastructure built on top of Hadoop and targeted at SQL programmers. Hive lets SQL programmers enter the Hadoop ecosystem directly, without any prerequisites in Java or other programming languages. HiveQL is similar to SQL and is used to manage and query data via Hadoop and MapReduce operations.
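As a hedged illustration of the paragraph above (the tutorial itself does not prescribe a client), here is one common way a HiveQL query might be submitted from Python using the PyHive library; the table and column names are made up for the example.

```python
# Illustrative only: PyHive is one common Python client for Hive; the web_logs
# table and its columns are invented for this sketch.
from pyhive import hive

conn = hive.Connection(host="localhost", port=10000, username="hive")
cursor = conn.cursor()

# HiveQL looks like SQL but is executed as Hadoop/MapReduce (or Tez) jobs.
cursor.execute("""
    SELECT page, COUNT(*) AS hits
    FROM web_logs
    WHERE year = 2015
    GROUP BY page
    ORDER BY hits DESC
    LIMIT 10
""")
for page, hits in cursor.fetchall():
    print(page, hits)
```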
----------
Hive has the following 5 Components:
1. Driver
2. Compiler
3. Shell
4. Metastore
5. Execution Engine
----------
Applications of Hive
1. Data Mining
2. Document Indexing
3. Business Intelligence
4. Predictive Modelling
5. Hypothesis Testing
----------
Skillspeed is a live e-learning company focusing on high-technology courses. We provide live instructor led training in BIG Data & Hadoop featuring Realtime Projects, 24/7 Lifetime Support & 100% Placement Assistance.
Email: sales@skillspeed.com
Website: https://www.skillspeed.com
The new dominant companies are running on data SnapLogic
The cost of Digital Transformation is dropping rapidly. The technologies and methodologies are evolving to open up new opportunities for new and established corporations to drive business. We will examine specific examples of how and why a combination of robust infrastructure, cloud first and machine learning can take your company to the next level of value and efficiency.
Rich Dill, SnapLogic's enterprise solutions architect, at Big Data LDN 2017.
Similar to Modern storage for modern business: get to know Hedvig
Understanding Globus Data Transfers with NetSage Globus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
An Enterprise Resource Planning (ERP) system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which helps enhance productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Navigating the Metaverse: A Journey into Virtual Evolution Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
May Marketo Masterclass, London MUG May 22 2024.pdf Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Enhancing Project Management Efficiency_ Leveraging AI Tools like ChatGPT.pdf Jay Das
With the advent of artificial intelligence or AI tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
SOCRadar Research Team: Latest Activities of IntelBroker SOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite Google
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
Globus Compute with IRI Workflows - GlobusWorld 2024 Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Developing Distributed High-performance Computing Capabilities of an Open Sci... Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
First Steps with Globus Compute Multi-User Endpoints Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review process.
A Comprehensive Look at Generative AI in Retail App Testing.pdf kalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptx rickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Providing Globus Services to Users of JASMIN for Environmental Data Analysis Globus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Modern storage for modern business: get to know Hedvig
1. Modern Storage for Modern Business
Eric Carter, Senior Director, Marketing, Hedvig Inc. (@ercarter)
2. Business innovation requires flexibility
(Diagram: someone has a great idea, someone has to develop it, and someone has to deploy it and manage it; business innovation and time-to-market rest on flexible infrastructure.)
3. According to Forrester . . .
10X: faster growth of data than storage budgets
58%: take days, weeks, or months to provision storage
14%: have “cloud-like” storage provisioning
Source: Forrester Technology Adoption Profile: Meet Evolving Business Demands With Software-Defined Storage, March 2015. Visit hedviginc.com for full research report.
4. The new way of doing storage
Commodity Servers + Software = Software-Defined Storage
5. It’s not so new (as this audience knows…)
…but it is new for most enterprises
6. It’s what Hedvig brings to the enterprise
Commodity Servers + Software = Hedvig Distributed Storage Platform
(Diagram timeline: 2004-2007, 2007-2011, 2012-present)
7. ...and scales to massive proportion
Elastic: scale to petabytes of data
Simple: provide block, file, and object storage
Flexible: connect to any compute, or cloud
8. How it works
1. Create Virtual Disk(s) and assign policies via UI, CLI, or API
2. Present block, file, or object storage to application tier
3. Capture I/O and communicate it to underlying cluster
4. Distribute and replicate data across cluster
5. Auto-tier and auto-balance across racks, datacenters, and clouds
(Diagram: application hosts running VMs connect over iSCSI, NFS, and S3/Swift to a Hedvig storage cluster spanning DC1, DC2, and Cloud3.)
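To make step 1 concrete, here is a minimal sketch of what creating a virtual disk through an API might look like. The endpoint path, field names, and host are invented for illustration and are not Hedvig's documented API; only the concepts (choice of protocol, policies such as the default replication factor of three, dedupe, and client-side caching) come from the deck and speaker notes.

```python
# Hypothetical sketch of step 1: create a virtual disk and assign policies via API.
# The URL, payload fields, and host name are invented; they are not Hedvig's actual API.
import requests

HEDVIG_API = "https://hedvig-cluster.example.com/api/v1"   # placeholder address

disk_spec = {
    "name": "vm-datastore-01",
    "sizeGB": 2048,
    "protocol": "iSCSI",          # block in this example; NFS or S3/Swift are the other options
    "replicationFactor": 3,       # the talk cites three replicas as the default
    "deduplication": True,
    "clientSideCaching": True,
}

resp = requests.post(f"{HEDVIG_API}/virtualdisks", json=disk_spec, timeout=30)
resp.raise_for_status()
print("Created virtual disk:", resp.json().get("name"))
```

Per step 1 of the slide, the same operation is also exposed through the UI and CLI; some customers use the API to build their own self-service interface.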
9. Why do it this way?
Scale: seamlessly add or subtract capacity/performance – grow to 1000+ nodes
Simplicity: simplify provisioning with storage resource abstraction – a few clicks
Resilience: self-heal – never manually restore or migrate data again
Performance: leverage compute cluster, flash – ride commodity hardware power curve
Cost: take advantage of low-cost whitebox hardware and appreciating software
Flexibility: choose disk type and deployment – hyperscale or hyperconverged
10. Hedvig design principles
Scale-out seamlessly with x86 or ARM
Provision with unprecedented flexibility and speed
Consolidate all protocols in one platform
Run agnostic to any hypervisor, container, or OS
Provide hybrid-aware DR, across any public cloud
Support hyperconverged and hyperscale
Enable enterprise-grade storage features
11. Who is doing it with Hedvig ;)
Big data: quick, reliable indexing of 100M active client docs. Hedvig is a scale-out, flash-friendly solution to replace local SSDs.
Server virtualization: bringing apps back in house on virtualized, multi-datacenter architecture. Hedvig provides enterprise features and built-in resiliency.
Private cloud: growing at a rate of 1 terabyte per hour in private and hybrid cloud environments. Hedvig provides scalability and multi-protocol flexibility for managed services.
I’ll start with a quick acknowledgement of a dynamic that we see in the marketplace – business executives want to drive innovation, and in today’s world that rides atop technology more so than at any other time in our history. This means there is pressure on application developers to develop apps in response to a business initiative and to get them to market quickly.
According to Forrester, data is growing 10 times faster than storage budgets. This creates a big challenge – how do you keep up? It also takes a very long time to provision new storage into an environment – 58% of those surveyed report that it takes days, weeks, or even months to roll out a new solution. The bad news as well is that while many are on the quest to gain a cloud-like capability, only 14% report having anything that even resembles the simplicity of cloud-like storage provisioning.
More and more companies are moving away from traditional monolithic arrays for storing VMs to a software-based approach – where the capabilities and value of enterprise storage are purchased and delivered via software as opposed to being locked up in a custom hardware solution. And WHY are companies considering and already adopting this kind of approach? One reason is flexibility – today’s software-defined solutions – like Hedvig – give IT organizations the ability to zig and zag much more effectively with the demands of business. A recent Forrester study indicates that nearly 60% of businesses report that storage provisioning takes days or longer with traditional solutions. With software-defined storage, we’re in an era where provisioning is much more cloud-like, taking minutes to achieve versus hours, days, or weeks. It also provides the ability to define virtual volumes with features and functions – like VMware VVOLs – assigning capabilities that uniquely fit a given application or service.
A software approach also gives you the ability to bring your own hardware – to virtualize and pool your resources in ways that weren’t possible before – helping deliver significant value from existing and new assets.
More and more companies are taking a page out of the playbook of the Googles, Amazons, and Facebooks, choosing to deploy off-the-shelf systems versus custom hardware. But why?
Yes it is a cost thing – but it’s lower cost without sacrificing quality. We had a bit of a mild online debate recently as to the meaning of commodity. For many of us who have been in the storage industry for some time, it’s a foreign proposition – however, commodity doesn’t mean cheap and low quality; instead it signals that, pound for pound, there is parity across the market in terms of the components that are available. This is really where we are with the server market. The quality of a given commodity node may differ slightly, but it is essentially uniform across producers: x86 servers, ARM servers, SSDs, and certainly hard drives. I argued in a recent blog that if you pull the bezel off some of today’s more traditional storage array solutions, inside you’ll see the same components that are available to you in the server hardware from your provider of choice – Supermicro, Dell, Cisco, HP, etc.
Paired with the software approach to storage, this means that for your virtualization environment, you can ride Moore’s law and take advantage of the frequent increases in processing power, drive capacity, etc. while maintaining or even dropping your costs over time.
Commodity systems for many are easier to obtain and stock. One of our customers chose software-defined storage running on standard Cisco servers for this very reason. For them, it was much faster and easier to obtain and deploy commodity Cisco servers to add capacity to their storage system than it was to try to requisition proprietary array hardware. It also gives them the ability to get out of the game of pre-buying capacity in the traditional fashion in favor of a more just-in-time approach.
At its most basic level, if you look at the equation on the screen – you take commodity servers - X86 or ARM – preferably packed with disk – including flash which we’ll talk further about shortly – and you add the Hedvig software – and you have what we call the Hedvig Distributed Storage Platform.
Think about this as an elastic storage cluster to support all of your various application requirements.
You bring to the party the kind of servers you’re comfortable with – typically off-the-shelf type hardware – and you leverage this well-known, easy-to-acquire hardware to run our software and support your storage needs.
You determine the characteristics of your cluster – the quality of the servers you bring will contribute to the performance you can achieve with IOPs and so-forth – and the quantity of nodes contributes to the availability.
And with Hedvig – this is what you can achieve - you simply add nodes to scale – even to petabytes of data. Hedvig lends itself very well to elastic workloads.
When we set out to build the platform, we knew going in that you have requirements for storage of various types – block, file, and object – to enable the choice for you to deliver storage as you need it – all from the same platform.
And with the flexibility to connect to any app, or compute platform, or even cloud that you want to take advantage of – along with a wide range of enterprise features we’ll discuss in the coming slides.
Hedvig is a single solution you can leverage to meet your wide-ranging needs without having to resort to standing up islands of storage, train people on different products and figure out how to work with different vendors. Hedvig is all about simplicity and flexibility at scale.
Now let’s take a look at a day-in-the-life, if you will.
Step one – an admin – or user – creates the Virtual Disk resource – this can be via UI, or CLI – or via API which some customers use to build their own self-service interface.
Next, that virtual disk is presented to a host via the Storage Proxy – providing a block or file resource – and in the case of object storage, via the S3 or Swift API.
As hosts write data, the proxy intercepts the I/O and sends it down to the storage cluster.
In the distributed systems approach, this data is distributed and stored in smaller chunks and tracked throughout the cluster – taking advantage of the horsepower and resources of the underlying x86 systems. Data replicas – the default being three – are then made and distributed as well – this is what allows for redundancy and high availability of data.
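The sketch below is a generic illustration of that idea, not Hedvig's actual placement algorithm: a virtual disk is split into chunks and each chunk gets three replicas on distinct nodes. The node count, chunk size, and hashing scheme are invented for the example.

```python
# Generic illustration of chunking plus 3-way replica placement. This is NOT
# Hedvig's actual algorithm; node count, chunk size, and hashing are made up.
import hashlib
from itertools import islice

NODES = [f"node-{i}" for i in range(6)]   # hypothetical 6-node cluster
REPLICATION_FACTOR = 3                    # the talk cites three replicas as the default
CHUNK_MB = 64                             # illustrative chunk size

def place_replicas(disk, chunk_index):
    """Deterministically pick REPLICATION_FACTOR distinct nodes for one chunk."""
    digest = hashlib.sha256(f"{disk}:{chunk_index}".encode()).hexdigest()
    start = int(digest, 16) % len(NODES)
    ring = NODES[start:] + NODES[:start]  # walk the node list from a hashed offset
    return list(islice(ring, REPLICATION_FACTOR))

for chunk in range(4):
    print(f"chunk {chunk} ({CHUNK_MB} MB) ->", place_replicas("vm-datastore-01", chunk))
```

Because any three distinct nodes hold a copy of each chunk, the cluster can lose nodes and still serve data, which is what enables the self-healing behavior described later.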
And in the normal course of operations, the Storage Service will auto-tier data, auto balance across our server resources, and make sure that data is distributed to datacenters or clouds as dictated by your set policy.
The great news here is that with Hedvig, you don’t need to be an expert in distributed systems to operate your storage system. The intelligence is built-into the software.
This means utilizing storage software and commodity hardware to deploy storage that takes advantage of many systems versus just a single box or a few – to scale out, not up – harnessing the power of many systems working together to deliver horsepower and capacity, an approach that takes advantage of parallelism. And by many I mean hundreds or more. You can provision storage very quickly – very cloud-like – with a few clicks.
A software approach also lends itself to flexible deployment models – running applications and storage services together in a hyperconverged fashion, or, decoupling and scaling compute and storage individually – what we refer to as hyperscale or a two-tier model.
You also get the benefit of hardware advancements as they emerge – riding Moore’s law.
Again from the Cloud-company playbook what we can achieve with a distributed storage system is a new level of resilience and performance.
And imagine removing old nodes and adding new nodes over time – and having data auto-replicate and auto-rebalance. You can finally get out of the business of doing data migrations and forklift system upgrades. With software, we also benefit from updates and enhancements over time, such that it is an appreciating asset. This moves us away from throwing hardware and software in the rubbish every 3-5 years and starting again. The software you invest in today can still be adding value to you on updated hardware for many, many years. And programmability: this is NOT exclusive to software-defined, but software-based storage that is not locked into custom hardware lends itself much more readily to cloud approaches, to microservices, and the like. For DevOps, software-defined storage is a must-have, enabling automation for provisioning, scaling, and VM and data movement.
This approach helps you control costs – your investment in Hedvig software is an appreciating asset, protecting against technical and functional obsolescence.
What are the use cases?
First – we see server virtualization environments as prime places for you to leverage the Hedvig software-defined approach – especially where you are using multiple hypervisors. Where you need scalability, elasticity, and agility to keep up with all of these apps, span multiple locations and clouds, and accommodate the rate of change – Hedvig is a good fit.
Next – for those building private clouds and looking to modernize how you deliver IT, you need a storage platform that is designed to scale with cloud-like capabilities. Hedvig provides the flexibility to support a wide range of options for storage configuration and for scalability, taking advantage of commodity economics. With Hedvig, you can achieve a managed service offering that provides onsite and offsite storage capabilities with built-in disaster-recovery-as-a-service.
For big data Hadoop deployments – Hedvig provides a single, efficient storage pool – what the industry is now calling a data lake – to deliver data into multiple Hadoop distributions. Hedvig delivers tunable capabilities and features like dedupe and client-side caching to deliver efficiency and performance for big data.