For years, vendors have been trying to drive down the cost of flash so that the all-flash data center can become a reality. The problem is that even the rapidly declining price of flash storage can't keep pace with the equally rapidly declining price of hard disk. As a result, data that does not need to be on flash storage has to be stored on something less expensive. But does that less expensive storage need to be another hard disk array, or could it be stored in the cloud?
Join Storage Switzerland's founder George Crump and Avere Systems CEO Ron Bianchini for an interactive webinar, Using the Cloud to Create an All-Flash Data Center.
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
In this webinar, join experts from Storage Switzerland and Tegile to discover whether the All-Flash Data Center can become reality. We will explore the return on investment that All-Flash systems can deliver, like increased user and virtual machine densities, lower drive counts and simpler storage architectures. We will also look at some of the methods that All-Flash systems employ to deliver an acceptable cost per GB, like thin provisioning, clones, deduplication and compression. Finally, we will take one last look at disk: does it have a role in the All-Flash Data Center, and if it does, what should that role be?
Structor - Automated Building of Virtual Hadoop Clusters, by Owen O'Malley
Discusses Vagrant scripts to set up and deploy a working multi-node Hadoop cluster, with or without security. All source code is available at https://github.com/hortonworks/structor .
Hadoop Operations: Starting Out Small / So Your Cluster Isn't Yahoo-sized (yet), by Michael Arnold
Hadoop Summit 2012 - Deployment and Operations track
Everyone hears about large clusters with thousands of machines and petabytes of storage, yet not everyone starts their first Hadoop deployment with dozens of cabinets of equipment. What do you do when you don't have quite as large a deployment? What decisions should you make now, and which should you postpone for later? This session is for SysAdmins who have not yet, or have only recently, jumped into the Hadoop fray. You will be presented with the knowledge gained from two years of operational experience at a (currently) small Hadoop site. We will discuss things that are initially important for a small (10-100 node) cluster and what happens when you outgrow your first deployment.
Hadoop Operations for Production Systems (Strata NYC), by Kathleen Ting
Hadoop is emerging as the standard for big data processing and analytics. However, as usage of Hadoop clusters grows, so do the demands of managing and monitoring these systems.
In this full-day Strata Hadoop World tutorial, attendees will get an overview of all phases for successfully managing Hadoop clusters, with an emphasis on production systems — from installation, to configuration management, service monitoring, troubleshooting and support integration.
We will review tooling capabilities and highlight the ones that have been most helpful to users, and share some of the lessons learned and best practices from users who depend on Hadoop as a business-critical system.
Building clouds with Apache CloudStack, Apache Roadshow 2018, by ShapeBlue
Talk given at Apache Roadshow, FOSS Backstage, Berlin, June 2018
Apache CloudStack is open source software designed to deploy and manage large networks of virtual machines, as a highly available, highly scalable Infrastructure as a Service (IaaS) cloud computing platform. This talk will give an introduction to the technology, its history and its architecture. It will look at common use cases (and some real production deployments) seen across both public and private cloud infrastructures, and at where CloudStack can be complemented by other open source technologies.
The talk will also compare and contrast Apache CloudStack with other IaaS platforms and explain why the speaker thinks that the technology, combined with the Apache governance model, will see CloudStack become the de facto open source cloud platform. He will run a live demo of the software and talk about ways that people can get involved in the Apache CloudStack project.
This presentation will discuss best practices for designing and building a solid, robust and flexible Hadoop platform on an enterprise virtual infrastructure. Attendees will learn the flexibility and operational advantages of virtual machines, such as fast provisioning, cloning, high levels of standardization, hybrid storage, vMotioning, increased stabilization of the entire software stack, High Availability and Fault Tolerance. This is a can't-miss presentation for anyone wanting to understand the design, configuration and deployment of Hadoop in virtual infrastructures.
Soft-Shake 2013: Enabling Realtime Queries to End Users, by Benoit Perroud
Since it became an Apache Top Level Project in early 2008, Hadoop has established itself as the de-facto industry standard for batch processing. The two layers composing its core, HDFS and MapReduce, are strong building blocks for data processing. Running data analysis and crunching petabytes of data is no longer fiction. But the MapReduce framework does have two major drawbacks: query latency and data freshness.
At the same time, businesses have started to exchange more and more data through REST APIs, leveraging HTTP verbs (GET, POST, PUT, DELETE) and URIs (for instance http://company/api/v2/domain/identifier), pushing the need to read data in a random-access style – from simple key/value lookups to complex queries.
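As a concrete illustration of that random-access pattern, here is a minimal sketch using Python's requests library. The host, paths and JSON response shape are hypothetical, echoing the example URI above rather than any real service.

```python
# Hypothetical sketch: random-access reads and writes over a REST API.
# The base URL and resource names are placeholders for illustration.
import requests

BASE_URL = "http://company/api/v2"  # placeholder from the example URI above

def get_record(domain: str, identifier: str) -> dict:
    """GET a single record by key -- a simple key/value style lookup."""
    response = requests.get(f"{BASE_URL}/{domain}/{identifier}", timeout=5)
    response.raise_for_status()
    return response.json()

def create_record(domain: str, payload: dict) -> dict:
    """POST a new record to the collection resource."""
    response = requests.post(f"{BASE_URL}/{domain}", json=payload, timeout=5)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    user = get_record("users", "42")  # one record, fetched on demand
    print(user)
```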
Enhancing the BigData stack with real time search capabilities is the next natural step for the Hadoop ecosystem, because the MapReduce framework was not designed with synchronous processing in mind.
There is a lot of traction today in this area, and this talk will try to answer the question of how to fill this gap with specific open-source components, ultimately building a dedicated platform that will enable real-time queries on Internet-scale data sets. After discussing the evolution of common Hadoop platform deployments, a hybrid approach called the lambda architecture will be proposed. It will be demonstrated with concrete examples, discussing which technologies could be a good match and how they would interact together.
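To make the proposed lambda architecture concrete, here is a minimal sketch of its query-side merge. The stores are modeled as plain dicts, and the key names and counts are invented for illustration.

```python
# Minimal sketch of the lambda architecture's serving layer, assuming two
# illustrative stores: a batch view (precomputed by MapReduce, so hours
# stale) and a speed-layer view (incrementally updated in real time).
batch_view = {"page_views:home": 1_000_000}   # produced by the batch layer
speed_view = {"page_views:home": 4_321}       # events since the last batch run

def query(key: str) -> int:
    # The serving layer merges both views: the batch view supplies the
    # bulk of the answer, the speed layer fills in the data the batch
    # layer has not processed yet. This addresses the data-freshness gap.
    return batch_view.get(key, 0) + speed_view.get(key, 0)

def on_batch_run_complete(new_batch_view: dict) -> None:
    # When a new batch run finishes, swap in the fresh batch view and
    # discard the speed-layer deltas it has now absorbed.
    batch_view.clear()
    batch_view.update(new_batch_view)
    speed_view.clear()

print(query("page_views:home"))  # 1004321
```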
What is Trove, the Database as a Service on OpenStack? by OpenStack_Online
Trove was integrated into the Icehouse release of OpenStack to provision and manage databases in an OpenStack Cloud. With Trove, developers can spin up a database instance on demand in an instant.
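For a sense of what "on demand in an instant" looks like from code, here is a hedged sketch using the python-troveclient v1 API of that era. The auth endpoint, credentials, flavor ID and names are all placeholders, and the exact client constructor signature may differ between releases.

```python
# Hedged sketch: provisioning a database instance with python-troveclient.
# All credentials, endpoints and IDs below are placeholders, and the
# constructor arguments are an assumption about the v1 client of this era.
from troveclient.v1 import client

trove = client.Client("demo-user", "demo-password",
                      project_id="demo-project",
                      auth_url="http://keystone:5000/v2.0")

# Ask Trove for a new instance: a flavor (CPU/RAM), a volume size in GB,
# an initial database, and a user who can access it.
instance = trove.instances.create(
    name="my-db",
    flavor_id="2",
    volume={"size": 5},
    databases=[{"name": "appdb"}],
    users=[{"name": "appuser", "password": "s3cret",
            "databases": [{"name": "appdb"}]}],
)
print(instance.id, instance.status)  # typically BUILD until it goes ACTIVE
```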
Please sign up for upcoming OpenStack Online Meetups: http://www.meetup.com/OpenStack-Online-Meetup/
From cache to in-memory data grid. Introduction to Hazelcast, by Taras Matyashovsky
This presentation:
* covers basics of caching and popular cache types
* explains the evolution from a simple cache to a distributed one, and from a distributed cache to an IMDG (a short client sketch follows this list)
* does not describe the usage of NoSQL solutions for caching
* is not intended as a product comparison or a promotion of Hazelcast as the best solution
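As a small taste of the IMDG end of that evolution, here is a hedged sketch using the Hazelcast Python client (hazelcast-python-client); the cluster address and map contents are placeholders.

```python
import hazelcast

# Connect to a running Hazelcast cluster (the address is a placeholder).
client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])

# A distributed map looks like a local dict, but its entries are
# partitioned and replicated across the cluster's members -- the step
# from a simple in-process cache to an in-memory data grid.
sessions = client.get_map("session-cache").blocking()
sessions.put("user:42", "alice-session-token")
print(sessions.get("user:42"))  # may be served by any member of the grid

client.shutdown()
```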
Best Practices for Using Alluxio with Apache Spark with Gene Pang, by Spark Summit
Alluxio, formerly Tachyon, is a memory-speed virtual distributed storage system that leverages memory for storing data and accelerating access to data in different storage systems. Many organizations and deployments use Alluxio with Apache Spark, and some of them scale out to petabytes of data. Alluxio can make Spark even more effective, in both on-premise and public cloud deployments. Alluxio bridges Spark applications with various storage systems and further accelerates data-intensive applications. In this talk, we briefly introduce Alluxio and present different ways Alluxio can help Spark jobs. We discuss best practices for using Alluxio with Spark, including RDDs and DataFrames, as well as on-premise and public cloud deployments.
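One of the basic patterns the talk covers, reading and writing Spark data through Alluxio, can be sketched as follows. It assumes the Alluxio client jar is on Spark's classpath and an Alluxio master at alluxio-master:19998; both the address and the paths are placeholders.

```python
# Minimal sketch of Spark I/O through Alluxio's alluxio:// scheme.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("alluxio-demo").getOrCreate()

# Reading through alluxio:// lets Alluxio serve hot data from memory
# while transparently fetching cold data from the under-store (S3,
# HDFS, etc.) that Alluxio mounts.
df = spark.read.parquet("alluxio://alluxio-master:19998/data/events")

result = df.groupBy("event_type").count()

# Writing back through Alluxio keeps the result memory-resident for the
# next job instead of paying the full cost of the underlying storage.
result.write.mode("overwrite").parquet(
    "alluxio://alluxio-master:19998/data/event_counts")
```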
Check out our list of the top 10 geekiest things you can do in Las Vegas while at NetApp Insight. Las Vegas has plenty of delightful entertainment options, even for a self-professed geek like yourself.
Why do you work in IT in the 21st century? We asked NetApp IT employees from all corners of the world to share why they like to come to work each day, and compiled their answers into an infographic.
Apache Hadoop has gained considerable attention from the enterprise IT community as a data analytics alternative to traditional BI systems and data warehousing. And while this is not the only alternative currently available, it has become highly visible.
However, with heightened visibility comes heightened scrutiny. Hadoop’s shortcomings have also become more visible to enterprise IT administrators who have expressed concern over data integrity, system resiliency, ease of use, and maintainability. Now, a growing number of enterprise IT‐centric vendors are responding to the opportunity to offer a Hadoop‐based data analytics solution that conforms to the demands of a production data center environment. Here we review one such solution that has resulted from a partnership between NetApp and Cloudera, the commercial face of Apache Hadoop.
Oracle Database Consolidation with FlexPod on Cisco UCS, by NetApp
Cisco and Oracle, as technology front-runners, provide you the tools you need to optimize your Oracle environments! John McAbel, Senior Product Manager - Oracle Solutions on UCS at Cisco Systems, explains how NetApp and Cisco are providing a flexible infrastructure that helps prepare organizations for today, and for future business growth and change.
Cloud Data Management at Australia's Largest Software Company - Session Sponso..., by Amazon Web Services
AWS Cloud services are known to scale automatically and easily, though managing the data sets created by the applications that use these services doesn't. In fact, distributed and disparate types of production, test, development and customer data, at volume, regularly create operational friction and escalate costs at scale. Learn how Australia's largest software company overcame these challenges and leveraged an advanced cloud data management platform to maximise control over data placement, performance and privacy, as well as create a better experience for its SaaS customers.
Speakers: Corey Adolphus, Hybrid Cloud Architect, NetApp & Darryn Schafferius, Systems Engineer, NetApp
AWS re:Invent 2016: Getting Started with the Hybrid Cloud: Enterprise Backup ..., by Amazon Web Services
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing Backup and Recovery processes, achieving fast, simple wins that demonstrate the scale and flexibility of cloud services for storage. Services mentioned: S3, Glacier, Snowball, 3rd party partners, Storage Gateway, and cloud data migration services.
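The core of that backup pattern can be sketched with boto3, AWS's Python SDK: upload backup sets to S3 and let a lifecycle rule tier them to Glacier. This is a minimal sketch, not the session's material; the bucket name, prefix and retention periods are placeholders.

```python
# Hedged sketch: back up to S3, then age data into Glacier via lifecycle.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # placeholder

# Ship a local backup file to S3.
s3.upload_file("backup-2016-11-28.tar.gz", BUCKET,
               "nightly/backup-2016-11-28.tar.gz")

# Transition backups older than 30 days to Glacier; expire after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-nightly-backups",
            "Filter": {"Prefix": "nightly/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```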
2014 Global Trend Forecast (Technology, Media & Telecoms), by CM Research
In this report, the third volume in our "Global TMT Trend Forecast" series, we identify the major disruptive technologies that we will see in 2014 and predict how they will impact the world’s largest technology, media and telecom (TMT) companies.
Inside, we split the global TMT sector into 17 subsectors (e.g. connected devices, consumer electronics, semiconductors, e-commerce, social media, software, telecom operators, etc.) and examine how emerging technology themes will impact each sector, highlighting the likely winners and losers. Behind many of the themes mentioned in this report we have published in-depth research reports supporting our thinking. Here, we bring all these themes together. Our objective is to offer investors and industry executives a comprehensive trend forecast for the global TMT sector over the next 12 months.
If you only read one TMT Trends report this year, make sure it is this one.
Similar to Using the Cloud to Create a Truly All-Flash Data Center
Object storage promises many things: unlimited scalability, both in terms of capacity and file count; low-cost but highly redundant capacity; and excellent connectivity to legacy NAS. But despite these promises, object storage has not caught on in the enterprise like it has in the cloud. It seems that, for the enterprise, object storage just isn't a good fit. The problem is that most object storage systems' starting capacities are too large. And while connectivity to legacy NAS systems is available, seamless integration is not. Can object storage be sized so that it is a better fit for the enterprise?
Storage Switzerland's founder and lead analyst, George Crump, and Cloudian's Chief Marketing Officer, Paul Turner, describe the benefits of object and cloud storage, and also explain how the two can work together to solve your data problem once and for all. In addition, they cover specific next steps to begin implementing a hybrid cloud storage solution in your data center.
Webinar: Performance vs. Cost - Solving The HPC Storage Tug-of-War, by Storage Switzerland
The HPC storage performance tier is well defined: scale-out solid state storage systems. But the capacity tier is up for debate. Should you use a high end NAS file system or make the switch to object storage? More importantly: How do you move data from the performance tier to the capacity tier without placing additional burden on already overworked IT personnel?
We answer these questions and provide designs that solve the HPC storage tug-of-war in our webinar with Caringo. Listen as experts on HPC, NAS and object storage discuss the HPC storage challenge, debate the potential solutions and provide guidance on how to create the right architecture.
Webinar: 4 Ways to Improve NetApp Storage Performance Without Replacing It, by Storage Switzerland
New on demand webinar with Storage Switzerland Lead Analyst George Crump and Avere Systems Director Chris Bowen. In this webinar, George and Chris discuss why NAS storage performance is so critical, how to balance storage performance and storage capacity, and four ways to improve storage performance without replacing your existing NAS system.
Webinar: End NAS Sprawl - Gain Control Over Unstructured Data, by Storage Switzerland
The key to ending NAS Sprawl is to fix the file system so it can offer cost effective, scalable, high performance storage. In this webinar Storage Switzerland Lead Analyst George Crump, Quantum VP of Global Marketing Molly Rector, and the Quantum StorNext Solution Marketing Senior Director Dave Frederick discuss the challenges facing the typical scale-out storage environment and what IT professionals should be looking for in solutions to eliminate NAS Sprawl once and for all.
Webinar: Overcoming the Top 3 Challenges of the Storage Status Quo, by Storage Switzerland
Between 2010 and 2020, IDC predicts that the amount of data created by humans and enterprises will increase 50x. Legacy network attached storage (NAS) systems can't meet the unstructured data demands of the mobile workforce or distributed organizations. In this webinar, George Crump, Lead Analyst at Storage Switzerland and Brian Wink, Director of Solutions Engineering at Panzura expose the hidden gotcha's of the storage status quo and explore how to manage unstructured data in the cloud.
Webinar: Hyperconvergence is Broken, Learn How to Fix it! by Storage Switzerland
While hyperconverged infrastructures (HCI) offer rapid startup, they lock organizations into a specific set of vendor compute, storage, networking and hypervisor configurations. And as the infrastructure scales, it becomes increasingly difficult to deliver workload-specific levels of performance and data protection.
Join Storage Switzerland and Cloudian in this on demand webinar where we discuss the advantages of object storage over NAS, the problems with converting from NAS to object storage and how to overcome those problems.
In this presentation from the recent AWS Oil & Gas event in Aberdeen, Franz Esser from AWS Partner Eurotech discusses the solutions that Eurotech has delivered to common challenges faced by organisations operating in the oil and gas sector.
Webinar: Getting Beyond Flash 101 - Flash 102 Selecting the Right Flash Array, by Storage Switzerland
Join Storage Switzerland and Data Direct Networks (DDN) for this on demand webinar: "Getting Beyond Flash 101 - Flash 102 Selecting the Right Flash Array". We discuss the different types of flash storage and compare them, why vendors want to replace your SAN instead of enhance it, and what you can do to not only protect your current storage investments but also prepare a path to the future.
Join us for our on demand webinar where Storage Switzerland and Tegile Systems discuss how the acquisition and operating costs of flash make it feasible to build a private cloud that is responsive to the needs of the business and cost effective.
Webinar: Which Storage Architecture is Best for Splunk Analytics? by Storage Switzerland
We discuss the pros and cons of the three most common storage architectures for Splunk, enabling you to decide which makes the most sense for your organization.
1. Leverage existing storage resources
2. Deploy a cloud storage and SaaS solution
3. Deploy a hybrid, Splunk-ready solution
Webinar: Cloud Storage: The 5 Reasons IT Can Do it Better, by Storage Switzerland
In this webinar learn the five reasons why a private cloud storage system may be more cost effective and deliver a higher quality of service than public cloud storage providers.
In this webinar you will learn:
1. What Public Cloud Storage Architectures Look Like
2. Why Public Providers Chose These Architectures
3. The Problem With Traditional Data Center File Solutions
4. Bringing Cloud Lessons to Traditional IT
5. The Five Reasons IT can Do it Better
Agenda and objectives: 1) To show you how to spot an Aspera opportunity; 2) To outline the Aspera portfolio (a sales overview, not technical); 3) To look at the Aspera opportunity from SharePoint; 4) Summary / Q&A / close – but interaction is welcomed throughout.
Webinar: Overcoming the Storage Challenges Cassandra and Couchbase Create, by Storage Switzerland
NoSQL databases like Cassandra and Couchbase are quickly becoming key components of the modern IT infrastructure. But this modernization creates new challenges, especially for storage in the broad sense. In-memory databases perform well when there is enough memory available; however, when data sets get too large and need to access storage, application performance degrades dramatically. Moreover, even if enough memory is available, persistent client requests can bring the servers to their knees.
Join Storage Switzerland and Plexistor where you will learn:
1. What are Cassandra and Couchbase?
2. Why are organizations adopting them?
3. What storage challenges do they create?
4. How organizations attempt to work around these challenges.
5. How to design a solution to these challenges instead of a workaround.
Brian Brownlow is an experienced senior analyst programmer for Mayo Clinic. He gave a workshop presentation at the 2014 BDPA Technology Conference on the topic 'Big Data Implementation - Mayo Clinic Case Study'. This presentation shows part of the Mayo Clinic story as it embarks on an exploration of the application of 'Big Data' technologies. 'Big Data' is seen as one set of tools that can be used to enhance medical research, medical education and practice management. Mayo Clinic is always searching for better, faster and cheaper ways to use its data to improve patient care and sustain financial outcomes in a challenging reimbursement environment. Our approach uses several components that are open source and combines them with data from various sources to provide information to decision makers in near real time. We have created a center of 'Big Data' excellence using in-house staff and vendor engagements. 'Big Data' is one element of our Enterprise Data Trust framework.
Even though users and application owners are demanding it, the Always-On Data Center seems unrealistic to most IT professionals. Overcoming the cost and complexity of an Always-On environment while delivering consistent results is almost too much to ask. But the reality is that data centers of all sizes can affordably meet this expectation. The Always-On environment requires a holistic approach, counting on a highly virtualized infrastructure, flexible data protection software and purpose built protection storage.
Listen in as experts from Storage Switzerland, Veeam and ExaGrid architect a data availability and protection infrastructure that can meet and even exceed the Always-On expectations of an Always-On organization.
SpringPeople - Introduction to Cloud Computing, by SpringPeople
Cloud computing is no longer a fad that is going around. It is for real and is perhaps the most talked-about subject. Various players in the cloud ecosystem have provided definitions that are closely aligned to their sweet spot – be it infrastructure, platforms or applications.
This presentation will expose participants to a variety of cloud computing techniques, architectures and technology options, and in general will familiarize them with cloud fundamentals in a holistic manner, spanning dimensions such as cost, operations and technology.
Similar to Using the Cloud to Create a Truly All-Flash Data Center (20)
Scaling Security Workflows in Government Agencies, by Avere Systems
For most federal agencies dealing with increased security threats, limiting machine-data collection is not an option. But faced with finite IT budgets, few agencies can continue to absorb the high costs of scaling high-end network attached storage (NAS) or moving to and expanding a block-based storage footprint. During this webcast, you’ll learn about more cost-effective solutions to support large-scale machine-data ingestion and fast data access for security analytics.
You’ll learn about:
- The common challenges organizations face when scaling security workflows
- Why a high-performance cache works to solve these issues
- How to integrate cloud into processing and storage for additional scalability and efficiencies
Hedge Fund IT Challenges Financial Survey, by Avere Systems
This survey highlights results of a recent Avere Systems Survey capturing challenges that hedge fund IT managers are experiencing in an era of constant and rapid change.
Cloud Bursting 101: What to do When Cloud Computing Demand Exceeds Capacity, by Avere Systems
Slides from live webinar hosted on February 16, 2017.
Deploying applications locally and bursting them to the cloud for compute may seem difficult, especially when working with high-performance, critical information. However, using cloud bursts to offset peaks in demand can bring big benefits, and kudos from organizational leaders always looking to do more with less.
After this short webinar, you’ll be ready to:
- Explain what cloud bursting is and what workloads it is best for
- Identify efficiencies in applying cloud bursting to high-performance applications
- Understand how cloud computing services access your data and consume it during burst cycles
- Share three real-world use cases of companies leveraging cloud bursting for measurable efficiencies
- See a demonstration of how it works
Presenters will build an actionable framework in just thirty minutes and then take questions.
Solving enterprise challenges through scale out storage & big compute final, by Avere Systems
Google Cloud Platform, Avere Systems, and Cycle Computing experts will share best practices for advancing solutions to big challenges faced by enterprises with growing compute and storage needs. In this “best practices” webinar, you’ll hear how these companies are working to improve results that drive businesses forward through scalability, performance, and ease of management.
The slides were from a webinar presented January 24, 2017. The audience learned:
- How enterprises are using Google Cloud Platform to gain compute and storage capacity on-demand
- Best practices for efficient use of cloud compute and storage resources
- Overcoming the need for file systems within a hybrid cloud environment
- How to eliminate latency between cloud and data center architectures
- How to best manage simulation, analytics, and big data workloads in dynamic environments
- Market dynamics drawing companies to new storage models over the next several years
Presenters communicated a foundation to build infrastructure to support ongoing demand growth.
Deliver Best-in-Class HPC Cloud Solutions Without Losing Your Mind, by Avere Systems
While cloud computing offers virtually unlimited capacity, harnessing that capacity in an efficient, cost effective fashion can be cumbersome and difficult at the workload level. At the organizational level, it can quickly become chaos.
You must make choices around cloud deployment, and these choices could have a long-lasting impact on your organization. It is important to understand your options and avoid incomplete, complicated, locked-in scenarios. Data management and placement challenges make having the ability to automate workflows and processes across multiple clouds a requirement.
In this webinar, you will:
• Learn how to leverage cloud services as part of an overall computation approach
• Understand data management in a cloud-based world
• Hear what options you have to orchestrate HPC in the cloud
• Learn how cloud orchestration works to automate and align computing with specific goals and objectives
• See an example of an orchestrated HPC workload using on-premises data
From computational research to financial back testing, and research simulations to IoT processing frameworks, decisions made now will not only impact future manageability, but also your sanity.
Moonbot Studios took flight to the cloud when resources didn't match deadlines. To offset workload peaks and overcome other operational challenges, Moonbot deployed the Avere vFXT to gain flexibility and affordability without making large capital investments.
Building a Just-in-Time Application Stack for Analysts, by Avere Systems
Slide presentation from Webinar on February 17, 2016.
People in analytical roles are demanding more and more compute and storage to get their jobs done. Instead of building out infrastructure for a few employees or a department, systems engineers and IT managers can find value in creating a compute stack in the cloud to meet the fluctuating demand of their clients.
In this 45-minute webinar, you’ll learn:
- How to identify the right analytical workloads
- How to create a scalable compute environment using the cloud for analysts in under 10 minutes
- How to best manage costs associated with the cloud compute stack
- How to create dedicated client stacks with their own scratch space as well as general access to reference data
Health systems departments, research & development departments, and business analyst groups all face silos of these challenging, compute-intensive use cases. By learning how to quickly build this flexible workflow that can be scaled up and down (or off) instantly, you can support business objectives while efficiently managing costs.
Moonbot Studios Shoots for the Cloud to Meet Deadlines and Manage Costs
Threatened by deadlines for Academy award submissions, Moonbot Studios faced a shortage of rendering capacity while working on Taking Flight, its newest animated short film, and other important projects. As a small studio with a matching budget, the team did what it does best—it got creative and solved the problem with what they first called “magic.”
In this webinar, the Moonbot team will tell its tale of sending its rendering capacity to Google Compute Engine and how they defied networking odds by caching data close to the animators with an Avere vFXT. Hear Moonbot’s pipeline supervisor tell how they turned cloud data center distance into a non-issue, met deadlines, and gained quantitative benefits that sparked energy in this small team of creative aviators.
In this session, you will learn:
•What drove the Moonbot Studios to move to the cloud
•How they moved complex renders to Google Compute Engine, overcoming data access roadblocks
•Measurable results including speed, economics, flexibility, and creative freedom
The Moonbot Studios flight to the cloud will be supported by Google Cloud Platform and Avere Systems for a complete overview of how the technologies help bring new ideas to life.
Three Steps to Modern Media Asset Management with Active Archive, by Avere Systems
From Dec. 3 2015 Webinar
Digital media assets are extremely valuable, providing organizations a collection of tools to reach and engage. Making this growing collection of large files manageable over time can challenge even the most seasoned IT professionals.
In this webinar, we'll look at the workflow in place at large global broadcasters, then discuss the best practices identified while building a modern media asset management system with active archive support. Operations directors, program managers, systems architects, and media technology professionals will learn:
- How the right tools can help bring order to "big data" assets, with methodologies for implementation that save media professionals valuable time
- How private cloud archive reduces cost while improving accessibility and security
- How to create a high performance active archive that masks network latency between media asset management software and cloud archives
Slides for October 15 webinar with ESG Analyst Scott Sinclair and Avere Systems Engineer Bernie Behn reviewing ESG lab results that tested the Avere vFXT Edge filer on Google Cloud Platform.
Scientific Computing in the Cloud: Speeding Access for Drug Discovery, by Avere Systems
Scientific computing on the cloud lured scientists at H3 Biomedicine in Cambridge, Massachusetts, with the promise of near-limitless compute capacity potential of Amazon EC2. Today, scientists run a wide array of applications in the cloud that contribute to the integration of human cancer genomics with chemistry and biology to discover a library of specialty cancer treatment drugs.
In this webinar, you'll hear how this organization has built cloud infrastructure in a way that reduces latency and gives them storage flexibility, and does so in a way that helps them save money and support their business strategy. The H3 Biomedicine story will be supported by a look at the cloud technology and AWS services that have enabled application migration to the cloud in a hybrid IT environment.
Build a Cloud Render-Ready Infrastructure, by Avere Systems
Webinar presented September 8, 2015
Rendering applications place high-demands on both compute and storage in visual effects infrastructures. With peaks and valleys in the workflow being the norm, leading VFX creators look to the cloud to build infrastructures that provide flexibility to meet ongoing IT management challenges. In this webinar, you’ll hear from industry innovators about the advantages of cloud rendering and how VFX IT leaders are designing this on-demand solution with Avere Systems and Google Cloud Platform. Designed for CTOs, information systems directors, systems engineers and administrators, the content will discuss the initial steps and technical insights of a render-ready hybrid cloud IT architecture.
4 C's for Using Cloud to Support Scientific Research, by Avere Systems
While cost is a primary "c" driving the adoption of object-based cloud solutions in the life sciences, compute, capacity, and collaboration may all be bigger incentives. In this webinar, we'll examine how to use an Avere Hybrid Cloud NAS infrastructure to gain big benefits in areas like genomics research, personalized medicine, drug discovery, imaging, and other data analysis applications.
• Compute - Building production environments in the compute cloud without rewriting existing applications
• Capacity - Modernizing storage archives and disaster recovery by adding object storage for durability while leveraging existing on-premises NAS
• Collaboration - Using the cloud to safely and securely share data globally
• Cost - Using cloud to lower overall costs to keep pace with fast-growing demands of research initiatives
Avere & AWS Enterprise Solution with Special Bundle Pricing Offer, by Avere Systems
In this webinar, Sabina Joseph, AWS, and Mark Eastman, Avere, discuss the enterprise cloud NAS solution available using Avere FXT Edge Filers and Amazon Cloud Services. Special limited-time bundle pricing is available and will be reviewed at the end.
While organizations understand that cloud gives them benefits like cost savings, the ability to keep up with exponential data growth, and enhanced productivity, it is equally difficult for them to ignore its integration challenges, like an unfamiliar interface, performance degradation and user disruption.
Take a look at this info graphic to understand how Avere’s Enterprise Hybrid Cloud NAS provides the technology for a simple, flexible and cost-effective infrastructure that combines the best of NAS and Cloud.
This infographic demonstrates how the growth of data in the enterprise will require network-attached storage to integrate cloud provider services. Avere Systems' Cloud NAS solution allows data to easily use cloud storage as part of the NAS environment.
Infographic showing the benefits of adding Avere Cloud NAS to enterprise architecture to take advantage of AWS S3 or Glacier cloud services for storage.
Optimizing the Upstreaming Workflow: Flexibly Scale Storage for Seismic Processing, by Avere Systems
Of all the applications in the oil and gas industry's upstream workflow, those involved in seismic processing place the greatest demand on storage. Pre-stack and post-stack migration, velocity modeling, and other processing steps are challenging even the highest performance NAS systems. In this Webinar, we discuss meeting these demands with accelerated performance, reduced cost, and a streamlined workflow.
Webinar: Untethering Compute from Storage, by Avere Systems
Enterprise storage infrastructures are gradually sprawling across the globe and consumers of data increasingly require access to remote storage resources. Solutions for mitigating the pain associated with this growth are out there, but performance varies. This Webinar will take a look at these challenges, review available solutions, and compare tests of performance.
Smart TV Buyer Insights Survey 2024, by 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Accelerate your Kubernetes clusters with Varnish Caching, by Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. I have also seen many times how developers implement features on the front-end just by following the standard rules for a framework, think that this is enough to successfully launch the project, and then the project fails. How do you prevent this, and what approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview, by Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
The Art of the Pitch: WordPress Relationships and Sales, by Laura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova..., by Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I was wondering, as an 'infrastructure container Kubernetes guy', how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, covering what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
JMeter webinar - integration with InfluxDB and Grafana, by RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
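As a taste of the integration demonstrated above, here is a hedged sketch that queries JMeter results out of InfluxDB with Python, the same kind of query a Grafana panel would issue. It assumes JMeter's Backend Listener is writing to a database named jmeter; the measurement, field and tag names follow common Backend Listener defaults but may differ per setup.

```python
# Hedged sketch: pulling JMeter metrics from InfluxDB (pip install influxdb).
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="jmeter")

# Average response time per minute for one transaction over the last hour,
# grouped into the 1-minute buckets a Grafana time-series graph would plot.
result = client.query(
    "SELECT MEAN(avg) FROM jmeter "
    "WHERE transaction = 'login' AND time > now() - 1h "
    "GROUP BY time(1m)"
)

for point in result.get_points():
    print(point["time"], point["mean"])
```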
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo..., by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality, by Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Essentials of Automations: Optimizing FME Workflows with Parameters, by Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
1. Using The Cloud To Create a
True All-Flash Data Center
In this webinar learn about:
1. The All-Flash Array Challenge
2. The Hybrid Array Challenge
3. The Cloud Answer
4. 3 Steps to a True All-Flash Data Center
Join us LIVE on: Tuesday April 28th, 2014, 1:00pm EDT and 10:00am PDT
Pre-register and get a copy of Storage Switzerland's White Paper: "How Cloud can enable the All-Flash Data Center"
2. Logistics
● Be on the look-out for polling questions
● You may ask questions at any time during the presentation by using the Q&A box
○ On-Demand viewers: please tweet us questions @averesystems #allflash
● At the end of the presentation please take a moment to provide feedback and rate today's webinar
3. Our Speakers
Ron Bianchini is the President, CEO, & Co-Founder of Avere Systems and has a long record of accomplishments in building and leading successful companies that deliver breakthrough technologies. Prior to Avere Systems, Ron was senior vice president at NetApp, CEO and founder of Spinnaker Networks, VP of product architecture at FORE Systems, and co-founder of Scalable Networks; he started his career as a professor at Carnegie Mellon University in Pittsburgh, Pennsylvania.
George Crump is the founder of Storage Switzerland, the leading storage analyst firm focused on the subjects of big data, solid state storage, virtualization, cloud computing and data protection. He is widely recognized for his articles, white papers, and videos on such current approaches as all-flash arrays, deduplication, SSDs, software-defined storage, backup appliances, and storage networking. He has 25 years of experience designing storage solutions for data centers across the US.
4. Who Is Storage Switzerland?
● Analyst firm focused on storage, cloud and virtualization
● Knowledge of these markets is gained through product testing and interaction with end users and suppliers
● The results of this research can be found in the articles, videos, webinars, product analyses and case studies on our web site: http://storageswiss.com
5. Company Overview
• Mission
– Reinvent storage with Hybrid Cloud NAS that provides complete flexibility to deploy and scale compute and storage in the cloud or on premises, wherever it makes most sense.
• Founders
– Ron Bianchini, CEO: NetApp, Spinnaker Networks, FORE, Scalable, CMU Prof.
– Mike Kazar, CTO: NetApp, Spinnaker Networks, IBM, Transarc, CMU PhD
• Who Uses Avere
– Vertical industries: media, tech/quant/science apps, web, MSP
– Horizontal cloud apps: cloud bursting, file serving, active archive
6. Polling Question: How Are You Using Flash?
A) I am using flash in my servers
B) I have a hybrid flash array
C) I have an all-flash array
D) I would like to use flash but it is too expensive
7. Agenda
• The Types of On-Premises Flash Solutions
• Pros / Cons of On-Premises Flash Solutions
• Using the Cloud to Eliminate the “Cons”
• Overcoming Cloud Latency
• Q&A
8. Polling Question: Where Are You On Your Cloud Journey?
• Our data in the cloud? No way!
• We are just now starting to consider cloud for storage/compute
• We use the cloud for backup/archive
• We use the cloud for production data
9. The All-Flash Data Center
Why go all-flash anyway?
• All-Flash allows for denser configuration of storage and servers
• More IOPS per GB reduces the number of drives needed (see the sketch after this list)
• Consistent high performance reduces storage management time
• Increases user and customer satisfaction
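The drive-count claim is easy to sanity-check with back-of-the-envelope numbers. The sketch below is ours, not the presenters'; the per-device IOPS figures are rough, order-of-magnitude assumptions.

# Illustrative math behind "more IOPS per GB reduces the number of drives".
import math

TARGET_IOPS = 100_000
HDD_IOPS = 200      # roughly what a 15K RPM disk sustains on small random I/O
SSD_IOPS = 50_000   # a conservative figure for an enterprise SSD

print(math.ceil(TARGET_IOPS / HDD_IOPS), "HDDs needed")  # -> 500
print(math.ceil(TARGET_IOPS / SSD_IOPS), "SSDs needed")  # -> 2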
11. Where has All-Flash Been Successful?
• Databases
• Virtual Servers
• Virtual Desktops
• Big Data Analytics Processing
(Slide icons: Server Virtualization, Desktop Virtualization, Databases)
13. Where has All-Flash Been Successful?
These workloads share four traits:
• Consistently Active
• Typically Random I/O
• Moderate Ingest Rate
• Slowest Capacity Growth
15. Where has All-Flash Not Been Successful?
• Backup
• Archive
• Machine Data
• Sensor Data
These workloads share four traits:
• Inactive For Long Periods of Time
• Rarely I/O Demanding
• Potentially High Ingest Rate
• Largest Capacity Growth
18. Reality Strikes The All-Flash Data Center
The Real Data Center Needs Two Types of Storage
(Slide icons: Server Virtualization, Desktop Virtualization, Databases)
Data Needs Change
19. The All-Flash Data Center
Two Questions:
A. What Should That Second Storage Tier Be, and Where Should It Be Located?
B. What Should Handle The Movement of Data Between The Two Tiers?
20. The All-Flash Data Center
What And Where Should The Second Tier Be?
A. On-Premises Scale-Out NAS?
B. Private Cloud (Object Storage)?
C. Public Cloud?
22. The All-Flash Data Center
How Do We Facilitate Data Movement?
A) Human Interaction
B) Automation (see the sketch below)
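To make "Automation" concrete, here is a hedged sketch of the simplest possible policy mover: age files out of a fast tier into S3-compatible object storage by last-access time. This is our illustration of the general idea, not Avere's mechanism; the mount point, bucket name, and threshold are hypothetical, and boto3 (the AWS SDK for Python) is assumed to be installed.

import os
import time

import boto3  # AWS SDK for Python (assumed installed)

FAST_TIER = "/mnt/flash"       # hypothetical flash-backed mount
BUCKET = "cold-tier-example"   # hypothetical object-storage bucket
AGE_LIMIT = 90 * 24 * 3600     # demote anything untouched for 90 days

s3 = boto3.client("s3")

def tier_cold_files():
    now = time.time()
    for dirpath, _dirs, files in os.walk(FAST_TIER):
        for name in files:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > AGE_LIMIT:
                key = os.path.relpath(path, FAST_TIER)
                s3.upload_file(path, BUCKET, key)  # copy to the object tier
                os.remove(path)                    # reclaim flash capacity

if __name__ == "__main__":
    tier_cold_files()

A real data mover also needs a way to bring data back on access, which is exactly the gap the caching approach on the next slide is meant to fill.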
23. Avere – Reinventing Storage
Traditional NAS (Challenges) | NAS Optimization (Benefits) | Hybrid Cloud NAS (Benefits)
Poor Performance Scaling | Unlimited Performance Scaling | Unlimited Performance with the Cloud
High CAPEX & OPEX | Lower TCO (fewer disks & less power) | Lowest TCO (less admin & fewer data centers)
Management Silos | Consolidated NAS (GNS) | Consolidated Object & NAS (GNS)
Global Access via Complex Replication | Global Access via WAN | Global Access via Cloud
(Architecture diagram: client workstations and a compute farm access FXT Series Edge Filers over NFS & CIFS; the Edge Filers connect across the WAN via FlashCloud™ to public object storage (Amazon & Google), private object storage (Amplidata & Cleversafe), and legacy NAS.)
24. Hybrid Cloud is Attractive - BUT Presents Challenges
(Diagram: on-prem compute and on-prem storage (NAS and object) reach a storage cloud of near-infinite capacity and a compute cloud of near-infinite performance through single-node gateways, with 10-100 ms or more of latency to remote storage.)
Cloud challenges:
1. Disk storage is slow
2. Unfamiliar object-based interface
3. High latency to remote storage (quantified in the sketch below)
4. No easy on-ramp to cloud storage
5. Cloud gateways do NOT scale
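Challenge 3 is worth quantifying. A synchronous client completes at most one request per round trip, so WAN latency caps per-thread throughput, and a cache that serves most requests locally restores it. The numbers below are illustrative assumptions, not measurements:

def max_sync_iops(latency_ms):
    # A synchronous client finishes at most one request per round trip.
    return 1000.0 / latency_ms

for latency in (0.5, 10.0, 100.0):  # local flash vs. near cloud vs. far cloud
    print(f"{latency:6.1f} ms -> {max_sync_iops(latency):7.0f} IOPS per thread")

def effective_latency(hit_rate, local_ms, remote_ms):
    # Average latency when hit_rate of requests are served from a local cache.
    return hit_rate * local_ms + (1.0 - hit_rate) * remote_ms

print(effective_latency(0.95, 0.5, 50.0))  # ~3.0 ms at a 95% cache hit rate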
25. Ultimate Hybrid Cloud Flexibility
Virtual FXT Cluster (in the compute cloud):
• Scalable NAS architected for the compute cloud
• Auto-move active data to RAM & SSD tiers
• Hide latency to on-prem and in-cloud storage
• For cloud bursting and permanent IT infrastructure
Physical FXT Cluster (on premises):
• Scalable NAS performance
• Low latency
• Save cost, store data where it makes most sense
• Global namespace
• Data mobility
(Diagram: a Virtual FXT cluster serving a virtual compute farm in the compute cloud (near-infinite performance), and a Physical FXT cluster serving on-prem compute and storage (NAS and object), both backed by storage-cloud buckets 1 through n (near-infinite capacity).)
26. Ultimate Hybrid Cloud – All-Flash Data Center
Cloud repository:
• Leverage disk-based cloud storage
• Hide latency to on-prem and in-cloud storage
All-Flash Data Center:
• Scalable NAS performance
• Low latency
• Save cost, store data where it makes most sense
• Global namespace
• Data mobility
(Diagram: a Physical FXT cluster fronts on-prem compute and on-prem storage and extends into storage-cloud buckets 1 through n (near-infinite capacity), alongside a compute cloud of near-infinite performance.)
27. Avere Benefits
Customer Needs | Avere Delivers
Low-latency file access AND low-cost capacity scaling | Edge-Core architecture
Familiar NFS & SMB/CIFS interfaces | Edge Filer local termination of file system protocols
Support for NAS and object repositories | Native NAS support and FlashCloud for object store
Manage as a single pool of storage | GNS, FlashMove®
Scalable performance and HA | Scale-out clustering
On-prem and in-cloud flexibility | Physical and virtual solutions
Data protection | Cloud snapshots, FlashMirror®
High security | AES-256 encryption
Efficiency | Compression
Lowest TCO | Support for Amazon, Google, Amplidata, Cleversafe & legacy NAS
28. Comparing 1,000,000 IOPS Solutions*
(Chart: throughput in IOPS vs. latency/ORT in ms, with Avere at $2.3/IOPS, NetApp at $5.1/IOPS, and EMC Isilon at $10.7/IOPS.)
Product | Throughput (IOPS) | Latency/ORT (ms) | List Price | $/IOPS | Disk Quantity | Rack Units | Cabinets | Config
Avere FXT 3800 | 1,592,334 | 1.24 | $3,637,500 | $2.3 | 549 | 76 | 1.8 | 32-node cluster, cloud storage config
NetApp FAS 6240 | 1,512,784 | 1.53 | $7,666,000 | $5.1 | 1,728 | 436 | 12 | 24-node cluster
EMC Isilon S200 | 1,112,705 | 2.54 | $11,903,540 | $10.7 | 3,360 | 288 | 7 | 140-node cluster
*Comparing top SPEC SFS results for a single NFS file system/namespace. See www.spec.org/sfs2008 for more information.
(Diagram: the Avere 32-node FXT cluster sits in front of a core filer, which can be NAS, public object, or private object.)
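The $/IOPS column is just list price divided by measured SPEC SFS throughput, which is easy to re-derive from the table's own figures:

# Recomputing the slide's $/IOPS column from its list prices and throughputs.
systems = {
    "Avere FXT 3800":  (3_637_500, 1_592_334),
    "NetApp FAS 6240": (7_666_000, 1_512_784),
    "EMC Isilon S200": (11_903_540, 1_112_705),
}
for name, (price, iops) in systems.items():
    print(f"{name}: ${price / iops:.1f} per IOPS")
# -> about $2.3, $5.1, and $10.7, matching the slide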
29. Avere Cloud NAS – SPEC SFS Results*
• Avere is the first and only vendor to provide low-latency, scalable NAS performance for cloud storage
– Performance with cloud storage is equivalent to that with legacy NAS (note the ZFS column below)
• Cloud storage provides infinitely scalable capacity with the lowest cost, simplest management, and highest reliability
Metric | Avere + Amazon S3 | Avere + Cleversafe | Avere + Amplidata | Avere + ZFS (NFS)
Throughput (IOPS) | 180,141 | 180,394 | 180,229 | 180,538
Latency/ORT (ms) | 0.86 | 0.89 | 0.95 | 0.88
Avere Config | 3-node FXT 3800 cluster | 3-node FXT 3800 cluster | 3-node FXT 3800 cluster | 3-node FXT 3800 cluster
Core Filer Config | Amazon S3 storage service, eleven 9's durability | 2x Accesser + 9x Slicestor nodes, 5-of-9 erasure coding | 3x Controller + 8x Storage nodes, 20/4 durability | OpenZFS on commodity storage server
Capacity (TB) | Infinite | 220 | 186 | 22
*See public results at spec.org/sfs2008/results/sfs2008.html for more info.
30. FXT Series Product Line
Hardware | n1-highmem-8 (Google) | r3.2xlarge (AWS) | r3.8xlarge (AWS) | FXT 3200 | FXT 3850 | FXT 4850
DRAM (GB) | 52 | 61 | 244 | 96 | 288 | 288
SSD (TB) | 4 | 4 | 8 | - | 0.8 | 4.8
SAS (TB) | - | - | - | 4.8 | 7.8 | -
Total Capacity (TB) | 4 | 4 | 8 | 4.8 | 8.6 | 4.8
Network Bandwidth | 10GbE | High | 10GbE | 2x10GbE, 6x1GbE (physical models)
The first three are Virtual FXT models; the FXT 3200/3850/4850 are Physical FXT models.
(Chart: relative performance positioning of the virtual models (n1-highmem-8 on Google; r3.2xlarge and r3.8xlarge on AWS) and the physical models 3200, 3850, and 4850.)
• Protocols
– To Client: NFSv3 (TCP/UDP), CIFS (SMB 1.0 & 2.0); To Core Filer: NFSv3 (TCP), S3 API (see the S3 sketch after this slide)
• Clustering
– Cluster from 3 to 50 FXT nodes for performance and capacity scaling
– HA failover, mirrored writes, redundant network ports & power
• Management
– GUI, analytics, email alerts, SNMP, XML-RPC interface, policy-based management
• Licensed Software
– FlashCloud™ for Amazon, Google, Amplidata/HGST, and Cleversafe
– NAS Core (for connecting to on-prem NAS filers), FlashMove®, and FlashMirror®
(Included with all FXT models)
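Since the core-filer interface includes the S3 API, the object repository behind an Edge Filer has to serve, at minimum, the put/get pair shown below. This is a generic illustration using boto3 (assumed installed); the bucket and key names are hypothetical and not taken from the deck.

import boto3  # AWS SDK for Python (assumed installed)

s3 = boto3.client("s3")
# Write one object, then read it back: the core S3 operations a repository serves.
s3.put_object(Bucket="core-filer-example", Key="vol1/chunk-0001",
              Body=b"data flushed from the edge filer cache")
obj = s3.get_object(Bucket="core-filer-example", Key="vol1/chunk-0001")
print(obj["Body"].read())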
31. Before we get to your questions...
Recordings of this and other webinars are available on our website (no need to re-register) or through the BrightTALK app for mobile devices (just subscribe to the Avere channel).
You will find this presentation and other related downloads in the Attachments tab of the interface.