This presentation discusses how ARM-based systems can provide scalable and efficient CEPH storage solutions. It outlines how the ARM ecosystem is innovating in storage, including low-cost enterprise SMB storage and energy-efficient enterprise storage racks. Examples are given of CEPH implementations on ARM, such as a 504-node CEPH cluster using converged microservers and CEPH performance on ThunderX platforms comparable to x86 servers but with lower total cost of ownership. Overall, the presentation argues that ARM-based systems can deliver scalable, portable, and optimized intelligent flexible cloud storage using CEPH.
Intel: Optimizing Ceph Performance by Leveraging Intel® Optane™ and 3D NAND (inwin stack)
Kenny Chang (張任伯) (Storage Solution Architect, Intel)
As Solid State Drives (SSDs) become more affordable, more and more cloud providers are trying to deliver high-performance, highly reliable storage for their customers with SSDs. Ceph is becoming one of the most popular open-source scale-out storage solutions worldwide, and more and more customers want to use SSDs in Ceph to build high-performance storage solutions for their OpenStack clouds.
The disruptive Intel® Optane™ SSDs, based on 3D XPoint™ technology, fill the performance gap between DRAM and NAND-based SSDs, while Intel® 3D NAND TLC is closing the cost gap between SSDs and traditional spinning hard drives, making all-flash storage practical. In this session, we will:
1) Discuss an OpenStack storage Ceph reference design for the first Intel Optane (3D XPoint) and P4500 TLC NAND based all-flash Ceph cluster, which delivers multi-million IOPS with extremely low latency while increasing storage density at competitive dollar-per-gigabyte costs;
2) Share Ceph BlueStore tunings and optimizations, latency analysis, a TCO model, and IOPS/TB and IOPS/$ figures based on the reference architecture to demonstrate this high-performance, cost-effective solution.
OpenStack and Ceph case study at the University of Alabama (Kamesh Pemmaraju)
The University of Alabama at Birmingham gives scientists and researchers a massive, on-demand, virtual storage cloud using OpenStack and Ceph for less than $0.41 per gigabyte. This session at the OpenStack Summit was given by Kamesh Pemmaraju of Dell and John Paul of the University of Alabama. It details how the university IT staff deployed a private storage cloud infrastructure using the Dell OpenStack cloud solution with Dell servers, storage, networking and OpenStack, and Inktank Ceph. After assessing a number of traditional storage scenarios, the University partnered with Dell and Inktank to architect a centralized cloud storage platform that was capable of scaling seamlessly and rapidly, was cost-effective, and that could leverage a single hardware infrastructure for the OpenStack compute and storage environment.
Speakers:
Jeff Chu (Director of Enterprise Solutions, ARM)
Kan Yan Rong (Technical Expert in Storage and Application Technology, WDC/SanDisk)
Overview:
Jeff from ARM will provide a brief update on activities furthering Ceph on ARM, including some recent progress from ARM as well as some increased community activity. After that, Chris and Yan from Western Digital/SanDisk will present on Ceph block performance on Cavium ARM and SATA SSDs.
Calista Redmond from IBM presented this deck at the Switzerland HPC Conference.
“The OpenPOWER Foundation was founded in 2013 as an open technical membership organization that will enable data centers to rethink their approach to technology. Today, nearly 200 member companies are enabled to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. These innovations include custom systems for large or warehouse scale data centers, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for SW appliances, or advanced hardware technology exploitation. OpenPOWER members are actively pursuing all of these innovations and more and welcome all parties to join in moving the state of the art of OpenPOWER systems design forward.”
Watch the video presentation: http://insidehpc.com/2016/03/openpower-foundation/
See more talks in the Swiss Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
BrightTALK session: The right SDS for your OpenStack Cloud (Eitan Segal)
Discover the benefits of having a purpose-built SDS block system supporting your OpenStack cloud OS with all of its components: bare metal, virtual machines, and containers.
Percy Tzelnic from Dell Technologies presented this deck at the HPC User Forum in Austin.
Watch the video presentation: http://insidehpc.com/2016/09/emc-in-hpc-the-journey-so-far-and-the-road-ahead/
Learn more: http://emc.com/
Deploying Massive Scale Graphs for Realtime Insights (Neo4j)
Graph databases have been at the forefront of helping organizations manage and generate insights from data relationships, and of applying those insights in real time to drive competitive advantage. As organizations gain value from deploying graph databases, the data volumes managed are growing exponentially, pushing the limits of large-scale in-memory graph processing. Neo4j and IBM Power Systems combined forces to deliver a market-leading, scalable graph database platform capable of affordably storing and processing graphs of extremely large size and offering real-time insights, using flash and FPGA accelerators. In this session we will cover the use cases driving the need for this extremely scalable platform and how this platform offers an easy-to-deploy model for extreme-scale graph databases.
In this video from the Rice Oil & Gas Conference, Brent Gorda from ARM presents: ARM in HPC.
"With the recent Astra system at Sandia Lab (#203 on the Top500) and HPE Catalyst project in the UK, Arm-based architectures are arriving in HPC environments. Several partners have announced or will soon announce new silicon and projects, each of which offers something different and compelling for our community. Brent will describe the driving factors and how these solutions are changing the landscape for HPC."
Watch the video: https://wp.me/p3RLHQ-jXS
Learn more: https://developer.arm.com/hpc
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the 2019 UK HPC Conference, Dr. Oliver Perks from Arm presents: Arm as a Viable Architecture for HPC & AI.
"In the past two years Arm has transitioned from being a novelty research project for HPC to a viable candidate for large scale procurements. Through the advent of competitive processors, such as the Marvell ThunderX2, Arm is being taken increasingly seriously as an alternative to traditional X86 based supercomputers. Whilst the novelty lies within the architectural design, the most significant body of work has taken place in the ecosystem and applications space, ensuring a smooth transition for production scientific workloads. In this talk we will present the current status of Arm in HPC and scientific computing, and what to expect from future generations of Arm based processors. Additionally, we will cover the best practices for the adoption of Arm technology in a production HPC setting."
Watch the video: https://wp.me/p3RLHQ-kV5
Learn more: https://developer.arm.com/solutions/hpc
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Red Hat Ceph Storage: Past, Present and Future (Red_Hat_Storage)
Ceph is a massively scalable, open source, software-defined storage system that runs on commodity hardware. Get an update about the latest version of Red Hat Ceph Storage, including information about the newest features and use cases, with a particular focus on cloud storage and OpenStack. We’ll also explore the themes and directions for the roadmap for the next 12 months.
In the age of Big Data, websites running on x86 can become overwhelmed. See why you should turbocharge your LAMP stack by moving it to IBM POWER8. The reasons are plentiful.
POWER8 hardware outperforms x86, particularly when coupled with a LAMP stack. POWER8 now runs Linux natively, rather than just AIX. Ideal for intensive data processing applications and Big Data analytics, POWER8 hardware can reduce data centre footprint and power usage while providing more processing power than x86 alternatives.
See http://isi.com.au/power8-linux for more
Huawei’s requirements for the ARM based HPC solution readiness - Joshua Mora (Linaro)
Talk Title: Huawei’s requirements for the ARM based HPC solution readiness
Talk Abstract:
A high-level review of the wide range of requirements for architecting a competitive ARM-based HPC solution is provided. The review combines both industry views and Huawei's unique perspective, with the intent to communicate openly not only the alignment with and support for ongoing efforts carried out by other key ARM players, but also to brief on the areas of differentiation in which Huawei is investing towards the research, development, and deployment of homegrown ARM-based HPC solutions.
Speaker: Joshua Mora
Speaker Bio:
20 years of experience in research and development of both software and hardware for high performance computing. Currently leading the architecture definition and development of ARM-based HPC solutions, both hardware and software, all the way up to the applications (i.e. turnkey HPC solutions for the different compute-intensive markets where ARM will succeed!).
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
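To make the grid-simulation idea concrete, here is a toy DC power flow on a three-bus network in plain Python. This is a hand-rolled sketch for illustration only, not PowSyBl's actual API (its Python binding is the pypowsybl package); the network topology, susceptances, and loads are invented:

```python
# Toy DC power flow on a 3-bus network: the kind of computation a load-flow
# engine performs, written out by hand (NOT PowSyBl's API).
# Bus 0 is the slack bus; buses 1 and 2 carry loads of 0.5 and 0.3 p.u.

# Line susceptances (per unit); keys are (from_bus, to_bus)
lines = {(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0}

# Net injections at the non-slack buses (negative = load)
p = {1: -0.5, 2: -0.3}

def b_entry(i, j):
    """Entry (i, j) of the reduced susceptance matrix B'."""
    if i == j:  # diagonal: sum of susceptances of lines touching bus i
        return sum(b for (a, c), b in lines.items() if i in (a, c))
    # off-diagonal: minus the susceptance of the line between i and j
    return -sum(b for (a, c), b in lines.items() if {a, c} == {i, j})

B = [[b_entry(1, 1), b_entry(1, 2)],
     [b_entry(2, 1), b_entry(2, 2)]]

# Solve the 2x2 system B * theta = p by Cramer's rule
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
theta1 = (p[1] * B[1][1] - B[0][1] * p[2]) / det
theta2 = (B[0][0] * p[2] - p[1] * B[1][0]) / det
angles = {0: 0.0, 1: theta1, 2: theta2}  # slack angle fixed at 0

# Line flows: f_ij = b_ij * (theta_i - theta_j)
flows = {(i, j): b * (angles[i] - angles[j]) for (i, j), b in lines.items()}
for (i, j), f in sorted(flows.items()):
    print(f"line {i}-{j}: {f:+.4f} p.u.")
```

The flows balance at every bus: the power arriving at each load bus equals its demand, which is the invariant any load-flow solver must satisfy.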
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and walk you through a short journey of existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need in order to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Smart TV Buyer Insights Survey 2024 by 91mobiles (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We finished with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Xu Luo
Server Segment Manager
Akira Shimizu
Segment Marketing Manager
As the scale of compute continues to grow, we find ourselves at a time of major disruption. Not a bad disruption, but an opportunity to leverage new levels of access to deliver new services across more connected devices with significantly less latency.
To take advantage of this opportunity means delivering new levels of scalability and portability in the network enabling compatible services to be deployed whether from the data center or at the edge. And the scale will require significantly improved levels of efficiency and compute density.
This broad range of solutions addressing such a diverse set of requirements can only be delivered by the breadth of ARM and the ARM ecosystem.
Today I want to make you aware of the opportunity for increased efficiency and scale from leveraging the ARM ecosystem for CEPH, first by showing you why it will be crucial to tomorrow’s data center, and then by sharing some examples of the scale and efficiency being delivered by the ARM ecosystem…
But for those of you who may not know ARM, let me introduce you to ARM and our Ecosystem
First thing to understand is that “ARM” is thought of in many different ways depending on the topic.
ARM as a company has been around for just over 25 years now
ARM is based in the UK with a global presence
We license technology to our customers enabling them to add their IP and create the right semiconductor chips for their markets and customers
Pick a few stats...
ARM also has a very technical meaning.
It is a computer architecture… The ARM Architecture
As a RISC architecture, the ARM Architecture was defined with efficiency at the center...
And as an Ecosystem, ARM and our partners work together to jointly develop and enable new markets, products, and opportunities…
At the same time, many members of the ecosystem compete with each other
This collaboration and competition allow for companies to work together where there is little differentiation and focus their resources and investments on the key technologies which differentiate them from their competitors
In this manner there is the opportunity for success for everyone...
Together with our Connected Community®, we are breaking down barriers to innovation for developers, designers and engineers, and enabling competition and choice across technology markets. We share success with our partners….
Andrew Carnegie couldn’t have said it better…
Fundamentally ARM and our partners are driving an almost continuous pace of innovation. Innovation in business model, technology, and product development has been a part of ARM and our partners since the beginning
And as we said, our business model is dependent on partnership. Our success is dependent upon our customers’ success
And this is all built upon a foundation of energy efficiency..
Disrupting existing markets
And creating new opportunities
Now… I am sure that many of you have seen this or other similar data: IP traffic continuing to grow from 1ZB this year to 2.3ZB in 2020, and storage ballooning from 7 zettabytes today to over 44 zettabytes by 2020
But this is only part of it. Looking at the network specs for 5g and predictions about it's deployment...
...there will be a 30 fold increase in access nodes
<click>
...in the UK, the move to HD and 4k content will drive up BW requirements by 22x with more than 75% of the traffic being video
<click>
...To support IoT and access and control of real time sensors, the 5g specs are for a maximum of 1 ms end to end latency. This is a major driver for more compute at the edge of the network.
<click>
...And it is the massive volume of devices, not just the growing number of consumer devices we all have but also those IoT nodes throughout our infrastructure that is driving the requirement for massive increases in the density of connections
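As a quick back-of-the-envelope check on growth figures like these, the implied compound annual growth rates can be computed directly. Note the four-year window to 2020 is my assumption; the slide does not state the baseline year explicitly:

```python
# Back-of-the-envelope CAGR for the traffic and storage figures above.
# ASSUMES a 4-year window ("this year" to 2020) -- not stated in the deck.

def cagr(start, end, years):
    """Compound annual growth rate implied by growing start -> end over years."""
    return (end / start) ** (1 / years) - 1

traffic = cagr(1.0, 2.3, 4)    # IP traffic: 1 ZB -> 2.3 ZB
storage = cagr(7.0, 44.0, 4)   # stored data: 7 ZB -> 44 ZB

print(f"IP traffic CAGR: {traffic:.0%}")   # roughly 23% per year
print(f"Storage CAGR:    {storage:.0%}")   # roughly 58% per year
```

Storage growing more than twice as fast as traffic is exactly the pressure on storage density and efficiency the rest of the talk addresses.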
-------------------------- OLD NOTES - USE AS YOU SEE FIT
The point is that it’s not just a bigger hammer – it’s a question of scale and specialisation
Need to be able to express the requirements in a simple definition – from streaming high speed video to support for 10^6 low BW connections per sq km for sensors…
The need here is to quantify some of the claims made.
The latest hype topic whether this be driven from IoT, Connected things, connected home, health, augmented reality etc all will place a load on the network.
Some data points:
Three axis of pressure:
MTC MC (Machine Type Comms Mission Critical)
MTC NMC (machine Type Comms Non-Mission Critical)
Mobile Broadband (subscriber driven)
Use cases for each and explain diverse set of challenges.
From EE UK market studies (Mobile Broadband):
Video will be the driver of long term growth. There will be a 22x increase in data over the UK network between 2015 and 2030. The network in the UK will be required to carry 2200PBytes per month. 76% of the traffic will be video related. 4K video will be the majority of traffic in this timeframe with a data rate per stream of over 18Mbps. This places a demand on the network and forces carriers to consider caching this content on the edge of the network to meet latency demands.
Augmented reality and gaming drive about 2/3 of the remaining traffic over the cellular network and amount to approximately 600PetaBytes per month.
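To put the per-stream figure in perspective, here is a bit of my own arithmetic on the numbers above (8 bits per byte, one hour of streaming; the stream-hours estimate is an illustration derived from those figures, not a number from the EE study):

```python
# Rough arithmetic on the UK figures above: what does 18 Mbps per 4K stream
# imply per viewing hour, and how many stream-hours fit in the video share
# of 2200 PB/month? (My own back-of-the-envelope numbers.)

MBPS_PER_STREAM = 18    # 4K stream bit rate, from the study
MONTHLY_PB = 2200       # total UK mobile traffic per month, 2030 forecast
VIDEO_SHARE = 0.76      # fraction of traffic that is video

# Data per stream-hour: 18 Mbit/s * 3600 s, converted to gigabytes
gb_per_hour = MBPS_PER_STREAM * 3600 / 8 / 1000   # 8.1 GB per hour

video_pb = MONTHLY_PB * VIDEO_SHARE               # ~1672 PB of video/month
stream_hours = video_pb * 1e6 / gb_per_hour       # PB -> GB, then divide

print(f"{gb_per_hour:.1f} GB per 4K stream-hour")
print(f"~{stream_hours / 1e6:.0f} million 4K stream-hours per month")
```

At roughly 8 GB per viewing hour, it is easy to see why carriers look at caching that content at the edge rather than hauling it across the core every time.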
MTC NMC – drives bandwidth, throughput and connectivity everywhere. The slide 2 foils below is an old one but represents the amount of data that needs to be handled. This is not latency sensitive and can be handled in the core cloud (bring in NFV/Virtualisation), but we require connectivity and bandwidth/throughput.
Another development to highlight is Narrow-Band IoT (NB-IoT) in 3GPP, which is expected to support massive machine connectivity in wide area applications. NB-IoT will most likely be deployed in bands below 2GHz and will provide high capacity and deep coverage for an enormous number of connected devices.
MTC MC – Need equivalent data on Machine critical requirements that would drive reduction in latency and require data to be processed and cached at the edge of the network in the edge cloud. To support such latency-critical applications, 5G should allow for an application end-to-end latency of 1ms or less. Many services will distribute computational capacity and storage close to the air interface. This will create new capabilities for real-time communication and will allow ultra-high service reliability in a variety of scenarios, ranging from entertainment, automotive, health to industrial process control.
In addition to very low latency, 5G should also enable connectivity with ultra-high reliability and ultra-high availability. For critical services, such as control of critical infrastructure and traffic safety, connectivity with certain characteristics, such as a specific maximum latency, should not merely be ‘typically available.’ Loss of connectivity and deviation from quality of service requirements must be extremely rare. For example, some industrial applications might need to guarantee successful packet delivery within 1 ms with a probability higher than 99.9999 percent.
This then allows us to bridge to the so-whats of what we are doing…
So the devices that normally communicate back to the servers grow and grow...
<click>
to become the 10s of Billions predicted by so many
<click>
We will see the increasing demands placed on the required compute...
In fact, the future of cloud and network infrastructure… is not just about more servers and disks.
The future will be driven by the types of SERVICES that need to be delivered…. And the specific needs of that service.
It means that if you are a cloud or network provider you need datacenters and networks that are highly reusable, highly reconfigurable and highly flexible.
Where the topology, the compute, the storage can be adjusted to match the service being delivered.
You will need an Intelligent Flexible Cloud…
Until now, there’s been a clear distinction between what runs in the cloud… and what runs in the network.
THAT WILL CHANGE.
And I believe there will be new players and business models emerging….
And the future is more about pushing server functionality throughout the network with some being pushed further to the edge of the network and others staying within the core datacenter
This means that networking and cloud capability is delivered closer to where its actually needed…Another way to think about this is that we will have data centers throughout the network…throughout this Intelligent Flexible Cloud
This means that you will need storage solutions that can scale to be deployed as needed. Already we see some CDNs deploying content at the edge to minimize the bandwidth impacts…
For this to happen computing, storage, and networking will have to be:
Scalable – we need to make sure we are delivering HW that can scale from the data center to the smallest edge and still run the same SW, services, manageability…
Portable – it needs to be portable across a diverse range of workload optimized hardware. Essentially running standard server software on workload optimized hardware without any impact or even knowledge by the developer of those apps and services. The software can take advantage of the underlying acceleration as needed without impact to the delivery or rollout of the service. Really leverage truly open hardware and software interface standards.
Optimized – Leveraging workload acceleration to meet the performance demands without breaking the power or size limitations but also deliver a significant increase in compute density for general workloads to be run within those constraints.
This level of scalability is already being delivered by many of our partners in networking, servers, and storage. And they are already demonstrating huge gains in efficiency…
So let's look a little closer at storage…
As the scale of computing continues to grow with almost no limits, we are also seeing the need to deliver an increased amount of that compute at the edge...
Historically, ARM processors have been used in storage, but mainly to control the disks themselves or manage solid-state storage…
But that is NOT what we are talking about today…
Instead we are seeing companies innovate storage solutions by taking advantage of the benefits of ARM-based systems. Now they can achieve efficiencies such that they could change the dynamics around storage. One such example is an innovative storage service from a company called Cynny Space. They were able to achieve cost and power savings without impacting the reliability all while able to co-exist with established interfaces for users…
They innovated at the system level to achieve a 3x power saving and up to 7x lower cost to the end user. But it wasn't just about lower cost: they delivered a service with high reliability and key attributes like disaster recovery. They have been able to add differentiated, value-added features like end-to-end security all the way through their service, from server to client, and self-managing scalability that allows them to add nodes automatically.
From an end-user perspective, they are able to coexist with established services via S3-compatible APIs.
Another example is Huawei where they have shown significant power savings by deploying ARM based storage solutions…
This is their Ocean UDS storage system, where the ARM processor enables them to realize a 50% reduction in system power, or at most 4.2 W per TB…
… And just this week HPE announced their new product line of more affordable enterprise class storage for SMB…
Delivering the same level of service at a fraction of the cost compared to using a legacy architecture.
Now many of these examples are proprietary solutions so you might ask…
…What about CEPH?
<CLICK>
Well, I’m here to tell you that these benefits are applicable to CEPH-based systems… TODAY!
In fact…
First of all, CEPH on ARM has been in the community for some time, but as of this year, with the Jewel release, it is an officially supported part of the upstream project. I’m sure you have all read the blogs on this…
With this release, the momentum behind CEPH on ARM will only grow.
That said, products have been shipping since before this release…
For example…
There are already solutions in the market using ARM and CEPH in storage solutions.
Ambedded, for example, is delivering a CEPH-based solution that puts several microservers together to deliver a CEPH Cluster Storage Appliance… Earlier this year that product won a storage award at Interop 2016…
This solution takes a different architectural approach than legacy solutions in that it distributes the work a little differently…
They use a micro server approach where they have a cluster of 8 ARM based micro servers each connected to a drive in a rack. They include the networking, switches, and power supplies while minimizing power…
But the power savings are only 1 of the benefits…
When you start looking at the failure domains, and the resulting rebuild times, they are minimized with this architecture. Instead of a whole server with tens of drives going down, you now only have a single drive failure.
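The failure-domain argument can be sketched with some back-of-envelope arithmetic. The drive capacity and drives-per-server figures below are illustrative assumptions, not vendor data; the point is the ratio, not the absolute numbers.

```python
# Illustrative sketch: data that must be re-replicated when a failure
# domain dies, comparing a legacy server to a one-microserver-per-drive
# design. All figures are assumed for illustration.

DRIVE_TB = 8            # assumed capacity per HDD
DRIVES_PER_SERVER = 36  # assumed drive count in a legacy storage server

# Legacy architecture: one server failure takes out every drive behind it.
legacy_rebuild_tb = DRIVES_PER_SERVER * DRIVE_TB

# Microserver-per-drive architecture: a node failure loses a single drive.
microserver_rebuild_tb = 1 * DRIVE_TB

print(f"legacy server failure: re-replicate {legacy_rebuild_tb} TB")
print(f"microserver failure:   re-replicate {microserver_rebuild_tb} TB")
print(f"rebuild traffic ratio: {legacy_rebuild_tb // microserver_rebuild_tb}x")
```

With these assumptions, a single node failure triggers 36x less rebuild traffic, which is why rebuild times shrink so dramatically in this architecture.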
And from a performance standpoint, it delivers …
Now this data, like much of what I will show you was generated from our partners so I don’t have the specifics but if you have any questions I can get back to you with the answers or put you in contact with them directly…
For Ambedded, the energy savings alone are huge when examined at a rack level…
This savings is consistent with what the proprietary solutions were seeing…
Now, if you want to look at an architecture that is more traditional, with a number of drives per processor, then you only have to look at solutions from a few of our server chip partners. The next few slides are excerpts from some presentations on CEPH storage from both Applied Micro, with their X-Gene System-on-Chip, and Cavium, with their ThunderX System-on-Chip…
Let’s start by looking at some of the recent work APM has done…
APM has created a 1U storage platform called Mudan. It leverages their 8-core X-Gene 1 SoC to connect up to 12 HDDs with 2 SSDs for journaling, using a mix of integrated and external peripherals.
They have recently gone through some testing on this platform to demonstrate the balanced configuration of the solution…
This is not a reference platform but a platform that they are selling to customers for evaluation and small deployments…
This has all been documented in an application note on how to deploy a CEPH cluster using X-Gene 1.
In essence they were able to create a configuration that equally saturates the SSDs, HDDs, and network while not overly stressing the processor and I/O…
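A "balanced" configuration can be reasoned about with rough bandwidth arithmetic. The per-device throughput numbers below are assumptions for illustration, not APM's measurements; the idea is simply to check that no single resource is badly over- or under-provisioned.

```python
# Rough balance check for a 12-HDD / 2-journal-SSD node (assumed figures,
# not APM's measurements): compare aggregate disk bandwidth to the NIC.

HDD_MBPS = 150          # assumed sequential throughput per HDD
HDD_COUNT = 12
SSD_JOURNAL_MBPS = 450  # assumed sustained write rate per journal SSD
SSD_COUNT = 2
NIC_MBPS = 1250         # 10 GbE is roughly 1250 MB/s

hdd_total = HDD_MBPS * HDD_COUNT          # aggregate HDD bandwidth
ssd_total = SSD_JOURNAL_MBPS * SSD_COUNT  # aggregate journal bandwidth

# With journaled writes, every write lands on a journal SSD first, so the
# journals cap write throughput; reads stream straight from the HDDs.
write_ceiling = min(ssd_total, NIC_MBPS)
read_ceiling = min(hdd_total, NIC_MBPS)
print(f"write ceiling ~ {write_ceiling} MB/s, read ceiling ~ {read_ceiling} MB/s")
```

Under these assumptions the journals, HDDs, and NIC all sit within a factor of two of each other, which is the kind of balance the testing was meant to demonstrate.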
On the Cavium front, they recently did a head-to-head comparison of a 24-core ThunderX ST with a legacy 2630v3-based x86 server. Penguin Computing provides a storage solution based on the Cavium ThunderX that leverages the 16 integrated SATA ports, so there is no need for extra expanders, which can cause bottlenecks or failure points… not to mention add cost.
They then tested that against a legacy x86 solution and demonstrated that they were able to exceed the performance of a 2630v3 based system.
But when you look at the complexity of the HW solutions, you see areas for savings at a system level, above and beyond just the processor.
In fact
With SoC integration, the Cavium system is able to cost 40-60% less than the legacy systems. Again, very similar to the proprietary HPE system that was just announced…
And it still achieves the same or better performance…
Now, do you remember this HDD
<CLICK>
Well, it is much more than just an HDD. It is what WDLabs calls a Converged Micro Server.
They have integrated a processor SOC onto the control board of the HDD creating a micro server…
This way they run Linux, as well as the OSD, on each drive…
Similar to the Ambedded solution, the failure domain is kept at a minimum…but in this case the processor, memory, and networking are all included on the HDD itself.
Here are some of the specifications but essentially it has everything on board that a server would…
Now, this is a first-generation system, and they have done some preliminary testing in a huge 504-drive cluster…
With CEPH running on each micro server, they pulled together 4 PB into a single cluster.
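The cluster scale is easy to sanity-check. An 8 TB drive capacity is an assumption consistent with "504 drives, roughly 4 PB"; the 3x replication factor below is Ceph's common default, used here only for illustration.

```python
# Sanity check on the cluster scale. 8 TB per converged-microserver drive
# and 3x replication are assumptions for illustration.

DRIVES = 504
DRIVE_TB = 8

raw_tb = DRIVES * DRIVE_TB
raw_pb = raw_tb / 1000
print(f"{DRIVES} drives x {DRIVE_TB} TB = {raw_tb} TB = {raw_pb:.3f} PB raw")

# With 3x replication, usable capacity is roughly a third of raw:
usable_pb = raw_pb / 3
print(f"usable at 3x replication ~ {usable_pb:.2f} PB")
```

So 504 drives at 8 TB each lands right at about 4 PB of raw capacity, matching the single-cluster figure above.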
They have dual ethernet for the private CEPH network and the external network.
Much of this is documented in a CEPH blog from May but here is some of the data…
So in summary…
We encourage you to explore CEPH on ARM for yourself.