Learn how upcoming changes in the persistent memory market will affect deployments of in-memory computing and traditional applications. Using software innovations from SanDisk and the broad portfolio of flash storage hardware options, customers and developers can optimize applications for “flash extended memory”, the intersection of in-memory computing and persistent memory technologies.
This presentation provides an introduction to the current activities leading to software architectures and methodologies for new NVM technologies, including the activities of the SNIA Non-Volatile Memory (NVM) Technical Working Group. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.
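The central idea of the NVM Programming Model's persistent-memory file mode can be sketched in ordinary code: an application memory-maps a file that, on real hardware, would live on persistent memory, then updates it with plain loads and stores instead of read()/write() system calls, followed by a flush to make the stores durable. A minimal sketch in Python, using an ordinary file as a stand-in for a pmem-backed file (the path and layout are illustrative):

```python
import mmap
import struct

PATH = "/tmp/pmem_demo.bin"   # stand-in for a file on a pmem-aware (DAX) filesystem
SIZE = 4096

# Create and size the backing file.
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

# Map it and update a counter with memory stores, not write() calls.
with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    count = struct.unpack_from("<Q", mem, 0)[0]
    struct.pack_into("<Q", mem, 0, count + 1)
    mem.flush()   # loosely analogous to the model's "optimized flush" action
    mem.close()

# After the flush, the update survives the mapping being torn down.
with open(PATH, "rb") as f:
    print(struct.unpack("<Q", f.read(8))[0])   # → 1 (the file is re-created each run)
```

On real persistent memory the flush maps to cache-line flush instructions rather than page writeback, but the programming pattern the working group describes is the same.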
Yesterday's thinking may still hold that NVMe (NVM Express) is in transition to a production-ready solution. In this session, we will discuss how NVMe has evolved to be ready for production, tracing the history of NVMe and the Linux storage stack to show how NVMe has progressed to become the low-latency, highly reliable database key-value store mechanism that will drive the future of cloud expansion. Examples of protocol efficiencies and the types of storage engines being optimized for NVMe will be discussed. Please join us for an exciting session on how in-memory computing and persistence have evolved.
Hitachi Virtual Storage Platform is the only 3D scaling storage platform designed for all data types. It is the only storage architecture that flexibly adapts for performance, capacity and multivendor storage. Combined with unique Hitachi Command Suite management software, it transforms the data center.
Flash for the Real World – Separate Hype from Reality (Hitachi Vantara)
Join us for a live webcast and hear Hu Yoshida, Chief Technology Officer of Hitachi Data Systems, discuss the real world criteria for making an effective decision when evaluating flash storage. With all the noise in the market it can be difficult to separate fact from fiction in order to evaluate the performance, efficiency and economic trade-offs for flash storage.
Specifically, you’ll learn how to determine if flash storage will help you:
Actually achieve the performance you need as you compare technology options.
Realize efficiency gains that extend beyond the promise of flash performance.
Make the economic case for real-world business decisions before taking the leap.
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B... (Odinot Stanislas)
An excellent document that describes, step by step, how to install, monitor, and, above all, correctly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key lesson: how to analyze the I/O load of real applications. How many IOPS, read versus write, what block size and bandwidth, and, most importantly, what impact on SSD endurance and lifetime? A must-read, and a big thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
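One of the lab's key questions, the impact of a workload on SSD endurance, comes down to simple arithmetic: sustained write bandwidth versus the drive's rated drive-writes-per-day (DWPD) budget. A back-of-the-envelope sketch (all numbers are illustrative, not taken from the document):

```python
def endurance_years(write_mb_per_s: float, capacity_tb: float,
                    rated_dwpd: float, warranty_years: float = 5.0) -> float:
    """Years until the rated endurance budget is consumed at a constant write rate."""
    tb_written_per_day = write_mb_per_s * 86_400 / 1_000_000      # MB/s -> TB/day
    budget_tb = capacity_tb * rated_dwpd * warranty_years * 365   # total TBW allowance
    return budget_tb / tb_written_per_day / 365

# Example: 200 MB/s of sustained writes to a 1.6 TB drive rated at 3 DWPD.
years = endurance_years(write_mb_per_s=200, capacity_tb=1.6, rated_dwpd=3)
print(f"{years:.1f} years")   # → 1.4 years
```

The same arithmetic run against measured application write rates (rather than benchmark maxima) is what turns an IOPS trace into a realistic lifetime estimate.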
Capacity Efficiency: Identifying the Right Solutions for the Right Challenge (Hitachi Vantara)
Justin Augat, Hitachi Data Systems Senior Product Marketing Manager shares strategies to identify current storage costs, measure the unit cost of data storage, and set preliminary plans to reduce the total cost of storage.
Non-Volatile DIMMs, or NVDIMMs, have emerged as a go-to technology for boosting performance for next generation storage platforms. The standardization efforts around NVDIMMs have paved the way to simple, plug-n-play adoption. This session will highlight the state of NVDIMMs today and give a glimpse into the future – what customers, storage developers, and the industry would like to see to fully unlock the potential of NVDIMMs.
Benchmarking Performance: Benefits of PCIe NVMe SSDs for Client Workloads (Samsung Business USA)
The transition from the Serial ATA (SATA) interface to the Peripheral Component Interconnect Express (PCIe) interface and Non-Volatile Memory Express (NVMe) protocol is taking client storage to a new level. This white paper discusses the benefits that PCIe NVMe SSDs, such as Samsung's 950 PRO, bring to client PC users. Client PC workloads are not always well understood in the industry, since common benchmarking utilities tend to focus on measuring maximum performance rather than performance under typical PC usage. This white paper looks at actual IO traces of PC workloads to better understand how client SSDs should be benchmarked, and also tests the 950 PRO against other Samsung SSDs to show how PCIe and NVMe improve IO performance in tests that represent real-world IO activity.
Handle transaction workloads and data mart loads with better performance (Principled Technologies)
Database work is a big deal—in terms of its importance to your company, and the sheer magnitude of the work. Our tests with the Dell EMC PowerEdge R930 server and Unity 400F All-Flash storage array demonstrated that it could perform comparably to an HPE ProLiant DL380 Gen9 server and 3PAR array during OLTP workloads, with a better compression ratio (3.2-to-1 vs. 1.3-to-1). For loading large sets of data, the Dell EMC Unity finished 22 percent faster than the HPE 3PAR, which can result in less hassle for the administrator in charge of data marts. When running both OLTP and data mart workloads in tandem, the Unity array outperformed the HPE 3PAR in terms of orders processed per minute by 29 percent. For additional product information concerning the Unity 400F storage array, visit DellEMC.com/Unity.
IBM SAN Volume Controller Performance Analysis (brettallison)
Introduction
Storage Problems and Limitations with Native Storage
SVC Overview
SVC Physical and Logical Overview
Performance and Scalability Implications
Types of Problems
Performance Analysis Techniques
Performance Analysis Tools for SVC
Performance Analysis Metrics for SVC
Online Banking Example
This short paper discusses the work happening in the Fibre Channel Industry Association's T11 committee to develop a new low-latency protocol for a flash drive world, and is an excellent introduction to that effort.
Jean Thomas Acquaviva from DDN presents this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Thanks to the arrival of SSDs, the performance of storage systems can be boosted by orders of magnitude. While a considerable amount of software engineering has been invested in the past to circumvent the limitations of rotating media, there is a misbelief that a lightweight software approach may be sufficient for taking advantage of solid-state media. Taking data protection as an example, this talk will present some of the limitations of current storage software stacks. We will then discuss how this unfolds into a more radical re-design of the software architecture and ultimately makes a case for an I/O interception layer."
Learn more: http://ddn.com
Watch the video presentation: http://wp.me/p3RLHQ-f7J
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
IBM recently announced the brand-new version of one of the industry's fastest flash storage solutions, the IBM FlashSystem 900, now with triple the capacity and inline compression on top.
Why Hitachi Virtual Storage Platform Does So Well in a Mainframe Environment... (Hitachi Vantara)
Hitachi VSP is a new paradigm in enterprise array performance. In this session we will discuss how the architecture of VSP enhances its box-wide performance. The results of performance testing with synthetic host I/O generators and the PAI/O driver will also be presented.
C-Drive 2009 presentation by Scott DesBles about how Compellent's Data Instant Replay and Data Progression work together to create an efficient data storage system.
Using a Field Programmable Gate Array to Accelerate Application Performance (Odinot Stanislas)
Intel is particularly interested in FPGAs, and notably in the potential they bring when ISVs and developers have very specific needs in genomics, image processing, database processing, and even in the cloud. In this document you will have the opportunity to learn more about our strategy, and about a research program launched by Intel and Altera involving Xeon E5 processors equipped with... FPGAs inside.
Author(s):
P. K. Gupta, Director of Cloud Platform Technology, Intel Corporation
Rapid Application Design in Financial Services (Aerospike)
Applying internet NoSQL design patterns to fraud detection and risk scoring, including when to use SQL and when to use NoSQL. The state of NAND flash and NVMe is also discussed, as well as storage class memory futures with Intel's 3D XPoint technology.
This talk was presented in LA at the following meetup:
http://www.meetup.com/scalela/events/233396111/
This offering could be the solution as a starting point for flash that, together with Spectrum Scale, gives you a scalable Software Defined Storage solution to meet the requirements of unstructured storage and big data.
Director of Data Center Solutions Marketing, Brian Allison, breaks down the impact of Big Data Flash in this presentation during Flash Memory Summit 2016
Flash Memory Summit 2015, Session 301-A (Gary Lyng)
The world is ready for Big Data Flash. For these workloads, enterprises at scale and hyperscalers have already discovered the real-world benefits of implementing flash in the data center for high-performance and high-capacity workloads and, at scale, the incredible bottom-line savings, all while raising the SLAs of the applications and the business they support.
Webinar: All-Flash For Databases: 5 Reasons Why Current Systems Are Off Target (Storage Switzerland)
In this webinar join Storage Switzerland’s founder and lead analyst George Crump and Vexata’s VP of Products and Solutions Rick Walsworth as they explain how all-flash systems have fallen short and how IT can realize the full potential of flash-based storage without the compromises. Learn 5 areas where all-flash arrays miss the database performance mark.
With the football season in full swing, the baseball season heading into the playoffs, and the hockey season just starting, it is time to raid the refrigerator for snacks, head for the most comfortable chair in the family room, and settle in for a full day of viewing sports. Unfortunately, it is not always easy to turn on the myriad devices required to watch a game broadcast over cable, on that wide-screen hi-def TV, with the wrap-around sound from the latest audio system available. There is the remote for the cable system; there is a remote for the TV; there is one for the satellite dish; there is another for the sound system. There are so many remote controls on the coffee table that there is hardly room for the snacks! What you need is a universal remote: a single, simplified command center that can control all of the hi-tech equipment in the family room. Unfortunately, even that universal remote will not do the job for any device released after the remote was manufactured. What is required is a universal remote with a learning capability to take the complexity out of turning on the TV, one that can reprogram itself from the remote that comes with every new device.
This paper was written by David Reine, an IT analyst for The Clipper Group, and highlights the new features, capabilities, and benefits of IBM's SAN Volume Controller. These new capabilities were announced on October 20, 2009. If you have a heterogeneous storage architecture in your data center that is under-utilized and costing the enterprise on the bottom line, IBM SVC 5 may be the solution that you have been looking for.
Operational costs and complexity can grow exponentially as storage capacity increases. In this session learn how Dell Storage SC automates the most common storage tasks, and Enterprise Manager™ software delivers centralized management of all local and remote Storage Center™ environments.
SQLintersection Keynote: A Tale of Two Teams (Sumeet Bansal)
Shared the stage with Kevin Kline. Paul Randal and Kimberly L. Tripp organized an excellent conference. This slide deck talks about how to design large MS SQL Server architectures with 1000s of databases that are high performance and yet easy to manage. ioMemory by Fusion-io provides performance and SQL Sentry provides an amazing interface to manage and monitor 1000s of databases.
The Consequences of Infinite Storage Bandwidth: Allen Samuels, SanDisk (OpenStack)
Audience: Beginner to Intermediate
About: Overall increases in CPU and DRAM processing power are falling behind the massive acceleration in available storage and network bandwidth. Storage management services are emerging as a serious bottleneck. What does this imply for the datacenter of the future? How will it affect the physical network and storage topologies? And how will storage software need to change to meet these new realities?
Speaker Bio: Allen joined SanDisk in 2013 as an Engineering Fellow; he is responsible for directing software development for SanDisk's system-level products. He has previously served as Chief Architect at Weitek Corp. and Citrix, and founded several companies including AMKAR Consulting, Orbital Data Corporation, and Cirtas Systems. Allen has a Bachelor of Science in Electrical Engineering from Rice University.
OpenStack Australia Day - Sydney 2016
https://events.aptira.com/openstack-australia-day-sydney-2016
In this webinar join experts from Storage Switzerland and Tegile to discover if the All-Flash Data Center can become reality. We will explore the return on investment that All-Flash systems can deliver, like increased user and virtual machine densities, lower drive counts, and simpler storage architectures. We will also look at some of the methods that All-Flash systems employ to deliver an acceptable cost per GB, like thin provisioning, clones, deduplication, and compression. Finally we will take one last look at disk: does it have a role in the All-Flash Data Center, and if it does, what should that role be?
Speedment SQL Reflector is a software solution that allows applications to get automatically updated data in real time. The SQL Reflector loads data from your existing SQL database and feeds it into an in-memory data grid, e.g., GridGain. When started, the SQL Reflector will load your selected existing relational data into your map cluster. Any subsequent changes made to the relational database (regardless of how: via your application, scripts, SQL commands, or even stored procedures) are then continuously fed to your GridGain nodes. Even SQL transactions are preserved, so that your maps will always reflect a valid state of the underlying SQL database.
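The reflector pattern described above, an initial bulk load followed by a continuous feed of changes, can be sketched generically. The sketch below illustrates the pattern only; it is not Speedment's actual API, and the dict stands in for a GridGain cache:

```python
import sqlite3

class Reflector:
    """Mirror a SQL table into an in-memory map: bulk load, then apply changes."""
    def __init__(self, conn: sqlite3.Connection, table: str):
        self.conn, self.table, self.cache = conn, table, {}

    def initial_load(self):
        for pk, value in self.conn.execute(f"SELECT id, value FROM {self.table}"):
            self.cache[pk] = value

    def on_change(self, op: str, pk, value=None):
        # In a real reflector these events arrive from the database's change stream.
        if op == "DELETE":
            self.cache.pop(pk, None)
        else:                      # INSERT or UPDATE
            self.cache[pk] = value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(1, "a"), (2, "b")])

r = Reflector(conn, "items")
r.initial_load()
r.on_change("UPDATE", 2, "b2")   # changes arriving after the bulk load
r.on_change("DELETE", 1)
print(r.cache)                   # → {2: 'b2'}
```

Preserving transactional boundaries, as the real product does, would mean applying each batch of change events to the map atomically rather than one event at a time.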
Stibo Systems recently released its in-memory component for our Master Data Management (MDM) platform, giving significant speed-ups in most parts of the system. Our MDM platform provides high volume data management with many concurrent users. This in-memory component is built in-house and this talk is about how and why we did this, including:
- MVCC (Multi Version Concurrency Control) aware map, off heap and compact.
- Lock-free MVCC aware indexing.
- Wait-free MVCC aware querying that goes directly on the metal.
- Clustering and MVCC with recovery support.
- Why we built our own in-memory technology, how we integrated it into our existing 200+ man years system and the speed-ups we gained.
This will help you as a developer to navigate the landscape of in-memory products and identify the trade-offs involved, helping you choose the right path.
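The MVCC-aware map at the heart of such a design keeps multiple timestamped versions per key, so a reader at an older snapshot never blocks a writer. A minimal single-process sketch of the idea (real implementations add off-heap storage, lock-free updates, and garbage collection of old versions):

```python
import bisect

class MVCCMap:
    """Each key maps to a version-ordered history; reads see a consistent snapshot."""
    def __init__(self):
        self.versions = {}   # key -> [(version, value), ...] sorted by version
        self.clock = 0

    def put(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock

    def get(self, key, snapshot=None):
        history = self.versions.get(key, [])
        if snapshot is None:
            snapshot = self.clock
        # Latest version visible at `snapshot` (binary search over version numbers).
        i = bisect.bisect_right([v for v, _ in history], snapshot)
        return history[i - 1][1] if i else None

m = MVCCMap()
v1 = m.put("x", "old")
m.put("x", "new")
print(m.get("x"))                # → new
print(m.get("x", snapshot=v1))   # → old  (a reader holding the older snapshot)
```

The "wait-free querying" claim in the talk corresponds to the read path here touching only immutable history entries, so it never needs to coordinate with writers.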
Today, many companies are faced with a huge quantity of data and a wide variety of tools with which to process it. This potentially allows for great opportunities to satisfy customers’ needs and bring user experience to the next level. However, in order to achieve this and provide a competitive solution, sophisticated and complex data processing is needed. Such processing can rarely be done with one tool or framework — a number of tools are often involved, each having prowess in a particular field of the processing pipeline.
In this session, we will see the latest endeavors of Apache Ignite to integrate with other big data platforms and provide its in-memory computing strengths for data processing pipelines. In particular we will have a closer look at how it can be integrated and used with Apache Kafka and/or Flume, and outline several use scenarios.
As the dangers of global climate change multiply, utility companies seek methods to reduce carbon emissions, such as integrating renewable and sustainable energy sources like wind, solar, and hydroelectric power. Renewable energy not only has the power to improve climate conditions, it also encourages economic growth. By combining advances in sensor technology with machine learning algorithms and environmental data, utility companies can monitor energy sources in real time to make faster decisions and speed innovation.
In this session, Nikita Shamgunov, CTO and co-founder of MemSQL, will conduct a live demonstration based on real-time data from 2 million sensors on 197,000 wind turbines installed on wind farms around the world. This Internet of Things (IoT) simulation explores the ways utility companies can integrate new data pipelines into established infrastructure. Attendees will learn how to deploy this breakthrough technology composed of Apache Kafka, a real-time message queue; Streamliner, an integrated Apache Spark solution; MemSQL Ops, a cluster management and monitoring interface; and a set of simulated data producers written in Python. By applying machine learning to analyze millions of data points in real time, the data pipeline predicts and visualizes health of wind farms at global scale. This architecture propels innovation in the energy industry and is replicable across other IoT applications including smart cities, connected cars, and digital healthcare.
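The "simulated data producers written in Python" in such a pipeline are conceptually simple: each emits timestamped readings for a set of turbines, which a message queue such as Kafka then carries to the stream processor. A self-contained sketch (the field names and value ranges are invented for illustration):

```python
import json
import random
import time

def turbine_readings(n_turbines: int, seed: int = 42):
    """Yield one JSON-encoded reading per turbine, as a Kafka producer would send."""
    rng = random.Random(seed)
    now = time.time()
    for tid in range(n_turbines):
        yield json.dumps({
            "turbine_id": tid,
            "ts": now,
            "wind_speed_ms": round(rng.uniform(0, 25), 2),
            "power_kw": round(rng.uniform(0, 2000), 1),
            "vibration_ok": rng.random() > 0.01,   # crude health flag
        })

batch = [json.loads(msg) for msg in turbine_readings(3)]
print(len(batch), "readings;", "fields:", sorted(batch[0]))
```

In the demonstration architecture, each such message would be published to a Kafka topic and consumed by the Spark/Streamliner stage, with the health model scoring readings as they arrive.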
Apache Ignite is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time. But, did you know it provides streaming and complex event processing (CEP)? In this hands-on demonstration we will take Apache Ignite’s Streaming and CEP features for a test drive. We will start with an example streaming use case then demonstrate how to implement each component in Apache Ignite. Finally we will show how to connect a dashboard application to Apache Ignite to display the results.
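Streaming CEP of the kind described boils down to evaluating a condition continuously over a sliding window of recent events. A framework-neutral sketch of a sliding-window average with a threshold alert (this is an illustration of the concept, not Ignite's actual streaming API):

```python
from collections import deque

class SlidingWindowAlert:
    """Keep the last `size` readings; fire when the window average crosses a limit."""
    def __init__(self, size: int, limit: float):
        self.window = deque(maxlen=size)
        self.limit = limit

    def ingest(self, value: float) -> bool:
        self.window.append(value)
        avg = sum(self.window) / len(self.window)
        return avg > self.limit    # True -> raise an alert / update a dashboard

cep = SlidingWindowAlert(size=3, limit=80.0)
stream = [70, 75, 96, 99, 98]
alerts = [cep.ingest(v) for v in stream]
print(alerts)   # → [False, False, True, True, True]
```

In a distributed setting, each node evaluates its window over the partition of the stream it ingests, and the dashboard application queries the aggregated results.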
PipelineDB is an open-source relational database that runs SQL queries continuously on streaming data, incrementally storing results in tables. Our talk will include an overview of PipelineDB's architecture, the use cases for continuous SQL queries on streams, and user case studies, and will outline how PipelineDB can be used to easily build scalable and highly available streaming and real-time analytics applications using only SQL, with no external dependencies.
Simplicity, accuracy, speed are three things everyone wants from their data architecture. A content delivery network based in LA, was looking to achieve these goals and developed a framework that handled batch and stream processing with open source software. The objective was to manage the real-time aggregation of over 32 TB of daily web server log data. The problem? Everything. Listen as Dennis Duckworth explains how VoltDB reduced the number of environments, used 1/10th the CPU cycles, and achieved 100% billing accuracy on 32 TB of daily web server data.
In-memory computing is all about now. It’s the art of collecting and processing data as quickly as it is created in order to provide instant actionable insights. Databases, however, are all about the past. They are a record of what happened, not what is happening right now.
In this presentation, you will learn how to turn your enterprise databases, and the applications they support, into real-time sources of what’s currently happening throughout the business. By utilizing database change, and in-memory processing and analytics, you can tap into your enterprise activity and make decisions while the data is still relevant.
Neeve Research offers the X Platform, a revolutionary memory-oriented transaction processing platform for extreme enterprise applications. The platform uniquely integrates structured in-memory state, advanced messaging, multi-agency and decoupled enterprise data management to enable a true no-compromise extreme TP platform. The true innovation of the platform lies in its ability to provide a no-compromise blend of extreme performance, reliability, scalability and developmental agility. It is extremely fast, it is extremely easy to use, it can be used to build a wide variety of applications and the applications built using it exhibit zero data loss and scale linearly. After almost a decade of hard engineering and close-quarters field hardening with an exclusive set of Fortune 300 companies, Neeve is opening the platform for wider use. Listen as Girish Mutreja unveils the X Platform and shows how easy it is to build an application that performs at 100s of thousands of transactions per second or sub-100 microsecond latencies with zero garbage and zero data loss.
The reality is that you don’t need ‘stateless’ services to either scale out or be fault tolerant — what you really need is a scalable, fault tolerant state management solution that you can build your services around.
In this talk we will discuss how some of the popular microservices frameworks are tackling this problem, and will look at technologies available today that make it possible to build scalable, highly available systems without ‘stateless’ service layers, whether you are building microservices or good ol’ monoliths.
This talk describes the future memory and storage architecture created by the convergence of in-memory computing and emerging persistent memory technologies. The audience will learn:
- The new memory and storage architecture created by these technologies;
- The new operating system file system and memory management architectures under development by the major OS vendors;
- The new APIs for in-memory computing with persistent memory;
- Opportunities for software innovation based on this disruptive shift in cloud architecture.
Much industry focus is on All-Flash Arrays with traditional databases, but new databases using native direct-attached Flash have proven reliable, performant, and popular for operational use cases. Today, these operational databases store account information for banking and retail applications, real-time routing information for telecoms, and user profiles for advertising; they also support machine learning for applications in the financial industry, such as fraud detection. While proprietary PCIe and “wide SATA” had previously been popular, NVMe has finally come into operational use. Aerospike will discuss the benefits of NVMe for these use cases (including specific configurations and performance numbers), as well as the architectural implications of low-latency Flash and Storage Class Memory.
In-Memory Computing frameworks such as Spark are gaining tremendous popularity for Big Data processing because their in-memory primitives make it possible to eliminate the disk I/O bottleneck. Logically, the more memory they have available, the better the performance they can achieve. However, unpredictable GC activity from on-heap memory management, the high cost of serialization/deserialization (SerDe), and bursts of temporary object creation/destruction greatly impact their performance and scale-out ability. For example, in Spark, when datasets are much larger than system memory, SerDe significantly impacts almost every in-memory computing step, such as caching, checkpointing, shuffling/dispatching, data loading and storing.
With advanced server platforms rapidly gaining significantly more non-volatile memory, such as NVMe devices powered by Intel 3D XPoint technology and fast SSD array storage, how best to use these hybrid memory-like resources, from DRAM to NVMe/SSD, determines the performance and scalability of Big Data applications.
In this presentation, we will first introduce our non-volatile generic Java object programming model for In-Memory Computing. This programming model defines in-memory non-volatile objects that can be operated on directly in memory-like resources. We then discuss our structured-data in-memory persistence library, which can be used to load/store non-volatile generic Java objects from/to underlying heterogeneous memory-like resources such as DRAM, NVMe, and even SSD.
We then present a non-volatile computing case study using Spark. We will show that this model can (1) lazily load data to minimize memory footprint, (2) naturally fit both non-volatile RDDs and off-heap RDDs, (3) use non-volatile/off-heap RDDs to transform Spark datasets, and (4) avoid memory caching by using in-place non-volatile datasets.
Finally, we will show that up to a 2x performance boost can be achieved on Spark ML tests after applying this non-volatile computing approach, which removes SerDe, caches hot data, and dramatically reduces GC pause time.
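The SerDe overhead described above is easy to observe outside Spark as well. This minimal Python sketch (an illustration only, not the authors' library) times an in-place scan of a dataset against the serialize/deserialize round-trip that caching, checkpointing or shuffling would require:

```python
import pickle
import time

# Build a dataset of 200,000 small records (hypothetical workload).
data = [(i, float(i) * 0.5) for i in range(200_000)]

# In-place scan: operate on the live objects directly, no SerDe.
start = time.perf_counter()
total = sum(v for _, v in data)
in_place = time.perf_counter() - start

# SerDe round-trip, as caching/checkpointing/shuffling would require.
start = time.perf_counter()
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
serde = time.perf_counter() - start

print(f"in-place scan: {in_place:.4f}s, SerDe round-trip: {serde:.4f}s")
```

On typical hardware the round-trip costs several times the in-place scan, which is the gap an in-place non-volatile dataset avoids.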
The advent of non-volatile memory (NVM) will fundamentally change the dichotomy between memory and durable storage in database management systems (DBMSs). These new NVM devices are almost as fast as DRAM, but writes to them are potentially persistent even after power loss. Existing DBMSs are unable to take full advantage of this technology because their internal architectures are predicated on the assumption that memory is volatile. That means when NVM finally arrives, just like when you finally passed that kidney stone after three weeks, everyone will be relieved, but the transition will be painful. Many of the components of legacy DBMSs will become unnecessary and will degrade the performance of data-intensive applications.
With persistent memory solutions quickly moving from concept designs to mass-production reality, IT architects are faced with significant questions: How do I get the most value out of my system? How will the broader market adopt and implement today’s NVDIMM portfolio? What applications gain the most benefit from today’s solutions? What are the current challenges for adoption? How should I plan to ensure I keep up with industry trends?
Gordon Patrick, director of Micron’s enterprise computing memory business, will provide a view of how current products are driving new opportunities in persistent memory and provide insight on important industry trends affecting tomorrow’s persistent memory platforms.
Three key audience takeaways:
What are the clearest routes to add value to your systems through today’s persistent memory solutions?
What key design elements should be considered given the broader shift to non-volatile memory systems?
What changes are needed to truly extract the value from today’s persistent memory technology?
Online decision making over time requires interacting with an ever-changing environment, and the underlying machine learning models need to adapt to that environment. We discuss a class of algorithms and provide details of how the computation is parallelized using the Spark framework. Our implementation follows the architectural style of the Lambda Architecture: a batch layer to process bulk data and create models, a speed layer to process incremental data and create updates to models, and a serving layer to respond to decision requests in near real time. The batch layer is implemented as a Spark application, the speed layer is a Spark Streaming application, and the serving layer is implemented using the Play Framework. Spark's MLlib and low-level API are used for training and creating models in both the batch and speed layers.
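The three layers described above can be sketched structurally in plain Python. Spark, Spark Streaming and Play are stood in for by ordinary classes here; all names and the toy "model" are illustrative, not the authors' implementation:

```python
# Minimal structural sketch of the Lambda Architecture described above.

class BatchLayer:
    """Builds a model over the full historical dataset (Spark's role)."""
    def build_model(self, history):
        n = len(history)
        return {"mean": sum(history) / n if n else 0.0, "count": n}

class SpeedLayer:
    """Incrementally folds new events into the model (Spark Streaming's role)."""
    def update(self, model, event):
        count = model["count"] + 1
        mean = model["mean"] + (event - model["mean"]) / count
        return {"mean": mean, "count": count}

class ServingLayer:
    """Answers decision requests from the latest model (Play's role)."""
    def __init__(self, model):
        self.model = model
    def decide(self, x):
        return "accept" if x >= self.model["mean"] else "reject"

history = [10.0, 20.0, 30.0]
model = BatchLayer().build_model(history)   # batch: mean = 20.0
model = SpeedLayer().update(model, 40.0)    # speed: mean = 25.0
print(ServingLayer(model).decide(26.0))     # → accept
```

The key property the sketch preserves is that the speed layer only applies increments; the expensive full-history pass stays in the batch layer.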
Do you need to move enterprise database information into a Data Lake in real time, and keep it current? Or maybe you need to track real-time customer actions in order to engage them while they are still accessible. Perhaps you have been tasked with ingesting and processing large amounts of IoT data.
Let's face it: distributed computing is hard. The truth is that most systems and vendor solutions work great under normal conditions; what separates them is what happens when things go wrong. If you're building a mission-critical distributed system, you need to take the time to build infrastructure to test for failure. In this talk we'll outline how we think about testing a distributed system, and share some real-world experience in ferreting out issues before they become problems in production. We'll provide a hands-on overview of our test framework and show you how you too can be prepared.
While everyone is talking about 'stateless' services as a way to achieve scalability and high availability, the truth is that they are about as real as unicorns. Building applications and services that way simply pushes the problem further down the stack, which only makes it worse and more difficult to solve (although, on the upside, it might make it somebody else's problem). This is painfully obvious when building microservices, where each service must truly own its state.
The reality is that you don’t need ‘stateless’ services to either scale out or be fault tolerant — what you really need is a scalable, fault tolerant state management solution that you can build your services around.
In this talk we will discuss how some of the popular microservices frameworks are tackling this problem, and will look at technologies available today that make it possible to build scalable, highly available systems without ‘stateless’ service layers, whether you are building microservices or good ol’ monoliths.
Modern transactional systems need to be fast, always available, and able to scale constantly to meet the ever-changing needs of the business. It is becoming increasingly commonplace for next-generation e-commerce systems to demand double- or even single-digit-millisecond response times, for financial trading systems to require maximum latencies on the order of microseconds, and for gaming and analytics engines to consume hundreds of thousands of transactions a second. It is a common and tempting mistake to believe that we can meet the extreme needs of such systems by just replacing traditional disk-based storage systems with in-memory data grids while keeping traditional application architectures. Such an approach will take us only so far, after which the system's demands will once again overtake its capabilities. To truly meet the extreme needs of these systems and continue to scale as demand scales, we need to think differently about how such systems are architected and employ modern techniques to unlock the full potential of memory-oriented computing. This talk explains why and how.
Join Girish Mutreja, CEO of Neeve Research and author of the X Platform, as he discusses the above and provides a unique perspective on what's different about memory-oriented TP applications and how application architectures, particularly for mission-critical applications, need to adapt to the new world of memory-oriented computing. Girish will outline the key architectural elements of TP applications and explain how they need to function in the world of memory-oriented computing. He will delve into why such systems need to be architected as a marriage between messaging and data storage; why message routing and data gravity are of critical importance to these systems; how structured, in-memory state lends itself to extreme agility; how fault tolerance, load balancing, transaction processing and threading need to function in such systems; and why architectural precepts such as transaction pipelining and agent-oriented design are critical to reliability, performance and scalability. Girish will illustrate how these concepts have enabled enterprises such as MGM Resorts to transition to game-changing, memory-oriented architectures by leveraging the X Platform.
Caching is a frequently used and misused technique for speeding up performance, off-loading non-scalable or expensive infrastructure, scaling systems and coping with large processing peaks. In this talk Greg introduces you to the theory of caching and highlights key things to keep in mind when you apply caching. Then we take a comprehensive look at how the JCache standard standardises Java usage of caching.
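The talk itself covers JCache, a Java standard; as a language-neutral illustration of the read-through caching pattern it addresses, here is a minimal TTL cache sketch in Python (the class and method names are hypothetical, not JCache's API):

```python
import time

class TTLCache:
    """Tiny cache with per-entry expiry; illustrates the read-through pattern."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry_timestamp)

    def get(self, key, loader):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]              # cache hit, still fresh
        value = loader(key)              # miss or stale: hit the backend
        self.store[key] = (value, now + self.ttl)
        return value

calls = []
def expensive_lookup(key):
    calls.append(key)                    # stand-in for a slow backend call
    return key.upper()

cache = TTLCache(ttl_seconds=60)
cache.get("user:1", expensive_lookup)    # miss: loader runs
cache.get("user:1", expensive_lookup)    # hit: loader skipped
print(len(calls))  # → 1
```

The TTL bounds staleness, which is the trade-off the talk's "things to keep in mind" warns about: a longer TTL off-loads the backend more but serves older data.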
IMCSummit 2015 - Day 2 IT Business Track - Drive IMC Efficiency with Flash Extended Memory
1. Driving IMC Computing Efficiency with Flash Extended Memory
Dan Baigent, Sr. Director, Strategic Partner Ecosystems
June 30, 2015
2. Forward-Looking Statements
During our meeting today we will make forward-looking statements. Any statement that refers to expectations, projections or other characterizations of future events or circumstances is a forward-looking statement, including those relating to market growth, products and their capabilities, performance and compatibility, cost savings and other benefits to customers. Information in this presentation may also include or be based upon information from third parties, which reflects their expectations and projections as of the date of issuance. We undertake no obligation to update these forward-looking statements, which speak only as of the date hereof.
3. Widening Performance Gap
[Chart: server CPU performance vs. SSD and HDD performance, 1990-2015. Mega trend: legacy storage I/O bottleneck; SSD performance realigns storage with CPU performance while HDD performance lags.]
Source: StorageIOblog, Sep 2009; http://storageioblog.com/data-center-io-bottlenecks-performance-issues-and-impacts/
4. Non-Volatile Memory (NVM)
- Today: NAND Flash
  - Capacity: 100s of GB to 100s of TB per device
  - Trends: higher capacity, lower cost/GB, lower write cycles, SLC -> MLC -> 3BPC
  - IOPS: 100K to millions, GB/s of bandwidth
- Tomorrow: non-volatile memory technologies (Phase Change Memory, MRAM, STT-RAM, etc.)
  - Potential for orders of magnitude performance improvement
Products: Fusion ioMemory™ PCIe, SAS and SATA SSDs, InfiniFlash™ System, SanDisk ION Accelerator™
5. Why Do Applications Need Optimization for Flash?
- Many applications assume high-latency storage (some even optimize for read/write head positioning)
- Flash is different from disk: performance, endurance, addressing; and getting more different over time
- Flash-focused application acceleration:
  - Shifting bottlenecks (compute to I/O to network to application)
  - Managed writes = greater device lifetime (wear leveling, endurance)
  - Improved system efficiency (TCO and TCA)
  - Even lower power and cooling costs

Area | Hard Disk Drives | Flash Devices
Read/write performance | Largely symmetrical | Heavily asymmetrical
Sequential vs. random performance | 100x difference | <10x difference
Background ops | Rare | Regular
Wear out | Largely unlimited | Limited writes
IOPS | 100s to 1,000s | 100Ks to millions
Latency | 10s of milliseconds | 10s-100s of microseconds
Addressing | Sequential, sector | Direct, byte addressable
6. Becoming "Flash Aware": SanDisk NVMFS
Value:
- Increase life expectancy of flash devices
- Consistent low latency
- Consistent high performance
How:
- Reducing MySQL™ writes to flash
- Optimizing the I/O write path for flash
- Applications leverage an enhanced I/O interface
Available today! Non-Volatile Memory File System (NVMFS): optimized for flash and beyond.
[Diagram: a MySQL™ database sits on the Linux VFS abstraction layer. With standard Linux file systems, file metadata management, block allocation, mapping, recycling, ACID updates, logging/journaling and crash recovery all run above the kernel block layer. With the new file system, NVMFS keeps file metadata management in kernel space while block allocation, mapping, recycling, ACID updates, logging/journaling and crash recovery move into the native flash translation layer, exposed through flash primitives.]
Eliminating duplicate logic and leveraging new primitives for optimal flash performance and efficiency.
7. In-Memory Database and Persistence: Checkpoints, Logs and Data Tiering, Oh My!
[Diagram: in-memory compute ("hot data": transactions, processing, analytics) generates state changes and transactions over time. Persistent storage holds logs, memory images and checkpoints (persistent snapshots, transaction commit, possible wait conditions). Warm data flows out via data tiering; data is restored into memory on recovery; backup and archiving feed backup systems.]
8. Emergence of Flash Extended Memory
[Diagram: a spectrum of device technologies from memory and persistent memory ("flash as memory", memory addressable) through flash devices (I/O addressable) to HDDs ("flash as disk", traditional block I/O storage).]
Flash extended memory: decoupling of the physical infrastructure (memory) from the management and utilization of that infrastructure through a software interface.
9. Flash Extended Memory Example: MongoDB
Lowering TCO through transparent DRAM/flash tiering (YCSB Workload A):
[Charts: MongoDB throughput (operations/second) on 24 GB, 12 GB and 8 GB nodes, showing -26% and -33% drops; throughput deviation (operations/second) showing a 3x improvement.]
DRAM reductions of 2x and 3x yield 26% and 33% throughput degradations respectively. Throughput predictability actually improves with less DRAM. Latency shows a similar trend.
Source: Based on internal testing by SanDisk; Nov 2014
10. A New Memory-Storage Hierarchy
Storage-memory convergence: rethinking the memory hierarchy. A range of persistence emerges, and I/O becomes load/store.
[Diagram: L1/L2/L3 CPU caches and DRAM form the main memory system (nanosecond access delays, on the order of 2 cycles to wait). Persistent memories sit below, accessed like memory and managed like storage (microseconds). Flash forms the high-performance I/O system (microseconds), and hard drives sit at the bottom (milliseconds, on the order of 4 million cycles to wait).]
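The cycle counts quoted for the memory hierarchy translate into wall-clock figures as follows; the sketch assumes a 2 GHz core for illustration, since the slide gives cycles rather than a clock rate:

```python
# Convert the slide's cycle counts to time, assuming (for illustration
# only) a 2 GHz core; the slide states cycle counts, not a clock rate.
clock_hz = 2e9

def cycles_to_seconds(cycles):
    return cycles / clock_hz

print(f"{cycles_to_seconds(2) * 1e9:.1f} ns")    # DRAM-class: ~1.0 ns
print(f"{cycles_to_seconds(4e6) * 1e3:.1f} ms")  # disk-class: ~2.0 ms
```

The six-orders-of-magnitude gap is why a CPU stalls productively on a cache miss but must context-switch away from a disk I/O; persistent memory lands in between.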
11. Implications for Applications
- Extremely low latency transaction commits: 10-100 us reduced to <1 us
- Accelerate logs, indexes, etc.
- Convergence of in-memory and disk approaches: new data structures optimized directly for memory access
- Rich variety of programming models:
  - Traditional I/O will still be available
  - mmap directly to persistent memory is an alternative model: use pointer operations to manipulate persistent data
  - SNIA Non-Volatile Memory Programming TWG: standardizing mmap for persistent memory
- New challenges: CPU cache management; atomicity?
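The mmap-based model mentioned above can be sketched with Python's mmap module against an ordinary file, which stands in here for a DAX-mapped persistent-memory region. This illustrates the load/store style only; it is not the SNIA NVM Programming Model API itself, and the file name is made up:

```python
import mmap
import os
import struct

PATH = "pm_region.bin"   # stand-in for a persistent-memory-backed file
SIZE = 4096

# Create and size the backing region.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Map it and store a record with pointer-style (byte-offset) operations:
# no read()/write() syscall on the data path.
fd = os.open(PATH, os.O_RDWR)
mem = mmap.mmap(fd, SIZE)
struct.pack_into("<Q", mem, 0, 42)
mem.flush()              # analogous to flushing CPU caches for persistence
mem.close()
os.close(fd)

# Reopen and load: the store survived the unmap, as it would survive a
# power cycle on true persistent memory.
with open(PATH, "rb") as f:
    value, = struct.unpack_from("<Q", f.read(8), 0)
print(value)  # → 42
```

The flush step is where the slide's "new challenges" bite: on real hardware, ordering stores and cache flushes correctly is what makes atomicity hard.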
12. Technology Preview: Database Acceleration Through Flash Extended Memory
- A "transparent" software-defined memory layer can provide accelerated I/O over a "baseline" unaware interface
- But "flash-aware" applications can optimize that acceleration
[Diagram: Oracle MySQL™ over flash extended memory, built from NVMFS (a POSIX-compliant file system) and ACM (Auto-Commit Memory) software. The "transparent" path requires no application changes; the "flash-aware" path is application optimized. Underlying device technologies: memory, persistent memory and flash devices behind memory interfaces.]
13. Technology Preview: Example Configuration
- "Baseline" (standard flash acceleration): HP ProLiant DL380 Gen8, Fusion ioMemory™ PCIe-based flash, standard MySQL database
- "Transparent" (NVMFS and ACM): HP ProLiant DL380 Gen8, Fusion ioMemory™ and persistent memory with NVMFS, standard MySQL database
- "Flash Aware" (NVMFS and ACM): HP ProLiant DL380 Gen8, Fusion ioMemory™ and persistent memory with NVMFS, optimized MySQL database
14. Performance Results
[Charts: latency (lower is better) and throughput (higher is better) across the three configurations.]
Source: Based on internal testing by SanDisk; Nov 2014
15. Advantages & Benefits
- Improve "baseline" MySQL throughput performance by roughly 60% via "transparent" acceleration (no software mods)
- Optimize MySQL throughput performance by over 3x with "flash aware" acceleration (modified software)
- Improve "baseline" MySQL latency by roughly 2.3x (transparent) and optimized latency by more than 4x (flash aware)
- Uses a "flash-as-memory" byte-addressable architecture and interface
- Seamless deployment: add ioMemory and NVMFS/ACM software to Linux
- Increase performance and capacity in flexible configurations
Source: Based on internal testing by SanDisk; Nov 2014
16. In-Memory Computing: A New Approach to I/O
[Diagram: in-memory compute ("hot data": transactions, processing, analytics) with transactions over time. Persistence happens via CPU load/store to flash extended memory (memory, persistent memory, flash devices), alongside a POSIX file system over persistent storage; data is restored into memory on recovery, and backup and archiving feed backup systems.]