This document presents the case for RAMClouds: storage systems that keep all data in DRAM rather than on disk. The key points made in the summary are:
1) RAMClouds could provide 100-1000x lower latency and 100-1000x greater throughput than current disk-based storage systems by eliminating disk I/O from the access path.
2) RAMClouds would simplify application development, enable richer queries, and provide scalable storage for cloud computing by eliminating many of the scalability issues faced by disk-based systems.
3) Motivations for RAMClouds include improving application scalability beyond what is possible with disk-based databases, and the trend of disk capacity improvements far outstripping improvements in access latency and bandwidth.
Appears in SIGOPS Operating Systems Review, Vol. 43, No. 4, December 2009, pp. 92-105
The Case for RAMClouds:
Scalable High-Performance Storage Entirely in DRAM
John Ousterhout, Parag Agrawal, David Erickson, Christos Kozyrakis, Jacob Leverich, David Mazières,
Subhasish Mitra, Aravind Narayanan, Guru Parulkar, Mendel Rosenblum, Stephen M. Rumble, Eric Stratmann, and
Ryan Stutsman
Department of Computer Science
Stanford University
Abstract
Disk-oriented approaches to online storage are becoming increasingly problematic: they do not scale gracefully to meet the needs of large-scale Web applications, and improvements in disk capacity have far outstripped improvements in access latency and bandwidth. This paper argues for a new approach to datacenter storage called RAMCloud, where information is kept entirely in DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers. We believe that RAMClouds can provide durable and available storage with 100-1000x the throughput of disk-based systems and 100-1000x lower access latency. The combination of low latency and large scale will enable a new breed of data-intensive applications.
1 Introduction

For four decades magnetic disks have provided the primary means of storing online information in computer systems. Over that period disk technology has undergone dramatic improvements, and it has been harnessed by a variety of higher-level storage systems such as file systems and relational databases. However, the performance of disk has not improved as rapidly as its capacity, and developers are finding it increasingly difficult to scale disk-based systems to meet the needs of large-scale Web applications. Many people have proposed new approaches to disk-based storage as a solution to this problem; others have suggested replacing disks with flash memory devices. In contrast, we believe that the solution is to shift the primary locus of online data from disk to random access memory, with disk relegated to a backup/archival role.

In this paper we argue that a new class of storage called RAMCloud will provide the storage substrate for many future applications. A RAMCloud stores all of its information in the main memories of commodity servers, using hundreds or thousands of such servers to create a large-scale storage system. Because all data is in DRAM at all times, a RAMCloud can provide 100-1000x lower latency than disk-based systems and 100-1000x greater throughput. Although the individual memories are volatile, a RAMCloud can use replication and backup techniques to provide data durability and availability equivalent to disk-based systems.

We believe that RAMClouds will fundamentally change the storage landscape in three ways. First, they will simplify the development of large-scale Web applications by eliminating many of the scalability issues that sap developer productivity today. Second, their extremely low latency will enable richer query models that enable a new class of data-intensive applications. Third, RAMClouds will provide the scalable storage substrate needed for “cloud computing” and other datacenter applications [3]: a RAMCloud can support a single large application or numerous smaller applications, and allow small applications to grow rapidly into large ones without additional complexity for the developer.

The rest of this paper is divided into five parts. Section 2 describes the RAMCloud concept and the scale and performance that we believe are achievable in a RAMCloud system. Section 3 offers two motivations for RAMClouds, one from the standpoint of applications and one from the standpoint of the underlying storage technologies; it also discusses the benefits of extremely low latency and compares RAMClouds with two alternative approaches (caching and flash memory). In order to build a practical RAMCloud numerous research issues will need to be addressed; Section 4 introduces a few of these issues. Section 5 discusses the disadvantages of RAMClouds, such as high cost/bit and high energy usage. Finally, Section 6 summarizes related work.

2 RAMCloud Overview

RAMClouds are most likely to be used in datacenters containing large numbers of servers divided roughly into two categories: application servers, which implement application logic such as generating Web pages or enforcing business rules, and storage servers, which provide longer-term shared storage for the application servers. Traditionally the storage has consisted of files or relational databases, but in recent years a variety of new storage mechanisms have been developed to improve scalability, such as Bigtable [4] and memcached [16]. Each datacenter typically supports numerous applications, ranging from small ones using only a fraction of an application server to large-scale applications with thousands of dedicated application and storage servers.

RAMCloud represents a new way of organizing storage servers in such a system. There are two key attributes that differentiate a RAMCloud from other storage systems. First, all information is kept in DRAM at all times. A RAMCloud is not a cache like memcached [16] and data is not stored on an I/O device, as with flash memory: DRAM is the permanent home for data. Disk is used only for backup. Second, a RAMCloud must scale automatically to support thousands of storage servers; applications see a single storage system, independent of the actual number of storage servers.

Information stored in a RAMCloud must be as durable as if it were stored on disk. For example, failure of a single storage server must not result in data loss or more than a few seconds of unavailability. Section 4.2 discusses techniques for achieving this level of durability and availability.

Keeping all data in DRAM will allow RAMClouds to achieve performance levels 100-1000x better than current disk-based storage systems:
• It should be possible to achieve access latencies of 5-10 microseconds, measured end-to-end for a process running in an application server to read a few hundred bytes of data from a single record in a single storage server in the same datacenter. In comparison, disk-based systems offer access times over the network ranging from 5-10 ms (if disk I/O is required) down to several hundred microseconds (for data cached in memory).
• A single multi-core storage server should be able to service at least 1,000,000 small requests per second. In comparison, a disk-based system running on a comparable machine with a few disks can service 1000-10000 requests per second, depending on cache hit rates.

These goals represent what we believe is possible, but achieving them is by no means guaranteed; Section 4 discusses some of the obstacles that will have to be overcome.

Table 1 summarizes a RAMCloud configuration that is feasible today. This configuration assumes 64GB of DRAM on each server, which is the largest amount that is cost-effective today (memory prices rise dramatically for larger memory sizes). With 1000 servers the configuration offers 64TB of storage at $60/GB. With additional servers it should be possible to build RAMClouds with capacities as large as 500TB today. Within 5-10 years, assuming continued improvements in DRAM technology, it will be possible to build RAMClouds with capacities of 1-10 Petabytes at a cost less than $5/GB.

# servers:          1000
Capacity/server:    64 GB
Total capacity:     64 TB
Total server cost:  $4M
Cost/GB:            $60
Total throughput:   10^9 ops/sec

Table 1. An example RAMCloud configuration using currently available commodity server technology. Total server cost is based on list prices and does not include networking infrastructure or racks.
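As a quick sanity check, the aggregate figures in Table 1 follow directly from the per-server assumptions. The short sketch below uses only the numbers stated above (1000 servers, 64 GB each, roughly $4M total, and the 1,000,000 requests/second goal from the bulleted list) to reproduce them:

```python
# Back-of-envelope check of the Table 1 configuration.
servers = 1000
gb_per_server = 64
total_server_cost = 4_000_000        # dollars, list price (Table 1)
requests_per_server = 1_000_000      # small requests/sec (Section 2 goal)

total_capacity_tb = servers * gb_per_server / 1000            # 64 TB
cost_per_gb = total_server_cost / (servers * gb_per_server)   # ~$62.5/GB, quoted as ~$60
total_throughput = servers * requests_per_server              # 10^9 ops/sec

print(f"Total capacity:   {total_capacity_tb:.0f} TB")
print(f"Cost/GB:          ${cost_per_gb:.2f}")
print(f"Total throughput: {total_throughput:.0e} ops/sec")
```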
3 Motivation

3.1 Application scalability

The motivation for RAMClouds comes from two sources: applications and technology. From the standpoint of applications, relational databases have been the storage system of choice for several decades but they do not scale to the level required by today's large-scale applications. Virtually every popular Web application has found that a single relational database cannot meet its throughput requirements. As the site grows it must undergo a series of massive revisions, each one introducing ad hoc techniques to scale its storage system, such as partitioning data among multiple databases. These techniques work for a while, but scalability issues return when the site reaches a new level of scale or a new feature is introduced, requiring yet more special-purpose techniques.

For example, as of August 2009 the storage system for Facebook includes 4000 MySQL servers. Distribution of data across the instances and consistency between the instances are handled explicitly by Facebook application code [13]. Even so, the database servers are incapable of meeting Facebook's throughput requirements by themselves, so Facebook also employs 2000 memcached servers, which cache recently used query results in key-value stores kept in main memory. Unfortunately, consistency between the memcached and MySQL servers must be managed by application software (e.g., cached values must be flushed explicitly when the database is updated), which adds to application complexity.

Numerous new storage systems have appeared in recent years to address the scalability problems with relational databases; examples include Bigtable [4], Dynamo [8], and PNUTS [6] (see [23] for additional examples). Unfortunately, each of these is specialized in some way, giving up some of the benefits of a traditional database in return for higher performance in certain domains. Furthermore, most of the alternatives are still limited in some way by disk performance. One of the motivations for RAMCloud is to provide a general-purpose storage system that scales far beyond existing systems, so application developers do not have to resort to ad hoc techniques.

3.2 Technology trends

The second motivation for RAMClouds comes from disk technology evolution (see Table 2). Disk capacity has increased more than 10000-fold over the last 25 years and seems likely to continue increasing in the future. Unfortunately, though, the access rate to information on disk has improved much more slowly: the transfer rate for large blocks has improved “only” 50-fold, and seek time and rotational latency have only improved by a factor of two.

As a result of this uneven evolution the role of disks must inevitably become more archival: it simply isn't possible to access information on disk very frequently. Table 2 illustrates this in two ways. First, it computes the capacity/bandwidth ratio: if the disk is filled with blocks of a particular size, how often can each block be accessed, assuming random accesses? In the mid-1980s 1-Kbyte records could be accessed on average about every 10 minutes; with today's disks each record can only be accessed about 6 times per year on average, and this rate will drop with each future improvement in disk capacity. Larger blocks allow more frequent accesses, but even in the best case data on disk can only be accessed 1/300th as frequently as 25 years ago.

If data is accessed in large blocks the situation is not quite as dire: today's disks can support about one access every 1.5 hours to each block. However, the definition of “large block” is changing. In the mid-1980s blocks of 400 Kbytes achieved 90% of the maximum transfer rate; today, blocks must contain at least 10 Mbytes to achieve 90% of the maximum transfer rate, and large blocks will need to be even larger in the future. Video data meets this threshold, as do data for bulk-processing applications such as MapReduce [7], but most forms of on-line data do not; even photos and songs are too small to use the full bandwidth of today's disks.

A second way of understanding disk trends is to apply Jim Gray's Rule [12]: if portions of the disk are left unused then the remaining space can be accessed more frequently. As the desired access rate to each record increases, the disk utilization must decrease, which increases the cost per usable bit; eventually a crossover point is reached where the cost/bit of disk is no better than DRAM. The crossover point has increased by a factor of 360x over the last 25 years, meaning that data on disk must be used less and less frequently.

                                     Mid-1980s   2009       Improvement
Disk capacity                        30 MB       500 GB     16667x
Maximum transfer rate                2 MB/s      100 MB/s   50x
Latency (seek + rotate)              20 ms       10 ms      2x
Capacity/bandwidth (large blocks)    15 s        5000 s     333x worse
Capacity/bandwidth (1KB blocks)      600 s       58 days    8333x worse
Jim Gray's Rule [11] (1KB blocks)    5 min.      30 hours   360x worse

Table 2. A comparison of disk technology today versus 25 years ago, based on typical personal computer disks. Capacity/bandwidth measures how long it takes to read the entire disk, assuming accesses in random order to blocks of a particular size; this metric also indicates how frequently each block can be accessed on average (assuming the disk is full). For large blocks (>10 Mbytes today) capacity/bandwidth is limited by the disk transfer rate; for small blocks it is limited by latency. The last line assumes that disk utilization is reduced to allow more frequent accesses to a smaller number of records; it uses the approach of Gray and Putzolu [11] to calculate the access rate at which memory becomes cheaper than disk. For example, with today's technologies, if a 1KB record is accessed at least once every 30 hours, it is not only faster to store it in memory than on disk, but also cheaper (to enable this access rate only 2% of the disk space can be utilized).
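The capacity/bandwidth entries in Table 2 can be recomputed from the first three rows alone. The sketch below assumes each random access costs one seek-plus-rotate latency plus the block transfer time, and that the whole disk is read once, block by block, in random order:

```python
def access_interval_seconds(capacity_bytes, transfer_rate, latency_s, block_bytes):
    """Time to touch every block of a full disk once, in random order:
    (number of blocks) * (seek+rotate latency + block transfer time).
    This is also the average interval between accesses to any one block."""
    blocks = capacity_bytes / block_bytes
    per_access = latency_s + block_bytes / transfer_rate
    return blocks * per_access

KB, MB, GB = 1000, 1000**2, 1000**3

# Mid-1980s disk: 30 MB, 2 MB/s, 20 ms.  2009 disk: 500 GB, 100 MB/s, 10 ms.
old = access_interval_seconds(30 * MB, 2 * MB, 20e-3, 1 * KB)
new = access_interval_seconds(500 * GB, 100 * MB, 10e-3, 1 * KB)
big = access_interval_seconds(500 * GB, 100 * MB, 10e-3, 10 * MB)

print(f"1KB blocks, mid-1980s: {old / 60:.0f} minutes per access")   # ~10 min
print(f"1KB blocks, 2009:      {new / 86400:.0f} days per access")   # ~58 days
print(f"10MB blocks, 2009:     {big / 3600:.1f} hours per access")   # ~1.5 hours
```

These reproduce the roughly 600 s, 58 days, and 1.5 hours figures cited above.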
[Figure 1: two panels. (a) Traditional Application: UI, App. Logic, and Data Structures on one machine. (b) Application Servers (UI, Bus. Logic) separated from Storage Servers.]

Figure 1. In a traditional application (a) the application's data structures reside in memory on the same machine containing the application logic and user interface code. In a scalable Web application (b) the data is stored on separate servers from the application's user interface and business logic. Because of the high latency of data access, it is expensive for a Web application to perform iterative explorations of the data; typically the application server must fetch all of the data needed for a Web page in a small number of bulk requests. Lower latency will enable more complex data explorations.
3.3 Caching

Historically, caching has been viewed as the answer to problems with disk latency: if most accesses are made to a small subset of the disk blocks, high performance can be achieved by keeping the most frequently accessed blocks in DRAM. In the ideal case a system with caching can offer DRAM-like performance with disk-like cost.

However, the trends in Table 2 are diluting the benefits of caching by requiring a larger and larger fraction of data to be kept in DRAM. Furthermore, some new Web applications such as Facebook appear to have little or no locality, due to complex linkages between data (e.g., friendships in Facebook). As of August 2009 about 25% of all the online data for Facebook is kept in main memory on memcached servers at any given point in time, providing a hit rate of 96.5%. When additional caches on the database servers are counted, the total amount of memory used by the storage system equals approximately 75% of the total size of the data (excluding images). Thus a RAMCloud would only increase memory usage for Facebook by about one third.

In addition, the 1000x gap in access time between DRAM and disk means that a cache must have exceptionally high hit rates to avoid significant performance penalties: even a 1% miss ratio for a DRAM cache costs a factor of 10x in performance. A caching approach makes the deceptive suggestion that “a few cache misses are OK” and lures programmers into configurations where system performance is poor.

For these reasons we believe that caches in the future will have to be so large that they will provide little cost benefit while still introducing significant performance risk. RAMClouds may cost slightly more than caching systems, but they will provide guaranteed performance independent of access patterns or locality.
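The 10x penalty quoted above is a one-line expected-value calculation. Here is a sketch with illustrative timings (roughly 10 µs for a request served from DRAM and 10 ms for one requiring disk I/O, consistent with the 1000x gap discussed above):

```python
# Effective access time for a DRAM cache in front of disk.
hit_time_us = 10.0        # assumed: request served from DRAM cache
miss_time_us = 10_000.0   # assumed: request requiring disk I/O (~1000x slower)

for miss_ratio in (0.0, 0.001, 0.01, 0.05):
    effective = (1 - miss_ratio) * hit_time_us + miss_ratio * miss_time_us
    slowdown = effective / hit_time_us
    print(f"miss ratio {miss_ratio:>5.1%}: {effective:8.1f} us  ({slowdown:.1f}x slower)")
```

With these assumptions a 1% miss ratio raises the average from 10 µs to about 110 µs, the factor-of-10 penalty described above.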
3.4 Does latency matter?

The most unique aspect of RAMClouds relative to other storage systems is the potential for extraordinarily low latency. On the one hand, 5-10 µs latency may seem unnecessary, and it is hard to point to Web applications that require this level of latency today. However, we believe that low latency is one of the most important features of RAMClouds.

There may not be many Web applications that require 5-10 µs latency today, but this is because there are no storage systems that can support such a requirement. In contrast, traditional applications expect and get latency significantly less than 5-10 µs (see Figure 1). In these applications the data for the application resides in main-memory data structures co-located with the application's front end. Such data can be explored with very low latency by the application. In Web applications the data is stored on separate storage servers. Because of high data latency, Web applications typically cannot afford to make complex unpredictable explorations of their data, and this constrains the functionality they can provide. If Web applications are to replace traditional applications, as has been widely predicted, then they will need access to data with latency much closer to what traditional applications enjoy.
Today's Web applications are already using data in complex ways and struggling with latency issues. For example, when Facebook receives an HTTP request for a Web page, the application server makes an average of 130 internal requests (inside the Facebook site) as part of generating the HTML for the page [13]. These requests must be issued sequentially, since later requests depend on results produced by earlier requests. The cumulative latency of the internal requests is one of the limiting factors in overall response time to users, so considerable developer effort is expended to minimize the number and size of requests. Amazon has reported similar results, with 100-200 requests to generate HTML for each page [8].
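Because the internal requests are issued sequentially, their round-trip times add directly to the response time. A sketch of this budget, using the Facebook figure of 130 requests and the round-trip numbers given later in Section 4.1 (300-500 µs for today's Ethernet/IP/TCP, 5-10 µs for the RAMCloud target):

```python
# Sequential internal requests: cumulative latency = n * round_trip.
n_requests = 130  # average internal requests per Facebook page [13]

for label, rtt_us in [("today's Ethernet/IP/TCP RPC", 400),   # mid of 300-500 us
                      ("RAMCloud target RPC", 7.5)]:          # mid of 5-10 us
    total_ms = n_requests * rtt_us / 1000
    print(f"{label:28s}: {total_ms:6.2f} ms of pure RPC latency per page")
```

At today's round-trip times the RPCs alone cost tens of milliseconds per page; at RAMCloud latencies the same request pattern costs about a millisecond.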
High latency data access has also caused problems for database applications. Queries that do not match the layout of data on disk, and hence require numerous seeks, can be intolerably slow. Iterative searches such as tree walks, where the nth data item cannot be identified until the (n-1)th item has been examined, result in too many high-latency queries to be practical. In recent years numerous specialized database architectures have appeared, such as column stores, stream processing engines, and array stores. Each of these reorganizes data on disk in order to reduce latency for a particular style of query; each produces order-of-magnitude speedups for a particular application area, but none is general-purpose. Stonebraker et al. have predicted the end of the “one size fits all” database because there is no disk layout that is efficient for all applications [23]. However, with sufficiently low latency none of these specialized approaches are needed. RAMClouds offer the hope of a new “one size fits all” where performance is independent of data placement and a rich variety of queries becomes efficient.

By providing random access with very low latency to very large datasets, RAMClouds will not only simplify the development of existing applications, but they will also enable new applications that access large amounts of data more intensively than has ever been possible. One example is massive multiplayer games involving interactions between thousands of players. More generally, RAMClouds will be well-suited to algorithms that must traverse large irregular graph structures, where the access patterns are so unpredictable that the full latency of the storage system must be incurred for each item retrieved. RAMClouds will allow a single sequential agent to operate much more quickly on such a structure, and they will also permit thousands of agents to operate concurrently on shared graphs.

Low latency also offers other benefits, such as enabling a higher level of consistency; we will describe these benefits when discussing RAMCloud implementation issues in Section 4.

3.5 Use flash memory instead of DRAM?

Flash memory offers another alternative with lower latency than disk. Today it is most commonly used in devices such as cameras and media players, but it is receiving increasing attention for general-purpose online storage [1]. A RAMCloud could be constructed with flash memory as the primary storage technology instead of DRAM (“FlashCloud”), and this would be cheaper and consume less energy than a DRAM-based approach. Nonetheless, we believe a DRAM-based implementation is more attractive because it offers higher performance.

The primary advantage of DRAM over flash memory is latency. Flash devices have read latencies as low as 20-50 µs, but they are typically packaged as I/O devices, which adds additional latency for device drivers and interrupt handlers. Write latencies for flash devices are 200 µs or more. Overall, a RAMCloud is likely to have latency 5-10x lower than a FlashCloud, which will increase the benefits discussed in Section 3.4. Also, from a research standpoint RAMCloud encourages a more aggressive attack on latency in the rest of the system; with FlashCloud the device latency will dominate, removing the incentive to improve latency of other system components.

RAMClouds also provide higher throughput than FlashClouds, which can make them attractive even in situations where latency isn't important. Figure 2, reproduced from Andersen et al. [1], generalizes Jim Gray's Rule. It indicates which of disk, flash, and DRAM is cheapest for a given system, given its requirements in terms of dataset size and operations/sec. For high query rates and smaller dataset sizes DRAM is cheapest; for low query rates and large datasets disk is cheapest; and flash is cheapest in the middle ground.

[Figure 2: log-log plot of Dataset Size (TB, 0.1-10000) versus Query Rate (Millions/sec, 0.1-1000), divided into regions labeled Disk, Flash, and DRAM.]

Figure 2. This figure (reproduced from Andersen et al. [1]) indicates which storage technology has the lowest total cost of ownership over 3 years, including server cost and energy usage, given the required dataset size and query rate for an application (assumes random access workloads).
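The shape of Figure 2 follows from a simple cost model: a deployment must buy enough hardware for its dataset (cost/bit) and enough for its query rate (cost/query/sec), and total cost is driven by whichever requirement dominates. The sketch below illustrates the decision rule; the unit costs are invented placeholders, not the actual figures from Andersen et al. [1]:

```python
# Pick the cheapest technology for a workload, in the spirit of Figure 2.
# Unit costs are illustrative assumptions, NOT the values from [1]:
TECH = {
    #         $/GB    $/(query/sec)
    "disk":  (0.10,   1.0),      # cheap capacity, expensive random IOPS
    "flash": (2.00,   0.01),
    "dram":  (60.00,  0.0001),   # expensive capacity, nearly free ops
}

def cheapest(dataset_gb, queries_per_sec):
    costs = {}
    for name, (per_gb, per_qps) in TECH.items():
        # Must provision for capacity AND throughput; the larger term binds.
        costs[name] = max(dataset_gb * per_gb, queries_per_sec * per_qps)
    return min(costs, key=costs.get), costs

tech, costs = cheapest(dataset_gb=1000, queries_per_sec=5_000_000)
print(tech, {k: f"${v:,.0f}" for k, v in costs.items()})  # flash wins here
```

In this model, lowering a technology's cost/bit shrinks its capacity term and expands the region where it wins, which is exactly the upward drift of the boundaries discussed next.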
Interestingly, the dividing lines in Figure 2 are all shifting upwards with time, which will increase the coverage of RAMClouds in the future. To understand this effect, consider the dividing line between flash and DRAM. At this boundary the cost of flash is limited by cost/query/sec, while the cost of DRAM is limited by cost/bit. Thus the boundary moves upwards as the cost/bit of DRAM improves; it moves to the right as the cost/query/sec of flash improves. For all three storage technologies cost/bit is improving much more rapidly than cost/query/sec, so all of the boundaries are moving upwards.

In the future it is possible that the latency of flash memory may improve to match DRAM; in addition, there are several other emerging memory technologies such as phase-change memory that may ultimately prove to be better than DRAM for storage. Even if this happens, many of the implementation techniques developed for RAMClouds will carry over to other random access memory technologies. If RAMClouds can be implemented successfully they will take memory volatility out of the equation, allowing storage technology to be selected based on performance, cost, and energy consumption.

3.6 RAMCloud applicability today

RAMClouds are already practical for a variety of applications today:
• As of August 2009 all of the non-image data for Facebook occupies about 260TB [13], which is probably near the upper limit of practicality for RAMCloud today.
• Table 3 estimates that customer data for a large-scale online retailer or airline would fit comfortably in a small fraction of the RAMCloud configuration in Table 1. These applications are unlikely to be the first to use RAMClouds, but they illustrate that many applications that have historically been considered “large” can be accommodated easily by a RAMCloud.
• The cost of DRAM today is roughly the same as the cost of disk 10 years ago ($10-30/Gbyte), so any data that could be stored cost-effectively on disk then can be stored cost-effectively in DRAM today.

It is probably not yet practical to use RAMClouds for large-scale storage of media such as videos, photos, and songs (and these objects make better use of disks because of their large size). However, RAMClouds are practical for almost all other online data today, and future improvements in DRAM technology will probably make RAMClouds attractive for media within a few years.

Online Retailer                           Airline Reservations
Revenues/year: $16B                       Flights/day: 4000
Average order size: $40                   Passengers/flight: 150
Orders/year: 400M                         Passenger-flights/year: 220M
Data/order: 1000 - 10000 bytes            Data/passenger-flight: 1000 - 10000 bytes
Order data/year: 400GB - 4.0TB            Passenger data/year: 220GB - 2.2TB
RAMCloud cost: $24K-240K                  RAMCloud cost: $13K-130K

Table 3. Estimates of the total storage capacity needed for one year's customer data of a hypothetical online retailer and a hypothetical airline. In each case the total requirements are no more than a few terabytes, which would fit in a modest-sized RAMCloud. The last line estimates the purchase cost for RAMCloud servers, using the data from Table 1.
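The Table 3 estimates are pure multiplication: records per year times bytes per record, priced at the $60/GB figure from Table 1. A sketch that reproduces them:

```python
# Reproduce the Table 3 estimates from the stated inputs.
COST_PER_GB = 60  # dollars, from Table 1

def estimate(records_per_year, bytes_per_record_range):
    for bytes_per_record in bytes_per_record_range:
        gb = records_per_year * bytes_per_record / 1e9
        print(f"  {bytes_per_record:>6} B/record -> {gb:8,.0f} GB, "
              f"RAMCloud cost ~${gb * COST_PER_GB:,.0f}")

print("Online retailer (400M orders/year):")
estimate(400_000_000, (1000, 10000))     # 400 GB - 4 TB, ~$24K - $240K
print("Airline (220M passenger-flights/year):")
estimate(220_000_000, (1000, 10000))     # 220 GB - 2.2 TB, ~$13K - $132K
```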
4 Research Issues

There are numerous challenging issues that must be addressed before a practical RAMCloud system can be constructed. This section describes several of those issues, along with some possible solutions. Some of these issues, such as data model and concurrency control, are common to all distributed storage systems, so there may be solutions devised by other projects that will also work for RAMClouds; others, such as low latency RPC and data durability, may require a unique approach for RAMClouds.

4.1 Low latency RPC

Although latencies less than 10 µs have been achieved in specialized networks such as Infiniband and Myrinet, most existing datacenters use networking infrastructure based on Ethernet/IP/TCP, with typical round-trip times for remote procedure calls of 300-500 µs. We believe it is possible to reduce this to 5-10 µs, but doing so will require innovations at several levels. The greatest obstacle today is latency in the network switches. A large datacenter is likely to have a three-tier switching structure, so each packet traverses five switches as it moves up through the switching hierarchy and back down to its destination. Typical switches today introduce delays of 10's of microseconds each, for a total switching latency of 100 µs or more in each direction. Newer 10GB switches such as the Arista 7100S [13] use cut-through routing and claim latencies of less than 1 µs, but this will still produce a total network delay of nearly 5 µs in each direction. Additional improvements in switching latency will be required to achieve RPC times less than 10 µs.

In order to achieve 5-10 µs latency and 1 million requests/second/server, RAMCloud servers will need to process each request in 1 µs or less; this will require significant reductions in software overheads. General-purpose operating systems introduce high overheads for interrupt processing, network protocol stacks, and context switching. In RAMCloud servers it may make sense to use a special-purpose software architecture where one core is dedicated to polling the network interface(s) in order to eliminate interrupts and context switches. The network core can perform basic packet processing and then hand off requests to other cores that carry out the requests. RAMCloud servers will make good use of multi-core architectures, since different requests can be handled in parallel on different cores.
On application server machines virtualization is becom- high level of durability and availability (at least as good
ing more and more popular as a mechanism for manag- as today's disk-based systems). At a minimum this
ing applications. As a result, incoming packets must means that a crash of a single server must not cause
first pass through a virtual machine monitor and then a data to be lost or impact system availability for more
guest operating system before reaching the application. than a few seconds. RAMClouds must also offer rea-
Achieving low latency will require these overheads to sonable protection against power outages, and they may
be reduced, perhaps by passing packets directly from need to include cross-datacenter replication. We as-
the virtual machine monitor to the application. Another sume that any guarantees about data durability must
approach is to use network interfaces that can be take effect at the time a storage server responds to a
mapped directly into an application's address space so write request.
that applications can send and receive packets without One approach to durability is to replicate each object in
the involvement of either the operating system or the the memories of several server machines and update all
virtual machine monitor. of the replicas before responding to write requests. At
In order to achieve the most efficient network commu- least 3 copies of each object will probably be needed to
nication, it may be necessary to modify the TCP proto- assure an adequate level of durability and availability,
col or use a different reliable-delivery protocol based so this approach would triple the cost of memory,
on UDP: which is the dominant factor in overall system cost. It
• TCP retransmission timeouts are typically hundreds would also increase energy usage significantly. The
of milliseconds or more; this is necessary to manage cost of main-memory replication can be reduced by
congestion in long-haul networks, but it interferes using coding techniques such as parity striping [17],
with efficient operation inside a datacenter: even a but this makes crash recovery considerably more ex-
low rate of packet loss will significantly degrade av- pensive. In any case, replication in memory would de-
erage latency. pend on a reliable power source, so power outages do
• The flow-oriented nature of TCP offers little advan- not take down all of the replicas simultaneously. For
tage for RAMClouds, since individual requests will these reasons, we believe RAMClouds will probably
be relatively small. Flows introduce additional state use a technology other than DRAM for backup copies
and complexity in packet processing without provid- of data.
ing much benefit in an environment like RAMCloud.
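To make the proposed server structure concrete, the sketch referenced above is given below: one dedicated "network core" polls the network interface in a tight loop instead of taking interrupts and hands requests off to worker cores. Python threads stand in for cores here purely for illustration (a real server would use a systems language with direct NIC access), and FakeNic, try_receive, and handle are hypothetical stand-ins, not part of any real API.

    import queue, threading

    requests = queue.Queue()

    class FakeNic:
        """Stand-in for a polled network interface (hypothetical API)."""
        def __init__(self, packets):
            self.packets = list(packets)
        def try_receive(self):
            # Non-blocking: return a packet if one is ready, else None.
            return self.packets.pop(0) if self.packets else None

    def handle(request):
        """Placeholder for the actual storage operation (e.g., a read)."""
        return ("ok", request)

    def network_core(nic, expected):
        """Dedicated polling core: no interrupts, no context switches;
        do basic packet processing, then hand off to worker cores."""
        seen = 0
        while seen < expected:
            packet = nic.try_receive()
            if packet is None:
                continue             # busy-wait rather than block
            requests.put(packet)     # hand off for parallel processing
            seen += 1

    def worker(results):
        """Worker core: carries out requests in parallel with its peers."""
        while True:
            request = requests.get()
            if request is None:      # shutdown signal
                return
            results.append(handle(request))

    results = []
    workers = [threading.Thread(target=worker, args=(results,))
               for _ in range(4)]
    for w in workers:
        w.start()
    network_core(FakeNic(range(8)), expected=8)
    for _ in workers:
        requests.put(None)
    for w in workers:
        w.join()
    print("handled", len(results), "requests")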
4.2 Durability and availability

For RAMClouds to be widely used they must offer a high level of durability and availability (at least as good as today's disk-based systems). At a minimum this means that a crash of a single server must not cause data to be lost or impact system availability for more than a few seconds. RAMClouds must also offer reasonable protection against power outages, and they may need to include cross-datacenter replication. We assume that any guarantees about data durability must take effect at the time a storage server responds to a write request.

One approach to durability is to replicate each object in the memories of several server machines and update all of the replicas before responding to write requests. At least 3 copies of each object will probably be needed to assure an adequate level of durability and availability, so this approach would triple the cost of memory, which is the dominant factor in overall system cost. It would also increase energy usage significantly. The cost of main-memory replication can be reduced by using coding techniques such as parity striping [17], but this makes crash recovery considerably more expensive. In any case, replication in memory would depend on a reliable power source, so power outages do not take down all of the replicas simultaneously. For these reasons, we believe RAMClouds will probably use a technology other than DRAM for backup copies of data.

An alternative approach to durability is to keep a single copy of each object in DRAM, but have each server back up new data on its local disk as part of every write operation. However, this approach is undesirable because it ties write latency to disk latency and still leaves data unavailable if the server crashes. At the very least, data must be replicated on multiple server machines.

[Figure 3 (schematic): three storage servers, each holding objects in DRAM with a disk below it; a write updates one server's DRAM and sends log entries to the DRAM of two other servers, from which they later flow to those servers' disks.]
Figure 3. In the buffered logging approach to durability, a single copy of each object is stored in DRAM. When an object is modified, the changes are logged to 2 or more other servers. Initially the log entries are stored in the DRAM of the backup servers; they are transferred to disk asynchronously in batches in order to utilize the full disk bandwidth.

Figure 3 outlines an alternative called buffered logging that uses both disk and memory for backup. In this approach a single copy of each object is stored in the DRAM of a primary server and copies are kept on the disks of two or more backup servers; each server acts as both primary and backup. However, the disk copies are not updated synchronously during write operations. Instead, the primary server updates its DRAM and forwards log entries to the backup servers, where they are stored temporarily in DRAM. The write operation returns as soon as the log entries have been written to DRAM in the backups. Each backup server collects log entries into batches that can be written efficiently to a log on disk. Once log entries have been written to disk they can be removed from the backup's DRAM. (A simplified sketch of this write path appears later in this subsection.)

If a server crashes in the buffered logging approach, the data in its DRAM can be reconstructed on any of the backup servers by processing the disk logs. However, two optimizations will be required in order to recover quickly enough to avoid service disruption. First, the disk logs must be truncated to reduce the amount of data that must be read during recovery. One approach is to create occasional checkpoints, after which the log can be cleared. Another approach is to use log cleaning techniques like those developed for log-structured file systems [21], where portions of the log with stale information are occasionally rewritten to squeeze out the stale data and reduce the log size.

Even with log truncation, the total amount of data to read will be at least as large as the size of the crashed server's DRAM; at current disk transfer speeds it will take ten minutes or more to read this much data from a single disk. The second optimization is to divide the DRAM of each primary server into hundreds of shards, with each shard assigned to different backup servers. After a crash, one backup server for each shard reads its (smaller) log in parallel; each backup server acts as a temporary primary for its shard until a full copy of the lost server's DRAM can be reconstructed elsewhere in the RAMCloud. With this approach it should be possible to resume operation (using the backup servers) within a few seconds of a crash.

Buffered logging allows both reads and writes to proceed at DRAM speeds while still providing durability and availability. However, the total system throughput for writes will be limited by the disk bandwidth available for writing log entries. Assuming typical updates of a few hundred bytes, a log-cleaning approach for truncation, and a single disk per server with 100 MB/sec throughput, we estimate that each server can support around 50,000 updates/second (vs. 1M reads/sec). It may make sense to use flash memory instead of disk for backup copies, if flash memory can be packaged in a way that supports higher write throughput than disk. For the highest update rates the only solution is to keep replicas in DRAM.

Power failures can be handled in one of three ways in a RAMCloud:
• Servers continue operating long enough after receiving warning of an impending power outage to flush data to disk. With buffered logging a few seconds is sufficient to flush buffered log entries. With replication in DRAM at least 10 minutes will be required.
• Applications tolerate the loss of unwritten log data in an unexpected power failure (this option exists only for the buffered logging approach).
• All data must be committed to stable storage as part of each write operation; this will degrade write latency and system throughput.
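The simplified sketch promised above models the buffered-logging write path in a single Python process: RPCs and disks are replaced by in-memory objects, and every name (Primary, Backup, append, flush) is invented for illustration rather than taken from a real implementation.

    class Backup:
        """Backup server: log entries sit in DRAM until they are flushed
        to disk asynchronously, in batches, to use full disk bandwidth."""
        def __init__(self):
            self.dram_buffer = []   # log entries not yet on disk
            self.disk_log = []      # durable log (a list stands in for disk)

        def append(self, entry):
            self.dram_buffer.append(entry)   # the write path stops here

        def flush(self):
            """Runs in the background in a real system."""
            self.disk_log.extend(self.dram_buffer)
            self.dram_buffer.clear()

    class Primary:
        """Primary server: the single full copy of each object lives in
        its DRAM; durability comes from log entries on the backups."""
        def __init__(self, backups):
            self.dram = {}
            self.backups = backups   # 2 or more

        def write(self, key, value):
            self.dram[key] = value
            for backup in self.backups:
                backup.append((key, value))   # into backup DRAM only
            return "ok"   # returns without any synchronous disk I/O

        def read(self, key):
            return self.dram[key]   # reads always served from DRAM

    backups = [Backup(), Backup()]
    primary = Primary(backups)
    primary.write("x", 1)
    primary.write("x", 2)   # the stale entry is what log cleaning removes
    for backup in backups:
        backup.flush()      # asynchronous batch write in practice
    print(primary.read("x"), backups[0].disk_log)

Recovery would rebuild a crashed primary's DRAM by replaying disk_log (plus any entries still buffered in backup DRAM), which is why log truncation and sharding dominate recovery time.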
As previously mentioned, some applications may require data to be replicated in multiple datacenters. A multi-datacenter approach offers two advantages: it can continue operation even if a single datacenter becomes completely unavailable, and it can also reduce the latency of communication with geographically distributed users as well as long-haul network bandwidth. A RAMCloud can still provide high performance for reads in this environment, but write performance will be significantly degraded. The bandwidth between datacenters will limit aggregate write throughput, and applications will have to accept higher latency for write operations (10's or 100's of milliseconds) in order to update remote datacenters synchronously. An alternative is to allow write operations to return before remote datacenters have been updated (meaning some "committed" data could be lost if the originating datacenter crashes). In this case, if the links between datacenters have enough bandwidth, then it may be possible to achieve the same write performance in a multi-datacenter RAMCloud as a single-datacenter implementation.

If a RAMCloud system is to ensure data integrity, it must be able to detect all significant forms of corruption; the use of DRAM for storage complicates this in several ways. First, bit error rates for DRAM are relatively high [22], so ECC memory will be necessary to avoid undetected errors. Even so, there are several other ways that DRAM can become corrupted, such as errors in peripheral logic, software bugs that make stray writes to memory, and software or hardware errors related to DMA devices. Without special attention, such corruptions will not be detected. Thus RAMClouds will probably need to augment stored objects with checksums that can be verified when objects are read. It may also make sense to scan stored data periodically to detect latent errors.

4.3 Data model

Another issue for RAMClouds is the data model for the system. There are many possible choices, and existing systems have explored almost every imaginable combination of features. The data model includes three overall aspects. First, it defines the nature of the basic objects stored in the system. In some systems the basic objects are just variable-length "blobs" of bytes, where the storage system knows nothing about their internal structure. At the other extreme, the basic objects can have specific structure enforced by the system; for example, a relational database requires each record to consist of a fixed number of fields with prespecified types such as integer, string, or date. In either case we expect most objects to be small (perhaps a few hundred bytes), containing information equivalent to a record in a database or an object in a programming language such as C++ or Java.

The second aspect of a data model is its support for aggregation: how are basic objects organized into higher-level structures? In the simplest case, such as key-value stores, the storage system provides no aggregation: the only operations are reads and writes of basic objects. Most systems provide some sort of aggregation; for example, in a relational database the rows are organized into tables, and tables can be augmented with indices to accelerate queries.

The third aspect of a data model is its mechanisms for naming and indexing: when retrieving or modifying basic objects, how are the objects named? In the simplest case, such as a key-value store, each object has a single unique identifier that must be used in all references to the object. At the other extreme, relational databases use content-based addressing: rows are selected by specifying the values of one or more fields in the row. Any field or combination of fields can serve as a "name" for the row when looking it up, and a powerful query language can be used to combine the rows from different tables in a single query.

The highly-structured relational data model has been the dominant one for storage systems over the last four decades, and it provides extraordinary power and convenience. However, it does not appear practical to scale the relational structures over large numbers of servers; we know of no RDBMS that includes thousands of server machines (or even hundreds) in a single unified system. In order to achieve scalability, some of the relational features, such as large-scale joins and complex transactions, will probably have to be sacrificed (most large-scale Web applications have already made these sacrifices).

At the opposite extreme, unstructured models such as key-value stores appear highly scalable, but they may not be rich enough to support a variety of applications conveniently. For example, most applications need tables or some other form of aggregation, plus indexing; these operations may be difficult or expensive for an application to implement without support from the underlying storage system.

One approach that seems promising to us is an intermediate one where servers do not impose structure on data but do support aggregation and indexing. In this approach the basic objects are blobs of bytes. The storage servers never interpret the contents of the objects, so different applications can structure their objects differently; some applications may use relational-style structures, while others may use more loosely-structured data such as JSON or XML. The servers do provide a table mechanism for grouping objects; among other things, tables make it easier to share a RAMCloud among multiple applications (access controls can be specified for tables, and administrative operations such as deleting all of the data for a defunct application become simpler).
-9-
10. Appears in SIGOPS Operating Systems Review, Vol. 43, No. 4, December 2009, pp. 92-105
In this approach objects can be named either using unique identifiers or using indices. The servers allow any number of indices to be associated with each table. Clients must manage indices with explicit requests ("associate key k with object o in index i", or "delete the association between key k and object o in index i"), but can piggyback index updates with object updates; the servers will guarantee that either all or none of the updates complete. This allows well-formed applications to maintain consistency between tables and their indices without requiring servers to understand the structure of objects. The complexity of managing indices can be hidden in client-side libraries (see Section 4.7).

4.4 Distribution and scaling

A RAMCloud must provide the appearance of a single unified storage system even though it is actually implemented with thousands of server machines. The distribution of the system should not be reflected in the APIs it provides to application developers, and it should be possible to reconfigure the system (e.g. by adding or removing servers) without impacting running applications or involving developers. This represents one of the most important advantages of RAMCloud over most existing storage systems, where distribution and scaling must be managed explicitly by developers.

The primary issue in the distribution and scaling of the system is data placement. When a new data object is created it must be assigned to one or more servers for storage. Later on, the object may need to be moved between servers in order to optimize the performance of the system or to accommodate the entry and exit of servers. Given the scale and complexity of a RAMCloud system, the decisions about data placement and movement will need to be made automatically, without human involvement.

For example, assume that a RAMCloud provides a table mechanism for grouping related objects. For small tables it is most efficient to store the entire table on a single server. This allows multiple objects to be retrieved from the table with a single request to a single server, whereas distributing the table may require multiple requests to different servers. The single-server approach also minimizes the amount of configuration information that must be managed for the table. If the system supports indices, it will be most efficient to keep each index on the same server as the table for the index, so that an indexed lookup can be performed with a single RPC.

However, some tables will eventually become too large or too hot for a single server, in which case they will need to be partitioned among multiple servers. It may make sense to scatter the table's data as randomly as possible in order to balance load among the servers, or it may be preferable to partition the table based on an index, so that adjacent objects in the index are co-located on the same server and can be manipulated with a single RPC. It may also make sense to partition indices along with their tables, so that index entries are located on the same servers as the corresponding table objects.

Because of the high throughput of its servers, a RAMCloud is unlikely to need data replication for performance reasons; replication will be needed only for data durability and availability. If a server becomes overloaded then some of its tables can be moved to other servers; if a single table overloads the server then the table can be partitioned to spread the load. Partitioning is already needed to handle large tables, so it makes sense to use this mechanism for load balancing as well. Replication for performance will only be needed if the access rate to a single object exceeds the 1M ops/second throughput of a single server; this situation may be so rare that it can be handled with special techniques in the affected applications.

Given the size of a RAMCloud system, its configuration is likely to change frequently, both because of changes in the underlying hardware (server crashes, new server additions, etc.) and because of changes in the applications. Thus it will be important for RAMClouds to move data between servers efficiently and transparently. It must be possible to carry out data migration while applications are running, without impacting their performance or availability. The logging approach used for data durability makes data movement relatively straightforward: data can be copied at leisure from source to destination while continuing to serve requests at the source; then the update log can be replayed on the destination to incorporate any changes made during the copy; finally, a short synchronized operation between source and destination can be used to apply any final updates and transfer ownership.
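The three-phase migration just outlined can be made concrete with a toy model in which servers are Python objects and the durability log is a list; all names here (Server, migrate, owns_table) are illustrative, not part of any real system.

    class Server:
        """Toy storage server: one table's data plus its update log."""
        def __init__(self, owns_table=False):
            self.data = {}
            self.log = []                  # (key, value) updates, in order
            self.owns_table = owns_table
            self.writes_blocked = False

        def write(self, key, value):
            assert self.owns_table and not self.writes_blocked
            self.data[key] = value
            self.log.append((key, value))

    def migrate(source, destination):
        # Phase 1: copy at leisure while the source keeps serving.
        copied = len(source.log)
        destination.data.update(source.data)

        # Phase 2: replay updates that arrived during the copy.
        replayed = len(source.log)
        for key, value in source.log[copied:replayed]:
            destination.data[key] = value

        # Phase 3: short synchronized step: block writes, apply the
        # final updates, and transfer ownership.
        source.writes_blocked = True
        for key, value in source.log[replayed:]:
            destination.data[key] = value
        source.owns_table, destination.owns_table = False, True
        source.writes_blocked = False

    src, dst = Server(owns_table=True), Server()
    src.write("a", 1)
    migrate(src, dst)
    dst.write("a", 2)                      # the new owner now serves writes
    print(dst.data, dst.owns_table)

In a real system phases 1 and 2 would run concurrently with application writes; only phase 3 briefly blocks them.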
4.5 Concurrency, transactions, and consistency

One of the most challenging issues for RAMCloud is how to handle interactions between requests that are being serviced simultaneously. For example, can an application server group a series of updates into a transactional unit so that other application servers either see all of the updates or none of them? The ACID guarantees provided by relational database systems handle concurrency in a clean and powerful way that simplifies the construction of applications [19]. However, the ACID properties scale poorly: no one has yet constructed a system with ACID properties at the scale we envision for RAMClouds. Many Web applications do not need full ACID behavior and do not wish to pay for it.

Because of this, recent storage systems have explored a variety of alternative approaches that give up some of the ACID properties in order to improve scalability. For example, Bigtable [4] does not support transactions involving more than a single row, and Dynamo [8] does not guarantee immediate and consistent updates of replicas. These restrictions improve some system properties, such as scalability or availability, but they also complicate life for developers and cause confusing behavior for users. For example, in a system that doesn't guarantee consistency of replicas, if a user reads a value that he or she wrote recently ("read after write"), it is possible for the read to return the old value.

One potential benefit of RAMCloud's extremely low latency is that it may enable a higher level of consistency than other systems of comparable scale. ACID properties become expensive to implement when there are many transactions executing concurrently, since this increases the likelihood of conflicts. Concurrency is determined by (a) the amount of time each transaction spans in the system and (b) the overall rate at which transactions enter the system. Longer execution times for transactions increase the degree of concurrency, as does greater system throughput. By reducing latency, RAMClouds reduce the length of time each transaction stays in the system, which reduces aborted transactions (for optimistic concurrency control) and lock wait times (for pessimistic concurrency control). As a result, a low-latency system can scale to much higher overall throughput before the cost of ACID becomes prohibitive.

However, the potential for stronger consistency will be harder to realize for applications that require replication across datacenters. In these applications inter-datacenter wire delays will force high write latencies, which will increase the degree of concurrency and make stronger consistency more expensive.

In addition, as discussed in Section 4.4, the high throughput of a RAMCloud makes data replication unnecessary. This eliminates the consistency issues and overhead associated with replication, and it creates a simpler programming model for application developers.

4.6 Multi-tenancy

A single large RAMCloud system must support shared access by hundreds of applications of varying sizes. A small application should get all of the performance and durability advantages provided by RAMCloud at a cost proportional to its memory usage. For example, it should be possible to provide a few gigabytes of storage to an application for only a few hundred dollars per year, since this represents just a small fraction of one server. If an application suddenly becomes popular it should be able to scale quickly within its RAMCloud to achieve high performance levels on short notice. Because of its scale, a RAMCloud can efficiently amortize the cost of spare capacity over many applications.

In order to support multiple tenants, RAMClouds will need to provide access control and security mechanisms reliable enough to allow mutually antagonistic applications to cohabitate in the same RAMCloud cluster. RAMClouds may also need to provide mechanisms for performance isolation, so that one application with a very high workload does not degrade performance of other applications. Performance isolation is a difficult challenge, and could require partitioning of both data and network bandwidth in the system.

4.7 Server-client functional distribution

A RAMCloud system is distributed in two different ways: application code runs on separate machines from storage servers, and the stored data is spread across thousands of servers. We have already discussed the distribution between storage servers; we now turn to the distribution between applications and storage servers, which raises interesting questions about where various functions in the system should be implemented.

Applications will access RAMCloud storage using a library package resident on the application servers. The library might be as simple as a thin wrapper layer on top of the RAMCloud RPC calls, but it will probably implement more significant functionality. For example, the library is likely to include a cache of configuration information so that RPCs can be directed immediately to the correct server for the desired data. There are several other functions whose implementation probably belongs in the client library as well. For example, suppose the storage system implements a query that retrieves several objects from a table and returns them in sorted order. The objects themselves may reside on multiple servers, so it is most efficient for the client library to collect partial results from each of the servers and combine and sort them locally (otherwise all of the entries would have to be collected on a single server machine for sorting and then retransmitted to the client, resulting in extra network traffic).
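The sorted-query example above maps naturally onto a small client-library routine: one scan RPC per server, then a local k-way merge of the already-sorted partial results. The sketch below is illustrative only; Python's heapq.merge does the merging, and FakeServer.scan_sorted stands in for a real server RPC.

    import heapq

    def sorted_query(servers, sort_key):
        """Client-library side of the example: gather sorted partial
        results from each server, then merge them locally at the client."""
        partials = [s.scan_sorted(sort_key) for s in servers]  # one RPC each
        return list(heapq.merge(*partials, key=sort_key))

    class FakeServer:
        """Stand-in for one storage server holding part of a table."""
        def __init__(self, rows):
            self.rows = rows
        def scan_sorted(self, sort_key):
            return sorted(self.rows, key=sort_key)   # server-local sort

    servers = [FakeServer([{"id": 3}, {"id": 1}]),
               FakeServer([{"id": 2}])]
    print(sorted_query(servers, sort_key=lambda row: row["id"]))

No single server ever gathers the full result set, which avoids the extra network hop described above.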
If the object model provided by storage servers consists of blobs of bytes, the client library is a natural place to implement a particular data model within objects, as well as higher-level queries and support for indexing as described in Section 4.3. Different client libraries can implement different data models and query languages.

It is also possible that some system management functions could be implemented in the client library, such as orchestrating recovery after server crashes. However, this would complicate the system's recovery model, since an ill-timed client crash might impact the system's recovery after a server crash. It probably makes sense for the storage servers to be self-contained, so that they manage themselves and can meet their service level agreements without depending on any particular behavior of client machines.

Certain functions, such as access control, cannot be implemented safely on the clients. There is no way for a server to tell whether a particular request came from the client library or from the application itself; since the application isn't trusted to make access control decisions, the client library cannot be trusted either.

It may also make sense to migrate functionality from the clients to the servers. For example, in bulk-processing applications such as MapReduce [7] it may be attractive for an application to upload code to the servers so that data can be processed immediately on the servers without shipping it over the network. This approach makes sense for simple operations that reduce network traffic, such as counting records with particular attributes. Of course, uploading code to the servers will introduce additional security issues; the servers must ensure that malicious code cannot damage the storage system. If a RAMCloud can return data over the network with very high efficiency, then there will be less need for code uploading.

4.8 Self-management

For RAMClouds to be successful they must manage themselves automatically. With thousands of servers, each using hundreds of its peers for a sharded backup scheme, the overall system complexity will be far too great for a human operator to understand and manage. In addition, a single RAMCloud may support hundreds of applications with competing needs; it will need to monitor its own performance and adjust its configuration automatically as application behavior changes. Fortunately, building a new storage system from scratch provides an opportunity to get this essential functionality done right, thereby reducing operational cost compared to current storage systems.

Instrumentation will be a key element in RAMCloud systems. The servers will need to gather significant amounts of data to drive system management decisions, and data from different servers will need to be integrated to form a cohesive overall picture of system behavior. Furthermore, data gathering must not significantly degrade the latency or throughput of the system.

5 RAMCloud Disadvantages

The most obvious drawbacks of RAMClouds are high cost per bit and high energy usage per bit. For both of these metrics RAMCloud storage will be 50-100x worse than a pure disk-based system and 5-10x worse than a storage system based on flash memory (see [1] for sample configurations and metrics). A RAMCloud system will also require more floor space in a datacenter than a system based on disk or flash memory. Thus, if an application needs to store a large amount of data inexpensively and has a relatively low access rate, RAMCloud is not the best solution.

However, RAMClouds become much more attractive for applications with high throughput requirements. When measured in terms of cost per operation or energy per operation, RAMClouds are 100-1000x more efficient than disk-based systems and 5-10x more efficient than systems based on flash memory. Thus for systems with high throughput requirements a RAMCloud can provide not just high performance but also energy efficiency. It may also be possible to reduce RAMCloud energy usage by taking advantage of the low-power mode offered by DRAM chips, particularly during periods of low activity.

In addition to these disadvantages, some of RAMCloud's advantages will be lost for applications that require data replication across datacenters. In such environments the latency of updates will be dominated by speed-of-light delays between datacenters, so RAMClouds will have little or no latency advantage. In addition, cross-datacenter replication makes it harder for RAMClouds to achieve stronger consistency as described in Section 4.5. However, RAMClouds can still offer exceptionally low latency for reads even with cross-datacenter replication.
6 Related Work

The role of DRAM in storage systems has been steadily increasing over a period of several decades, and many of the RAMCloud ideas have been explored in other systems. For example, in the mid-1980s there were numerous research experiments with databases stored entirely in main memory [9, 11]; however, main-memory databases were not widely adopted, perhaps because of their limited capacities. The latency benefits of optimizing a storage system around DRAM have also been demonstrated in projects such as Rio Vista [15].

In recent years there has been a surge in the use of DRAM, driven by the performance requirements of large-scale Web applications. For example, both Google and Yahoo! store their search indices entirely in DRAM. Memcached [16] provides a general-purpose key-value store entirely in DRAM, and it is widely used to offload back-end database systems (however, memcached makes no durability guarantees, so it must be used as a cache). The Bigtable storage system allows entire column families to be loaded into memory, where they can be read without any disk accesses [4]. Bigtable has also explored many of the issues in federating large numbers of storage servers.

The limitations of disk storage have been noted by many. For example, Jim Gray has predicted the migration of data from disk to random access memory [20], and the H-store project has reintroduced the notion of main-memory databases as a solution to database performance problems [14]. Several projects, such as [5] and [10], have explored mechanisms for very low latency RPC communication.

7 Conclusion

In the future, both technology trends and application requirements will dictate that a larger and larger fraction of online data be kept in DRAM. In this paper we have argued that the best long-term solution for many applications may be a radical approach where all data is kept in DRAM all the time. The two most important aspects of RAMClouds are (a) their extremely low latency and (b) their ability to aggregate the resources of large numbers of commodity servers. Together, these allow RAMClouds to scale to meet the needs of the largest Web applications. In addition, low latency also enables richer query models, which will simplify application development and enable new kinds of applications. Finally, the federated approach makes RAMClouds an attractive substrate for cloud computing environments that require a flexible and scalable storage system.

Numerous challenging issues must be addressed before a practical RAMCloud can be constructed. At Stanford University we are initiating a new research project to build a RAMCloud system. Over the next few years we hope to answer some of the research questions about how to build efficient and reliable RAMClouds, as well as to observe the impact of RAMClouds on application development.

8 Acknowledgments

The following people made helpful comments on drafts of this paper, which improved both the presentation in the paper and our thinking about RAMClouds: David Andersen, Michael Armbrust, Jeff Dean, Robert Johnson, Jim Larus, David Patterson, Jeff Rothschild, and Vijay Vasudevan.

9 References

[1] Andersen, D., Franklin, J., Kaminsky, M., et al., "FAWN: A Fast Array of Wimpy Nodes," Proc. 22nd Symposium on Operating Systems Principles, 2009, to appear.
[2] Arista Networks 7100 Series Switches, http://www.aristanetworks.com/en/7100Series.
[3] Armbrust, M., Fox, A., Griffith, R., et al., Above the Clouds: A Berkeley View of Cloud Computing, Technical Report UCB/EECS-2009-28, Electrical Engineering and Computer Sciences, U.C. Berkeley, February 10, 2009, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf.
[4] Chang, F., Dean, J., Ghemawat, S., et al., "Bigtable: A Distributed Storage System for Structured Data," ACM Transactions on Computer Systems, Vol. 26, No. 2, 2008, pp. 4:1-4:26.
[5] Chun, B., Mainwaring, A., and Culler, D., "Virtual Network Transport Protocols for Myrinet," IEEE Micro, Vol. 18, No. 1 (January 1998), pp. 53-63.
[6] Cooper, B., Ramakrishnan, R., Srivastava, U., et al., "PNUTS: Yahoo!'s Hosted Data Serving Platform," VLDB '08, Proc. VLDB Endowment, Vol. 1, No. 2 (2008), pp. 1277-1288.
[7] Dean, J., and Ghemawat, S., "MapReduce: Simplified Data Processing on Large Clusters," Proc. 6th USENIX Symposium on Operating Systems Design and Implementation, 2004, pp. 137-150.
[8] DeCandia, G., Hastorun, D., Jampani, M., et al., "Dynamo: Amazon's Highly Available Key-value Store," Proc. 21st ACM Symposium on Operating Systems Principles, October 2007, pp. 205-220.
[9] DeWitt, D., Katz, R., Olken, F., et al., "Implementation Techniques for Main Memory Database Systems," Proc. SIGMOD 1984, pp. 1-8.
[10] Dittia, Z., Integrated Hardware/Software Design of a High-Performance Network Interface, Ph.D. dissertation, Washington University in St. Louis, 2001.
[11] Garcia-Molina, H., and Salem, K., "Main Memory Database Systems: An Overview," IEEE Transactions on Knowledge and Data Engineering, Vol. 4, No. 6, December 1992, pp. 509-516.
[12] Gray, J., and Putzolu, G.F., "The Five-minute Rule for Trading Memory for Disc Accesses, and the 10 Byte Rule for Trading Memory for CPU Time," Proc. SIGMOD 1987, June 1987, pp. 395-398.
[13] Johnson, R., and Rothschild, J., personal communications, March 24, 2009 and August 20, 2009.
[14] Kallman, R., Kimura, H., Natkins, J., et al., "H-store: a High-Performance, Distributed Main Memory Transaction Processing System," VLDB '08, Proc. VLDB Endowment, Vol. 1, No. 2 (2008), pp. 1496-1499.
[15] Lowell, D., and Chen, P., "Free Transactions With Rio Vista," 16th ACM Symposium on Operating Systems Principles, October 1997, pp. 92-101.
[16] memcached: a distributed memory object caching system, http://www.danga.com/memcached/.
[17] Patterson, D., Gibson, G., and Katz, R., "A Case for Redundant Arrays of Inexpensive Disks," Proc. SIGMOD 1988, June 1988, pp. 109-116.
[18] Ramakrishnan, R., and Gehrke, J., Database Management Systems, Third Edition, McGraw-Hill, 2003.
[19] Reuter, A., and Haerder, T., "Principles of Transaction-Oriented Database Recovery," ACM Computing Surveys, Vol. 15, No. 4, December 1983, pp. 287-317.
[20] Robbins, S., RAM is the new disk…, http://www.infoq.com/news/2008/06/ram-is-disk.
[21] Rosenblum, M., and Ousterhout, J., "The Design and Implementation of a Log-Structured File System," ACM Transactions on Computer Systems, Vol. 10, No. 1, February 1992, pp. 26-52.
[22] Schroeder, B., Pinheiro, E., and Weber, W-D., "DRAM Errors in the Wild: A Large-Scale Field Study," SIGMETRICS/Performance '09, pp. 193-204.
[23] Stonebraker, M., Madden, S., Abadi, D., et al., "The End of an Architectural Era (It's Time for a Complete Rewrite)," Proc. VLDB '07, pp. 1150-1160.