This document describes PNUTS, a massively parallel and geographically distributed database system for Yahoo!'s web applications. PNUTS provides scalable data storage and low-latency access for large numbers of concurrent requests and updates across continents. It uses a relaxed, per-record consistency model that trades full serializability for performance while still giving applications useful guarantees. The first version of PNUTS is currently serving Yahoo!'s production systems.
The document discusses supporting human users in scientific workflows. It outlines the motivation for integrating humans due to limitations in fully automating complex decision-making tasks. It proposes using standards like WS-HumanTask and BPEL4People to integrate users. The document then presents an approach called SW4H (Scientific Workflows for Humans) that uses technologies like Project Bangkok, Apache Camel and ActiveMQ to manage human communication and tasks in scientific workflows.
The document summarizes research findings from a study on identity and access management (IAM) solutions. The research found that organizations are shifting from a point solution approach to IAM, where individual products are integrated by the enterprise, to a platform approach where integration is handled by the vendor. The platform approach was found to provide benefits such as faster provisioning, lower risk from orphaned accounts, quicker integration of new applications and users, fewer security incidents, and lower total costs.
Communication Enabled Business Process (CEBP) – Vincent Perrin
This document discusses enabling communications and collaboration capabilities to integrate with business processes. It proposes leveraging unified communications and collaboration technologies along with service-oriented architecture strategies. This would allow embedding tools like instant messaging, telephony, and video conferencing into business processes to increase contextual collaboration, reduce decision latency, and improve customer service. It presents a strategy for developing common communication services using standards like XML that can be integrated into line of business applications from various vendors to optimize business processes and responsiveness.
OM Plus Report Manager software from Plus Technologies can help you reduce the productivity losses and monetary costs associated with paper-based reports. This presentation examines the features and advantages of OM Plus RM, including case studies.
This document discusses PHP adoption in the enterprise. It notes that over 1/3 of websites run on PHP due to its low learning curve, large developer community, and rapid innovation. Gartner predicts PHP adoption by corporate IT will double by 2013. Zend has enabled enterprise PHP adoption through products like Zend Server, Zend Studio, and the Zend Framework, and partnerships with IBM, Microsoft, and others. The document outlines how large companies like a major bank and GE have adopted PHP applications at scale. It positions Zend as the PHP company providing full solutions for building operationally mature PHP applications.
MB Context is a tool that allows users to add comments and insights directly to IBM Cognos reports. Comments are added live so all users have immediate access to up-to-date information. Comments are stored in the database along with the report. This provides a way for users to interact with reports beyond just viewing data and gives them a way to share knowledge with other users of the reports.
This document discusses managing mobile devices in the WAN. It outlines key CIO challenges like increased application complexity and rising network costs. It also covers mobile broadband traffic trends like the explosion of internet and video traffic driven by smartphones and tablets. The document proposes that Exinda addresses these issues through its unified performance management platform, which provides visibility, control, and optimization through features like application optimization, caching, and the new Edge Cache product.
Advancing the Traditional Enterprise: An EA Story – InnoTech
Electronic Arts (EA) is a major video game company founded in 1982 with over 9,000 employees worldwide and 500 in Austin. It has exclusive partnerships with prominent sports leagues and brands. While EA was originally known for boxed retail games, it is transitioning to digital services and analyzing player data to improve experiences. There are four technology options considered to handle the growing amounts of player data, each with their own challenges regarding scalability, workload management, and coupling between structured and unstructured data tools.
The article discusses the Guardian's Datastore project, which makes data of public interest freely available online for reuse. Some key points:
- The Datastore contains datasets on topics like MPs' expenses, carbon emissions, and public opinion polls. This data was previously hard to access but the web now allows easy access to billions of statistics.
- Making this data open and machine-readable supports the Guardian's tradition of fact-checking and transparency. It also encourages others to analyze and build upon the data in new ways.
- An early example involved crowdsourcing the review of 500,000 pages of MPs' expenses, revealing new insights. Other Guardian datasets like music recommendations and university rankings are now available for others to reuse.
The document provides the schedule for a conference on September 14-16. It lists the morning and afternoon sessions for each day, including the topics to be discussed and presenters. Some of the topics included are semantics, parsing, sentiment analysis, word sense disambiguation, and named entity recognition. The keynote speakers are Ricardo Baeza-Yates, Kevin Bretonnel Cohen, Mirella Lapata, Shalom Lappin, and Walter Daelemans. There will also be poster presentations on September 14 and 15.
This document is the introduction to The Little Book of Semaphores by Allen B. Downey. It provides an overview of the book, which teaches synchronization patterns and solutions to classic concurrency problems using semaphores. The introduction discusses the book's goal of helping students deeply understand synchronization through practice solving puzzles over time. It describes how the book was tested on students and improved based on their feedback. The second edition features Python-like pseudocode and new problems contributed by students.
Lamport's algorithm for mutual exclusion uses timestamping of messages and request queues to provide a total ordering of requests for shared resources. Each process maintains a request queue and follows 5 rules: (1) the initiator sends a timestamped request to all processes, (2) other processes add the request to their queue and reply, (3) the initiator is allowed access when its request is at the front of all queues and it has received all replies, (4) the initiator releases the resource by sending a release message, and (5) processes remove the request from their queue upon receiving the release. The algorithm guarantees only one process can access the resource at a time and is deadlock-free due to the timestamp ordering.
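The queue discipline behind those five rules is compact enough to sketch in code. Below is a minimal, illustrative Python model of one process's bookkeeping; the message transport (the `send` callback) and the peer set are assumed to be supplied by the surrounding system, and ties between equal timestamps are broken by process ID to keep the ordering total.

```python
import heapq

class LamportMutex:
    """Bookkeeping for one process in Lamport's mutual exclusion.

    Transport is abstracted: `send(dest, message)` and the set of peer
    IDs are assumed to come from the surrounding system (hypothetical).
    """

    def __init__(self, pid, peers, send):
        self.pid = pid
        self.peers = peers          # IDs of all other processes
        self.send = send            # send(dest, message) callback
        self.clock = 0              # Lamport logical clock
        self.queue = []             # heap of (timestamp, pid) requests
        self.replies = set()        # peers that acknowledged our request

    def request(self):
        # Rule 1: timestamp the request, queue it, send it to all peers.
        self.clock += 1
        heapq.heappush(self.queue, (self.clock, self.pid))
        self.replies.clear()
        for p in self.peers:
            self.send(p, ("REQUEST", self.clock, self.pid))

    def on_request(self, ts, sender):
        # Rule 2: enqueue the foreign request and reply.
        self.clock = max(self.clock, ts) + 1
        heapq.heappush(self.queue, (ts, sender))
        self.send(sender, ("REPLY", self.clock, self.pid))

    def on_reply(self, sender):
        self.replies.add(sender)

    def may_enter(self):
        # Rule 3: our request heads every queue copy we can see locally
        # (it is at the front of ours) and all peers have replied.
        return (self.queue
                and self.queue[0][1] == self.pid
                and self.replies == set(self.peers))

    def release(self):
        # Rule 4: drop our own request (at the front, since we entered)
        # and broadcast the release.
        heapq.heappop(self.queue)
        for p in self.peers:
            self.send(p, ("RELEASE", self.pid))

    def on_release(self, sender):
        # Rule 5: remove the releasing process's request from our queue.
        self.queue = [r for r in self.queue if r[1] != sender]
        heapq.heapify(self.queue)
```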
This document introduces developing a Scala DSL for Apache Camel. It discusses using Scala features like implicit conversions, passing functions as parameters, and by-name parameters to build a DSL. It provides examples of simple routes in the Scala DSL and compares them to Java. It also covers tooling for Scala in Maven and Eclipse and caveats like interacting with Java generics. The goal is to learn basic Scala concepts and syntax for building a Scala DSL, using Camel as an example.
Megastore: providing scalable, highly available storage for interactive services – João Gabriel Lima
The document describes Megastore, a storage system developed by Google to meet the requirements of interactive online services. Megastore blends the scalability of NoSQL databases with the features of relational databases. It uses partitioning and synchronous replication across datacenters using Paxos to provide strong consistency and high availability. Megastore has been widely deployed at Google to handle billions of transactions daily storing nearly a petabyte of data across global datacenters.
The Google File System is a scalable distributed file system designed to meet the rapidly growing data storage needs of Google. It provides fault tolerance on inexpensive commodity hardware and high aggregate performance to large numbers of clients. Key aspects of its design include handling frequent component failures as the norm, managing huge files up to multiple gigabytes in size containing many objects, optimizing for file appending and sequential reads of appended data, and co-designing the file system interface to increase flexibility for applications. The largest deployment to date includes over 1,000 storage nodes providing hundreds of terabytes of storage.
High Availability of Services in Wide-Area Shared Computing Networks – Mário Almeida
(Check my blog @ http://www.marioalmeida.eu/ )
Highly available distributed systems have been widely used and have proven to be resistant to a wide range of faults. Although these kinds of services are easy to access, they require an investment that developers might not always be willing to make. We present an overview of wide-area shared computing networks as well as methods to provide high availability of services in such networks. We make some references to highly available systems that were in use and under study at the time this paper was written (2012).
Resource Overbooking and Application Profiling in Shared ... – webhostingguy
This document proposes techniques for overbooking CPU and network resources in shared hosting platforms running third-party applications. It introduces techniques to profile applications on dedicated nodes to estimate their resource needs, which can then be used to guide placement of application components onto shared nodes in an overbooked manner. The paper demonstrates that controlled overbooking of resources by small amounts, such as 1-5%, can significantly increase cluster utilization and the number of supported applications, while still providing performance guarantees to applications through commonly used QoS allocation mechanisms.
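The arithmetic behind controlled overbooking is straightforward: reserve a high percentile of each application's profiled usage instead of its observed peak, and pack components onto a node until those reservations fill its capacity. A toy sketch of that idea follows; the profile data and the 5% tolerance are illustrative, not the paper's actual parameters.

```python
import numpy as np

def reservation(cpu_samples, overbook_pct=5):
    """Reserve the (100 - overbook_pct)th percentile of profiled usage
    rather than the peak, tolerating overload overbook_pct% of the time."""
    return np.percentile(cpu_samples, 100 - overbook_pct)

def fits(node_capacity, placed_reservations, candidate_reservation):
    """Admit a component onto a node if percentile reservations still fit."""
    return sum(placed_reservations) + candidate_reservation <= node_capacity

# Illustrative profile: an app that bursts toward 0.9 CPU only ~3% of the time.
profile = np.r_[np.random.uniform(0.05, 0.2, 970),
                np.random.uniform(0.7, 0.9, 30)]
print(round(reservation(profile), 2))   # ~0.2 reserved, not the ~0.9 peak
```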
The real time publisher subscriber inter-process communication model for dist... – yancha1973
1) The document proposes a real-time publisher/subscriber model for inter-process communication in distributed real-time systems.
2) In the model, processes publish and subscribe to messages using logical handles called distribution tags, without knowledge of senders/receivers.
3) An application programming interface is presented that allows processes to create/destroy tags, publish/receive messages, and query senders/receivers (a toy version of such an interface is sketched just after this list).
4) The model is fault-tolerant, supporting applications like clock synchronization across nodes and allowing processes to be upgraded online.
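A minimal Python version of a distribution-tag interface like the one described in point 3 might look as follows; the names and message shapes here are illustrative, not the paper's actual API.

```python
from collections import defaultdict

class TagBus:
    """Toy publish/subscribe broker keyed by logical distribution tags,
    so publishers and subscribers never learn each other's identities."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # tag -> delivery callbacks

    def create_tag(self, tag):
        self.subscribers.setdefault(tag, [])

    def destroy_tag(self, tag):
        self.subscribers.pop(tag, None)

    def subscribe(self, tag, callback):
        self.subscribers[tag].append(callback)

    def publish(self, tag, message):
        # Delivery here is simple FIFO fan-out; the paper's model layers
        # real-time guarantees and fault tolerance on top of this idea.
        for deliver in self.subscribers.get(tag, []):
            deliver(message)

bus = TagBus()
bus.create_tag("clock-sync")
bus.subscribe("clock-sync", lambda m: print("node received", m))
bus.publish("clock-sync", {"master_time": 1234567})
```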
Yahoo is building a cloud infrastructure to provide scalable data storage and processing services. This will make application development and maintenance easier by handling issues like scaling, data partitioning, replication, and hardware provisioning centrally. The cloud aims to optimize human productivity and allow applications to rapidly develop and evolve on top of core services. It will provide a common infrastructure for Yahoo sites and services.
Force.com is Salesforce's multitenant application development platform. It uses a metadata-driven architecture that separates application metadata from tenant data and customizations. This allows Force.com to efficiently generate virtual application components at runtime for each tenant. Force.com stores all application data in a few large database tables and uses metadata and pivot tables to map this data to each tenant's virtual database structures. Key aspects of Force.com's architecture include its metadata-driven data model, APIs for application development and processing, full-text search engine, and Apex programming language.
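The heart of that design is indirection: every tenant's rows live in a few generic tables, and per-tenant metadata decides what each flex column means. The following highly simplified Python illustration is hypothetical; Force.com's real schema and pivot tables are far more involved.

```python
# Metadata: each tenant maps logical field names to slots in a shared table.
metadata = {
    "tenant_a": {"object": "Invoice", "fields": {"amount": "val0", "due": "val1"}},
    "tenant_b": {"object": "Ticket",  "fields": {"priority": "val0"}},
}

# One wide, generic data table shared by all tenants.
data_rows = [
    {"tenant": "tenant_a", "guid": "001", "val0": "250.00", "val1": "2008-06-01"},
    {"tenant": "tenant_b", "guid": "002", "val0": "high"},
]

def read(tenant, guid):
    """Materialize a tenant's 'virtual' record from the generic row."""
    fields = metadata[tenant]["fields"]
    row = next(r for r in data_rows
               if r["tenant"] == tenant and r["guid"] == guid)
    return {name: row.get(slot) for name, slot in fields.items()}

print(read("tenant_a", "001"))   # {'amount': '250.00', 'due': '2008-06-01'}
```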
The Google File System is a scalable distributed file system designed to meet the rapidly growing data storage needs of Google. It provides fault tolerance on inexpensive commodity hardware and high aggregate performance to large numbers of clients. The key design drivers were the assumptions that components often fail, files are huge, writes are append-only, and concurrent appending is important. The system has a single master that manages metadata and assigns chunks to chunkservers, which store replicated file chunks. Clients communicate directly with chunkservers to read and write large, sequentially accessed files in chunks of 64MB.
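One concrete consequence of the fixed 64 MB chunk size is that a client can translate any file offset into a (chunk index, offset-within-chunk) pair locally, before asking the master where that chunk's replicas live. A minimal sketch:

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, as in GFS

def locate(byte_offset):
    """Map a file byte offset to (chunk index, offset within that chunk)."""
    return divmod(byte_offset, CHUNK_SIZE)

# A read at byte 200,000,000 falls in chunk 2 (the third chunk):
chunk_index, chunk_offset = locate(200_000_000)
print(chunk_index, chunk_offset)   # 2 65782272
```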
Improving Utilization of Infrastructure Cloud – IJASCSE
This document summarizes a research paper on improving the utilization of infrastructure clouds. The paper discusses how Infrastructure as a Service (IaaS) clouds provide on-demand access to computing resources but must overprovision to do so. The research combines on-demand allocation with opportunistic provisioning of idle cloud nodes to other processes through backfill virtual machines, allowing better utilization of resources while still providing access when needed. The paper outlines related work and the proposed methodology, using location tracking and efficient request processing through Hadoop, and presents the resulting system design and interfaces.
This document discusses ensuring high availability and data security with Datacore software. It notes that companies' data is their most important asset and infrastructures have become more dynamic with server virtualization. As a result, storage systems and networks must also adapt quickly. Datacore provides a future-proof and flexible solution to ensure 24/7 availability even during operations, maintenance, extensions or migrations. It allows for central management and virtualization of storage resources across systems and locations for high performance, security and simplicity.
Space-Based Architecture is a software architecture pattern that achieves linear scalability through stateful, high-performance applications using distributed processing units (PUs). Each PU contains business logic, data, and messaging to process end-to-end business transactions. The PUs scale horizontally by adding more units and utilize in-memory data grids, messaging grids, and deployment managers to replicate data changes and distribute workload. While effective for web applications with variable loads, it is a complex pattern and not suited for large relational databases or datasets.
Spring is the most popular and productive enterprise Java development framework in the world, and has always provided developers with portability and choice. The cloud should be no different. Spring applications work flawlessly on all the major platform-as-a-service clouds including Heroku, Google App Engine, and Cloud Foundry. This session will focus on how to design, and create, modern enterprise applications using Spring 3 that are portable across cloud environments.
Developing Network-Friendly Applications outlines strategies for optimizing the user experience of mobile applications that connect via networks. It discusses how mobile networks work and limitations developers should consider. Key recommendations include making few connections, concatenating data transfers, caching assets, using compressed formats, optimized codecs, transmitting only processable data, and conserving battery life. Instant messaging apps in particular are noted as challenging to optimize due to frequent updates that can drain batteries without benefits.
High level programming of embedded hard real-time devices – Mr. Chanuwan
This document describes a new Java virtual machine called Fiji VM that targets real-time embedded systems. The Fiji VM ahead-of-time compiles Java bytecode to C code to run on embedded hardware with minimal overhead. Evaluation shows the Fiji VM can achieve throughput close to C for a collision detection benchmark on a LEON3 processor, while still meeting hard real-time deadlines through the use of a concurrent, real-time garbage collector.
Scaling choreographies for the internet of the future – choreos
The CHOReOS project aims to develop middleware to support large-scale choreographies (distributed service compositions) for the future internet. This will involve integrating distributed service bus, grid/cloud, and pervasive networking technologies to allow choreographies of thousands of services used by millions of users. Key challenges include achieving the necessary scalability levels and addressing the resource constraints of devices in heterogeneous pervasive networks. The proposed middleware architecture builds on existing technologies from partners to address composition, execution, deployment of large choreographies across heterogeneous networks and infrastructure.
The document discusses server virtualization and consolidation in enterprise data centers. It notes that many servers are underutilized but some become overloaded during peaks, and server consolidation aims to increase utilization while maintaining performance. Two main virtualization technologies are hypervisor-based (e.g. VMware, Xen) and operating system-level (e.g. OpenVZ, Linux VServer). The document evaluates the performance and scalability of a multi-tier application running on these virtualization platforms under different consolidation scenarios. It also examines the impact on underlying system metrics to understand virtualization overhead.
Modern Enterprise Software Systems (MESS) is all about envisioning, developing, managing and evolving enterprise applications to fulfill business requirements. This may entail many challenges like a rapidly changing business scenario, increase in complexity, shorter time to market and business agility. In order to deal with this natural evolution, achieving modularity across MESS is essential. In this paper, we describe by way of an example application some of the common problems encountered while delivering and managing enterprise software. We demonstrate that one of the root causes for these is inadequate support for modularity at the physical level, viz. packaging and deployment. We look at the different options available for extending modularity across packaging and deployment, e.g. Impala and the Open Service Gateway initiative (OSGi). Based on our explorations and experiments we provide a comparison between the two. We conclude the paper with a note on future directions for physical modularity.
This document discusses event driven architecture (EDA) and domain driven design. It begins with an introduction to the speaker and an overview of EDA basics. It then describes problems with traditional SOA implementations, where domain logic gets split across many systems. The document proposes that exposing domain events on a shared event bus allows isolating cross-cutting functions to separate systems while keeping domain logic together. It provides examples of how this approach improves scalability and decouples systems. Finally, it outlines potential business benefits of using EDA like enabling complex event processing, business process management, and business activity monitoring on top of the domain events.
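The decoupling argument is easy to see in miniature: the domain code records only that something happened, while cross-cutting consumers such as audit or business activity monitoring subscribe independently. The sketch below is a generic illustration of that idea, not the speaker's actual design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class OrderShipped:
    """A domain event: a fact about the business, not a command."""
    order_id: str
    customer_id: str

class EventBus:
    def __init__(self):
        self.handlers: list[Callable] = []

    def subscribe(self, handler: Callable) -> None:
        self.handlers.append(handler)

    def publish(self, event) -> None:
        for handler in self.handlers:
            handler(event)

bus = EventBus()
# Cross-cutting concerns live in their own subscribers, not in the domain:
bus.subscribe(lambda e: print(f"[audit] {e}"))
bus.subscribe(lambda e: print(f"[BAM] shipment metric for order {e.order_id}"))

# The domain model just publishes the fact and stays free of those concerns.
bus.publish(OrderShipped(order_id="42", customer_id="c-7"))
```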
The document discusses trends and challenges facing information technology, including building a civic semantic web and waiving rights over linked data. It also discusses whether semantic technologies could permit meaningful brand relationships. The document contains a chart of UK government department spending, led by the Department of Health at £105.7 billion, followed by the NHS at £90.7 billion and local and regional government at £34.3 billion.
genpaxospublic-090703114743-phpapp01.pdf – Hiroshi Ono
This document summarizes an Erlang meeting held on July 3, 2009 in Tokyo. It discusses the gen_paxos Erlang module, which implements the Paxos consensus algorithm. Paxos is needed to solve problems like split-brains where data could become inconsistent without coordination between nodes. The document explains the key aspects of Paxos like its phases, data model in gen_paxos, and how nodes communicate through message passing in Erlang. It also provides references to related works and papers about Paxos.
pragmaticrealworldscalajfokus2009-1233251076441384-2.pdf – Hiroshi Ono
The document discusses Scala and functional programming concepts. It provides examples of building a chat application in 30 lines of code using Lift, defining case classes and actors for messages. It summarizes that Scala is a pragmatically oriented, statically typed language that runs on the JVM and has a unique blend of object-oriented and functional programming. Functional programming concepts like immutable data structures, functions as first-class values, and for-comprehensions are demonstrated with examples in Scala.
This document is the introduction to "The Little Book of Semaphores" by Allen B. Downey. It provides an overview of the book, which uses examples and puzzles to teach synchronization concepts and patterns. The book aims to give students more practice with these challenging concepts than a typical operating systems course allows. It also discusses the book's licensing as free and open source documentation.
This document provides style guidelines for Scala developers at Twitter. It outlines recommendations for imports, implicit usage, reflection, comments, whitespace, logging, project layout, variable naming conventions, and ends by thanking people for attending.
stateyouredoingitwrongjavaone2009-090617031310-phpapp02.pdf – Hiroshi Ono
The document discusses alternative concurrency paradigms to shared-state concurrency for the JVM, including software transactional memory which allows transactions over shared memory, message passing concurrency using the actor model where actors communicate asynchronously via message passing, and dataflow concurrency where variables can only be assigned once. It provides examples of how these paradigms can be used to implement solutions like transferring funds between bank accounts more elegantly than with shared-state concurrency and locks.
This document discusses using TCP/IP for high performance computing (HPC) applications. It finds that while TCP/IP can achieve bandwidth of 1 Gbps over short distances with low latency, the bandwidth degrades significantly over wide area networks with higher latency. It investigates tuning TCP parameters like socket buffer sizes to improve performance over high latency networks.
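That tuning follows directly from the bandwidth-delay product: to keep a 1 Gbps path with a 50 ms round-trip time full, a socket must be able to hold about 6.25 MB in flight. A small sketch using the standard Python socket API; the numbers are illustrative, and the operating system may clamp the values you request.

```python
import socket

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return int(bandwidth_bps * rtt_seconds / 8)

buf = bdp_bytes(1_000_000_000, 0.050)   # 1 Gbps, 50 ms RTT -> 6,250,000 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf)
# The kernel may round or cap these; check what was actually granted:
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
```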
Martin Odersky outlines the growth and adoption of Scala over the past 6 years and discusses Scala's future direction over the next 5 years. Key points include:
- Scala has grown from its first classroom use in 2003 to filling a full day of talks at JavaOne in 2009 and developing a large user community.
- Scala 2.8 will include new collections, package objects, named/default parameters, and improved tool support.
- Over the next 5 years, Scala will focus on concurrency and parallelism features at all levels from primitives to tools.
- Other areas of focus include extended libraries, performance improvements, and standardized compiler plugin architecture.
stateyouredoingitwrongjavaone2009-090617031310-phpapp02.pdf – Hiroshi Ono
This document discusses alternative concurrency paradigms for the Java Virtual Machine (JVM). It begins with an agenda and discusses how Moore's Law no longer solves concurrency problems as processors are becoming multi-core. It then discusses the problems with shared-state concurrency and how separating identity and value can help. The document introduces software transactional memory, message passing concurrency using actors, and dataflow concurrency as alternative paradigms. It uses examples of bank account transfers to demonstrate how these paradigms can be implemented and discusses their advantages over shared-state concurrency.
This document contains the schedule for a conference with sessions on various topics in natural language processing and computational linguistics. The conference will take place from September 14-16. Each day consists of morning and afternoon sessions split into parallel tracks (1a and 1b). Sessions cover areas like semantics, parsing, sentiment analysis, and more. Keynote speakers include Ricardo Baeza-Yates, Kevin Bretonnel Cohen, Mirella Lapata, Shalom Lappin, and Massimo Poesio. Presentations are 20 minutes each with coffee breaks in the mornings and poster sessions in the afternoons.
genpaxospublic-090703114743-phpapp01.pdf – Hiroshi Ono
This document summarizes a presentation on Paxos and gen_paxos. It introduces Paxos as a distributed consensus algorithm that is robust to network failures and allows data replication across multiple nodes. It then describes the gen_paxos Erlang implementation of Paxos, including its data model, state machine approach, and messaging between nodes. Key aspects of Paxos like the prepare and propose phases are explained through examples. The document also provides context on applications of Paxos and references for further reading.
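Both phases rest on one small invariant kept by every acceptor: never go back on a promise. The following minimal single-decree acceptor is written in Python for illustration (gen_paxos itself is Erlang), with hypothetical message shapes.

```python
class Acceptor:
    """Single-decree Paxos acceptor: the promise/accept rules only."""

    def __init__(self):
        self.promised_n = -1        # highest ballot we promised to honor
        self.accepted_n = -1        # ballot of the value we accepted, if any
        self.accepted_value = None

    def on_prepare(self, n):
        # Phase 1: promise to ignore ballots below n, reporting any value
        # already accepted so the proposer must carry it forward.
        if n > self.promised_n:
            self.promised_n = n
            return ("promise", n, self.accepted_n, self.accepted_value)
        return ("nack", self.promised_n)

    def on_accept(self, n, value):
        # Phase 2: accept unless it would violate an outstanding promise.
        if n >= self.promised_n:
            self.promised_n = n
            self.accepted_n = n
            self.accepted_value = value
            return ("accepted", n)
        return ("nack", self.promised_n)
```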
pragmaticrealworldscalajfokus2009-1233251076441384-2.pdf – Hiroshi Ono
The document discusses Scala and functional programming concepts. It provides examples of building a chat application in 30 lines of code using Lift, defining messages as case classes, and implementing a chat server and comet component. It then summarizes that Scala is a pragmatically-oriented, statically typed language that runs on the JVM and provides a unique blend of object-oriented and functional programming. Traits allow for static and dynamic mixin-based composition. Functional programming concepts like immutable data structures, higher-order functions, and for-comprehensions are discussed.
This document provides style guidelines for Scala projects at Twitter. It outlines recommendations for imports, implicit usage, reflection, comments, whitespace, logging, project layout, variable naming conventions, and more. The document encourages placing all imports at the top of files, avoiding implicits and reflection when possible, writing Scaladoc comments for all classes and non-trivial methods, using 2 space indentation with no tabs, and following Maven directory conventions.
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize our carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests, and test automation can be used to speed up testing.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Building Production Ready Search Pipelines with Spark and Milvus – Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
GraphRAG for Life Science to increase LLM accuracy – Tomaz Bratanic
GraphRAG for the life science domain: retrieving information from biomedical knowledge graphs with LLMs to increase the accuracy and performance of generated answers.
Generating privacy-protected synthetic data using Secludy and Milvus – Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Programming Foundation Models with DSPy - Meetup Slides – Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Driving Business Innovation: Latest Generative AI Advancements & Success Story – Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Infrastructure Challenges in Scaling RAG with Custom AI models – Zilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Full-RAG: A modern architecture for hyper-personalization – Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
HCL Notes and Domino license cost reduction in the world of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to solve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary spending, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Monitoring and Managing Anomaly Detection on OpenShift.pdf – Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models (a toy detector is sketched below).
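Before wiring up the full pipeline, it helps to see how small the core detection logic can be. The rolling z-score detector below is purely illustrative and unrelated to the tutorial's actual models; the window size and threshold are arbitrary.

```python
from collections import deque
import statistics

class ZScoreDetector:
    """Flag readings that sit far outside the recent rolling window."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        is_anomaly = False
        if len(self.window) >= 10:                   # need a baseline first
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window) or 1e-9
            is_anomaly = abs(value - mean) / stdev > self.threshold
        self.window.append(value)
        return is_anomaly

detector = ZScoreDetector()
readings = [20.1, 20.3, 19.8] * 10 + [35.0]          # a sudden spike
flags = [detector.observe(r) for r in readings]
print(flags[-1])   # True: the spike is flagged
```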
pnuts
PNUTS: Yahoo!'s Hosted Data Serving Platform
Brian F. Cooper, Raghu Ramakrishnan, Utkarsh Srivastava, Adam Silberstein, Philip Bohannon, Hans-Arno Jacobsen, Nick Puz, Daniel Weaver and Ramana Yerneni
Yahoo! Research
Author emails: {cooperb, ramakris, utkarsh, silberst, plb, nickpuz, dweaver, yerneni}@yahoo-inc.com. Hans-Arno Jacobsen's current affiliation: University of Toronto, jacobsen@eecg.toronto.edu.

ABSTRACT
We describe PNUTS, a massively parallel and geographically distributed database system for Yahoo!'s web applications. PNUTS provides data storage organized as hashed or ordered tables, low latency for large numbers of concurrent requests including updates and queries, and novel per-record consistency guarantees. It is a hosted, centrally managed, and geographically distributed service, and utilizes automated load-balancing and failover to reduce operational complexity. The first version of the system is currently serving in production. We describe the motivation for PNUTS and the design and implementation of its table storage and replication layers, and then present experimental results.

1. INTRODUCTION
Modern web applications present unprecedented data management challenges, even for relatively "simple" tasks like managing session state, content meta-data, and user-generated content such as tags and comments. The foremost requirements of a web application are scalability, consistently good response time for geographically dispersed users, and high availability. At the same time, web applications can frequently tolerate relaxed consistency guarantees. We now examine these requirements in more detail.

Scalability. For popular applications such as Flickr and del.icio.us, the need for a scalable data engine is obvious [4]. We want not only architectural scalability, but the ability to scale during periods of rapid growth by adding resources with minimal operational effort and minimal impact on system performance.

Response Time and Geographic Scope. A fundamental requirement is that applications must consistently meet Yahoo!'s internal SLAs for page load time, placing stringent response time requirements on the data management platform. Given that web users are scattered across the globe, it is critical to have data replicas on multiple continents for low-latency access. Consider social network applications—alumni of a university in India may reside in North America and Europe as well as Asia, and a particular user's data may be accessed both by the user from his home in London as well as by his friends in Mumbai and San Francisco. Ideally, the data platform should guarantee fast response times to geographically distributed users, even under rapidly changing load conditions brought on by flash crowds, denial of service attacks, etc.

High Availability and Fault Tolerance. Yahoo! applications must provide a high degree of availability, with application-specific trade-offs in the degree of fault tolerance required and the degree of consistency that is deemed acceptable in the presence of faults; e.g., all applications want to be able to read data in the presence of failures, while some insist on also being able to write in the presence of failures, even at the cost of risking some data consistency. Downtime means money is lost. If we cannot serve ads, Yahoo! does not get paid; if we cannot render pages, we disappoint users. Thus service must continue in the face of a variety of failures, including server failures, network partitions and the loss of power in a co-location facility.

Relaxed Consistency Guarantees. Traditional database systems have long provided us with a well-understood model for reasoning about consistency in the presence of concurrent operations, namely serializable transactions [5]. However, there is a tradeoff between performance and availability on the one hand and consistency on the other, and it has repeatedly been observed that supporting general serializable transactions over a globally-replicated and distributed system is very expensive [17, 2, 1]. Thus, given our stringent performance and availability requirements, achieving serializability for general transactions is impractical. Moreover, based on our experience with many web applications at Yahoo!, general transactions are also typically unnecessary, since these applications tend to manipulate only one record at a time. For example, if a user changes an avatar, posts new pictures, or invites several friends to connect, little harm is done if the new avatar is not initially visible to one friend, etc. (especially if such anomalies are rare).

Given that serializability of general transactions is inefficient and often unnecessary, many distributed replicated systems go to the extreme of providing only eventual consistency [23, 12]: a client can update any replica of an object, and all updates to an object will eventually be applied, but potentially in different orders at different replicas.
However, such an eventual consistency model is often too weak and hence inadequate for web applications, as the following example illustrates:

Example 1. Consider a photo sharing application that allows users to post photos and control access. For simplicity of exposition, let us assume that each user's record contains both a list of their photos, and the set of people allowed to view those photos. In order to show Alice's photos to Bob, the application reads Alice's record from the database, determines if Bob is in the access list, and then uses the list of Alice's photos to retrieve the actual photo files from a separate serving store. A user wishes to do a sequence of two updates to his record:
U1: Remove his mother from the list of people who can view his photos
U2: Post spring-break photos

Under the eventual consistency model, update U1 can go to replica R1 of the record, while U2 might go to replica R2. Even though the final states of the replicas R1 and R2 are guaranteed to be the same (the eventual consistency guarantee), at R2, for some time, a user is able to read a state of the record that never should have existed: the photos have been posted, but the change in access control has not taken place. This anomaly breaks the application's contract with the user. Note that this anomaly arises because replicas R1 and R2 apply updates U1 and U2 in opposite orders, i.e., replica R2 applies update U2 against a stale version of the record.

As these examples illustrate, it is often acceptable to read (slightly) stale data, but occasionally stronger guarantees are required by applications.
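To make the anomaly concrete, here is a minimal, self-contained Python simulation (illustrative only; the record layout and update functions are invented, not PNUTS code) of two replicas applying U1 and U2 in opposite orders:

```python
# Minimal simulation of the Example 1 anomaly under eventual consistency:
# both replicas converge, but R2 briefly exposes a state that should
# never have existed.

def u1(record):
    # U1: remove Mom from the access list
    record["access"] = [p for p in record["access"] if p != "mom"]

def u2(record):
    # U2: post the spring-break photos
    record["photos"].append("spring-break")

initial = {"access": ["mom", "friends"], "photos": []}
r1 = {k: list(v) for k, v in initial.items()}
r2 = {k: list(v) for k, v in initial.items()}

u1(r1); u2(r1)                  # replica R1 applies U1 then U2
u2(r2)                          # replica R2 applies U2 first...
print("R2 intermediate:", r2)   # photos posted, but Mom can still view them
u1(r2)                          # ...and U1 later; the final states converge
assert r1 == r2
```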
1.1 PNUTS Overview
We are building the PNUTS system, a massive-scale, hosted database system to support Yahoo!'s web applications. Our focus is on data serving for web applications, rather than complex queries, e.g., offline analysis of web crawls. We now summarize the key features and architectural decisions in PNUTS.

Data Model and Features. PNUTS exposes a simple relational model to users, and supports single-table scans with predicates. Additional features include scatter-gather operations, a facility for asynchronous notification of clients and a facility for bulk loading.

Fault Tolerance. PNUTS employs redundancy at multiple levels (data, metadata, serving components, etc.) and leverages our consistency model to support highly available reads and writes even after a failure or partition.

Pub-Sub Message System. Asynchronous operations are carried out over a topic-based pub/sub system called Yahoo! Message Broker (YMB), which, together with PNUTS, is part of Yahoo!'s Sherpa data services platform. We chose pub/sub over other asynchronous protocols (such as gossip [12]) because it can be optimized for geographically distant replicas and because replicas do not need to know the location of other replicas.

Record-level Mastering. To meet response-time goals, PNUTS cannot use the write-all replication protocols that are employed by systems deployed in localized clusters [8, 15]. However, not every read of the data necessarily needs to see the most current version. We have therefore chosen to make all high-latency operations asynchronous, and to support record-level mastering. Synchronously writing to multiple copies around the world can take hundreds of milliseconds or more, while the typical latency budget for the database portion of a web request is only 50-100 milliseconds. Asynchrony allows us to satisfy this budget despite geographic distribution, while record-level mastering allows most requests, including writes, to be satisfied locally.

Hosting. PNUTS is a hosted, centrally managed database service shared by multiple applications. Providing data management as a service significantly reduces application development time, since developers do not have to architect and implement their own scalable, reliable data management solutions. Consolidating multiple applications onto a single service allows us to amortize operations costs over multiple applications, and apply the same best practices to the data management of many different applications. Moreover, having a shared service allows us to keep resources (servers, disks, etc.) in reserve and quickly assign them to applications experiencing a sudden upsurge in popularity.

1.2 Contributions
In this paper, we present the design and functionality of PNUTS, as well as the key protocols and algorithms used to route queries and coordinate the growth of the massive data store. In order to meet the requirements for a web data platform, we have made several fundamental—and sometimes radical—design decisions:
• An architecture based on record-level, asynchronous geographic replication, and use of a guaranteed message-delivery service rather than a persistent log.
• A consistency model that offers applications transactional features but stops short of full serializability.
• A careful choice of features to include (e.g., hashed and ordered table organizations, flexible schemas) or exclude (e.g., limits on ad hoc queries, no referential integrity or serializable transactions).
• Delivery of data management as a hosted service.

We discuss these choices, and present the results of an initial performance study of PNUTS. The first version of our system is being used in production to support social web and advertising applications. As the system continues to mature, and some additional features (see Section 6) become available, it will be used for a variety of other applications.

2. FUNCTIONALITY
In this section we briefly present the functionality of PNUTS, and point out ways in which it is limited by our desire to keep the system as simple as possible while still meeting the key requirements of Yahoo! application developers. We first outline the data and query model, then present the consistency model and notification model, and end with a brief discussion of the need for efficient bulk loading.
2.1 Data and Query Model
PNUTS presents a simplified relational data model to the user. Data is organized into tables of records with attributes. In addition to typical data types, "blob" is a valid data type, allowing arbitrary structures inside a record, but not necessarily large binary objects like images or audio. (We observe that blob fields, which are manipulated entirely in application logic, are used extensively in practice.) Schemas are flexible: new attributes can be added at any time without halting query or update activity, and records are not required to have values for all attributes.

The query language of PNUTS supports selection and projection from a single table. Updates and deletes must specify the primary key. While restrictive compared to relational systems, single-table queries in fact provide very flexible access compared to distributed hash [12] or ordered [8] data stores, and present opportunities for future optimization by the system (see Section 3.3.1). Consider again our hypothetical social networking application: A user may update her own record, resulting in point access. Another user may scan a set of friends in order by name, resulting in range access. PNUTS allows applications to declare tables to be hashed or ordered, supporting both workloads efficiently.

Our system is designed primarily for online serving workloads that consist mostly of queries that read and write single records or small groups of records. Thus, we expect most scans to be of just a few tens or hundreds of records, and optimize accordingly. Scans can specify predicates which are evaluated at the server. Similarly, we provide a "multiget" operation which supports retrieving multiple records (from one or more tables) in parallel by specifying a set of primary keys and an optional predicate, but again expect that the number of records retrieved will be a few thousand at most.

Our system, regrettably, also does not enforce constraints such as referential integrity, although this would be very desirable. The implementation challenges in a system with fine-grained asynchrony are significant, and require future work. Another missing feature is complex ad hoc queries (joins, group-by, etc.). While improving query functionality is a topic of future work, it must be accomplished in a way that does not jeopardize the response-time and availability currently guaranteed to the more "transactional" requests of web applications. In the shorter term, we plan to provide an interface for both Hadoop, an open source implementation of MapReduce [11], and Pig [21], to pull data out of PNUTS for analysis, much as MapReduce pulls data out of BigTable [8].
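As a rough illustration of this query model, here is a toy in-memory stand-in for a single PNUTS table; the class and method names are hypothetical, since the paper does not spell out the client API:

```python
# Hypothetical sketch of the single-table query model: point gets,
# multiget over a set of primary keys, and scans with a predicate
# evaluated "at the server". Everything here is an illustrative stand-in.

class PnutsTableClient:
    def __init__(self):
        self.records = {}  # primary key -> dict of attributes

    def set(self, key, record):
        self.records[key] = record

    def get(self, key):
        return self.records.get(key)

    def multiget(self, keys, predicate=None):
        # The real system retrieves these in parallel; we do it serially.
        out = {}
        for k in keys:
            rec = self.records.get(k)
            if rec is not None and (predicate is None or predicate(rec)):
                out[k] = rec
        return out

    def scan(self, predicate):
        # Single-table scan; flexible schemas mean attributes may be absent.
        return [r for r in self.records.values() if predicate(r)]

users = PnutsTableClient()
users.set("alice", {"name": "Alice", "country": "UK"})
users.set("bob", {"name": "Bob", "country": "IN"})
print(users.multiget(["alice", "bob"],
                     predicate=lambda r: r.get("country") == "IN"))
```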
2.2 Consistency Model: Hiding the Complexity of Replication
PNUTS provides a consistency model that is between the two extremes of general serializability and eventual consistency. Our model stems from our earlier observation that web applications typically manipulate one record at a time, while different records may have activity with different geographic locality. We provide per-record timeline consistency: all replicas of a given record apply all updates to the record in the same order. An example sequence of updates to a record is shown in this timeline:

Insert (v. 1.0) → Update (v. 1.1) → Update (v. 1.2) → Delete (v. 1.3) → Insert (v. 2.0) → Update (v. 2.1) → Update (v. 2.2)

In this diagram, the events on the timeline are inserts, updates and deletes for a particular primary key. The intervals between an insert and a delete represent times when the record is physically present in the database. A read of any replica will return a consistent version from this timeline, and replicas always move forward in the timeline. This model is implemented as follows. One of the replicas is designated as the master, independently for each record, and all updates to that record are forwarded to the master. The master replica for a record is adaptively changed to suit the workload: the replica receiving the majority of write requests for a particular record becomes the master for that record. The record carries a sequence number that is incremented on every write. As shown in the diagram, the sequence number consists of the generation of the record (each new insert is a new generation) and the version of the record (each update of an existing record creates a new version). Note that we (currently) keep only one version of a record at each replica.

Using this per-record timeline consistency model, we support a whole range of API calls with varying levels of consistency guarantees.
• Read-any: Returns a possibly stale version of the record. However, unlike Example 1, the returned record is always a valid one from the record's history. Note that this call departs from strict serializability since with this call, even after doing a successful write, it is possible to see a stale version of the record. Since this call has lower latency than other read calls with stricter guarantees (described next), it provides a way for the application to explicitly indicate, on a per-read basis, that performance matters more than consistency. For example, in a social networking application, for displaying a user's friend's status, it is not absolutely essential to get the most up-to-date value, and hence read-any can be used.
• Read-critical(required version): Returns a version of the record that is strictly newer than, or the same as, the required version. A typical application of this call is when a user writes a record, and then wants to read a version of the record that definitely reflects his changes. Our write call returns the version number of the record written, and hence the desired read guarantee can be enforced by using a read-critical with required version set to the version returned by the write.
• Read-latest: Returns the latest copy of the record that reflects all writes that have succeeded. Note that read-critical and read-latest may have a higher latency than read-any if the local copy is too stale and the system needs to locate a newer version at a remote replica.
• Write: This call gives the same ACID guarantees as a transaction with a single write operation in it. This call is useful for blind writes, e.g., a user updating his status on his profile.
• Test-and-set-write(required version): This call performs the requested write to the record if and only if the present version of the record is the same as the required version. This call can be used to implement transactions that first read a record, and then do a write to the record based on the read, e.g., incrementing the value of a counter. The test-and-set write ensures that two such concurrent increment transactions are properly serialized. Such a primitive is a well-known form of optimistic concurrency control [5]. (A sketch of this pattern follows at the end of this section.)

Our API, in contrast to that of SQL, may be criticized for revealing too many implementation details such as sequence numbers. However, revealing these details does allow the application to indicate cases where it can do with some relaxed consistency for higher performance, e.g., read-critical. Similarly, a test-and-set write allows us to implement single-row transactions without any locks, a highly desirable property in distributed systems. Of course, if the need arises, our API can be packaged into the traditional BEGIN TRANSACTION and COMMIT for single-row transactions, at the cost of losing expressiveness. Note that our consistency guarantees are somewhat different than traditional guarantees such as serializable, repeatable read, read committed, snapshot isolation and so on. In particular, we make no guarantees as to consistency for multi-record transactions. Our model can provide serializability on a per-record basis. In particular, if an application reads or writes the same record multiple times in the same "transaction," the application must use record versions to validate its own reads and writes to ensure serializability for the "transaction."

In the future, we plan to augment our consistency model with the following primitives:
• Bundled updates: Consistency guarantees for write operations that span multiple records (see Section 6).
• Relaxed consistency: Under normal operation, if the master copy of a record fails, our system has protocols to fail over to another replica. However, if there are major outages, e.g., the entire region that had the master copy for a record becomes unreachable, updates cannot continue at another replica without potentially violating record-timeline consistency. We will allow applications to indicate, per-table, whether they want updates to continue in the presence of major outages, potentially branching the record timeline. If so, we will provide automatic conflict resolution and notifications thereof. The application will also be able to choose from several conflict resolution policies: e.g., discarding one branch, or merging updates from branches, etc.
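The calls above compose naturally into lock-free read-modify-write loops. The sketch below models the timeline API in plain Python, with versions as (generation, version) pairs as in the timeline diagram; the data structures and method names are illustrative, not the actual PNUTS client library:

```python
# Illustrative model of per-record timeline consistency: a blind write,
# read-latest, and a test-and-set write used for a counter increment.

class TimelineRecord:
    def __init__(self, value):
        self.value = value
        self.version = (1, 0)  # (generation, version) per the timeline model

class TimelineTable:
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        # Blind write: ACID for a single record; bumps the version number.
        rec = self.rows.get(key)
        if rec is None:
            self.rows[key] = TimelineRecord(value)
        else:
            rec.value = value
            rec.version = (rec.version[0], rec.version[1] + 1)
        return self.rows[key].version

    def read_latest(self, key):
        rec = self.rows[key]
        return rec.value, rec.version

    def test_and_set_write(self, key, required_version, value):
        # Write only if the record is still at required_version; this is
        # the lock-free optimistic-concurrency primitive described above.
        rec = self.rows[key]
        if rec.version != required_version:
            return None  # lost the race; caller re-reads and retries
        rec.value = value
        rec.version = (rec.version[0], rec.version[1] + 1)
        return rec.version

def increment_counter(table, key):
    # Two concurrent increments are properly serialized by the retry loop.
    while True:
        value, version = table.read_latest(key)
        if table.test_and_set_write(key, version, value + 1) is not None:
            return

t = TimelineTable()
t.write("page-hits", 0)
for _ in range(3):
    increment_counter(t, "page-hits")
assert t.read_latest("page-hits")[0] == 3
```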
2.3 Notification
Trigger-like notifications are important for applications such as ad serving, which must invalidate cached copies of ads when the advertising contract expires. Accordingly, we allow the user to subscribe to the stream of updates on a table. Notifications are easy to provide given our underlying pub/sub infrastructure (see Section 3.2.1), and thus have the same stringent reliability guarantees as our data replication mechanism.

2.4 Bulk Load
While we emphasize scalability, we seek to support important database system features whenever possible. Bulk loading tools are necessary for applications such as comparison shopping, which upload large blocks of new sale listings into the database every day. Bulk inserts can be done in parallel to multiple storage units for fast loading. In the hash table case, the hash function naturally load balances the inserts across storage units. However, in the ordered table case, bulk inserts of ordered records, records appended to the end of the table's range, or records inserted into already populated key ranges require careful handling to avoid hot spots and ensure high performance. These issues are discussed in [25].
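The load-balancing effect of hashing on bulk inserts is easy to see in a sketch; the partition count and the MD5-based hash below are illustrative choices, not the system's actual hash function:

```python
# Sketch: partitioning a bulk insert by hash of the primary key, so each
# partition can be loaded onto its storage unit in parallel with roughly
# equal batch sizes.

import hashlib
from collections import defaultdict

NUM_TABLETS = 8  # illustrative

def tablet_for(key):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % NUM_TABLETS

def partition_bulk_insert(records):
    batches = defaultdict(list)
    for key, value in records:
        batches[tablet_for(key)].append((key, value))
    return batches  # one batch per tablet; loadable in parallel

records = [(f"listing-{i}", {"price": i}) for i in range(10_000)]
batches = partition_bulk_insert(records)
print([len(batches[t]) for t in sorted(batches)])  # roughly equal sizes
```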
3. SYSTEM ARCHITECTURE
[Figure 1: PNUTS system architecture. Each region contains routers, a tablet controller, and storage units; the regions are connected by the message broker.]

Figure 1 shows the system architecture of PNUTS. The system is divided into regions, where each region contains a full complement of system components and a complete copy of each table. Regions are typically, but not necessarily, geographically distributed. A key feature of PNUTS is the use of a pub/sub mechanism for both reliability and replication. In fact, our system does not have a traditional database log or archive data. Instead, we rely on the guaranteed-delivery pub/sub mechanism to act as our redo log, replaying updates that are lost before being applied to disk due to failure. The replication of data to multiple regions provides additional reliability, obviating the need for archiving or backups. In this section, we first discuss how the components within a region provide data storage and retrieval. We then examine how our pub/sub mechanism, the Yahoo! Message Broker, provides reliable replication and helps with recovery. Then, we examine other aspects of the system, including query processing and notifications. Finally, we discuss how these components are deployed as a hosted database service.

3.1 Data Storage and Retrieval
Data tables are horizontally partitioned into groups of records called tablets. Tablets are scattered across many servers; each server might have hundreds or thousands of tablets, but each tablet is stored on a single server within a region. A typical tablet in our implementation is a few hundred megabytes or a few gigabytes, and contains thousands or tens of thousands of records. The assignment of tablets to servers is flexible, which allows us to balance load by moving a few tablets from an overloaded server to an underloaded server. Similarly, if a server fails, we can divide its recovered tablets over multiple existing or new servers, spreading the load evenly.

Three components in Figure 1 are primarily responsible for managing and providing access to data tablets: the storage unit, the router, and the tablet controller. Storage units store tablets, respond to get() and scan() requests by retrieving and returning matching records, and respond to set() requests by processing the update. Updates are committed by first writing them to the message broker, as described in the next section. The storage unit can use any physical storage layer that is appropriate. For hash tables, our implementation uses a UNIX filesystem-based hash table implemented originally for Yahoo!'s user database. For ordered tables, we use MySQL with InnoDB because it stores records ordered by primary key. Schema flexibility is provided for both storage engines by storing records as parsed JSON objects.

In order to determine which storage unit is responsible for a given record to be read or written by the client, we must first determine which tablet contains the record, and then determine which storage unit has that tablet. Both of these functions are carried out by the router. For ordered tables, the primary-key space of a table is divided into intervals, and each interval corresponds to one tablet. The router stores an interval mapping, which defines the boundaries of each tablet, and also maps each tablet to a storage unit. An example is shown in Figure 2a. This mapping is similar to a very large root node of a B+ tree. In order to find the tablet for a given primary key, we conduct a binary search over the interval mapping to find the tablet enclosing the key. Once we find the tablet, we have also found the appropriate storage server (a code sketch of this lookup appears at the end of this subsection).

[Figure 2: Interval mappings: (a) an ordered table with primary key of type STRING, with tablet boundaries at MIN_STRING, "banana", "grape", "lemon", "peach" and MAX_STRING; (b) a hash table, with tablet boundaries defined by Hash(Primary Key) at 0x0000, 0x102F, 0x4A44, 0x943D, 0xA443 and 0xFFFF. In both cases, tablets 1-5 are assigned to storage units SU 3, SU 1, SU 3, SU 2 and SU 1, respectively.]

For hash-organized tables, we use an n-bit hash function H() that produces hash values 0 ≤ H() < 2^n. The hash space [0, 2^n) is divided into intervals, and each interval corresponds to a single tablet. An example is shown in Figure 2b. To map a key to a tablet, we hash the key, and then search the set of intervals, again using binary search, to locate the enclosing interval and thus the tablet and storage unit. We chose this mechanism, instead of a more traditional linear or extensible hashing mechanism, because of its symmetry with the ordered table mechanism. Thus, we can use the same code to maintain and search interval mappings for both hash and ordered tables.

The interval mapping fits in memory, making it inexpensive to search. For example, our planned scale is about 1,000 servers per region, with 1,000 tablets each. If keys are 100 bytes (which is on the high end for our anticipated applications), the total mapping takes a few hundred megabytes of RAM (containing one key and storage unit address per tablet). Note that with tablets that are 500 MB on average, this corresponds to a database that is 500 terabytes in size. For much larger databases, the mapping may not fit in memory, and we will have to use a mapping that is optimized for disk-based access.

Routers contain only a cached copy of the interval mapping. The mapping is owned by the tablet controller, and routers periodically poll the tablet controller to get any changes to the mapping. The tablet controller determines when it is time to move a tablet between storage units for load balancing or recovery and when a large tablet must be split. In each case, the controller will update the authoritative copy of the mapping. For a short time after a tablet moves or splits, the routers' mappings will be out of date, and requests will be misdirected. A misdirected request results in a storage unit error response, causing the router to retrieve a new copy of the mapping from the controller. Thus, routers have purely soft state; if a router fails, we simply start a new one and do not perform recovery on the failed router. The tablet controller is a single pair of active/standby servers, but the controller is not a bottleneck because it does not sit on the data path.

The primary bottleneck in our system is disk seek capacity on the storage units and message brokers. For this reason, different PNUTS customers are currently assigned different clusters of storage units and message broker machines (as a simple form of quality-of-service and isolation from other customers). Customers can share routers and tablet controllers. In the future, we would like all customers to be able to share all components, so that sudden load spikes can be absorbed across all of the available server machines. However, this requires us to examine flexible quota and admission control mechanisms to ensure each application receives its fair share of the system.
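A router lookup along these lines can be sketched in a few lines of Python; the boundary values mirror Figure 2, while the mapping representation and the 16-bit hash are illustrative assumptions:

```python
# Sketch of the router's interval-mapping lookup: sorted tablet boundaries
# searched with binary search, using the same code path for ordered and
# hash tables.

import bisect
import hashlib

class IntervalMapping:
    def __init__(self, boundaries, storage_units):
        # boundaries: sorted interior tablet boundaries;
        # storage_units: one entry per tablet (len(boundaries) + 1 tablets).
        assert len(storage_units) == len(boundaries) + 1
        self.boundaries = boundaries
        self.storage_units = storage_units

    def lookup(self, point):
        # Binary search for the enclosing interval, as the router does.
        return self.storage_units[bisect.bisect_right(self.boundaries, point)]

# Ordered table (cf. Figure 2a): boundaries are primary-key values.
ordered = IntervalMapping(["banana", "grape", "lemon", "peach"],
                          ["SU 3", "SU 1", "SU 3", "SU 2", "SU 1"])

# Hash table (cf. Figure 2b): boundaries are points in the n-bit hash space.
hashed = IntervalMapping([0x102F, 0x4A44, 0x943D, 0xA443],
                         ["SU 3", "SU 1", "SU 3", "SU 2", "SU 1"])

def hash16(key):
    # Illustrative 16-bit hash; the paper leaves H() unspecified.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) & 0xFFFF

print(ordered.lookup("cherry"))         # between "banana" and "grape" -> SU 1
print(hashed.lookup(hash16("cherry")))  # same search code for the hash table
```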
3.2 Replication and Consistency
Our system uses asynchronous replication to ensure low-latency updates. We use the Yahoo! Message Broker, a publish/subscribe system developed at Yahoo!, both as our replacement for a redo log and as our replication mechanism.

3.2.1 Yahoo! Message Broker
Yahoo! Message Broker (YMB) is a topic-based pub/sub system, which, together with PNUTS, is part of Yahoo!'s Sherpa data services platform. Data updates are considered "committed" when they have been published to YMB. At some point after being committed, the update will be asynchronously propagated to different regions and applied to their replicas. Because replicas may not reflect the latest updates, we have had to develop a consistency model that helps programmers deal with staleness; see Section 2.2.

We are able to use YMB for replication and logging for two reasons. First, YMB takes multiple steps to ensure messages are not lost before they are applied to the database. YMB guarantees that published messages will be delivered to all topic subscribers even in the presence of single broker machine failures. It does this by logging the message to multiple disks on different servers. In our current configuration, two copies are logged initially, and more copies are logged as the message propagates. The message is not purged from the YMB log until PNUTS has verified that the update is applied to all replicas of the database. Second, YMB is designed for wide-area replication: YMB clusters reside in different, geographically separated datacenters, and messages published to one YMB cluster will be relayed to other YMB clusters for delivery to local subscribers. This mechanism isolates individual PNUTS clusters from dealing with update propagation between regions.

YMB provides partial ordering of published messages. Messages published to a particular YMB cluster will be delivered to all subscribers in the order they were published. However, messages published to different YMB clusters may be delivered in any order. Thus, in order to provide timeline consistency, we have developed a per-record mastership mechanism, and the updates published by a record's master to a single YMB cluster are delivered in the published order to other replicas (see the next section). While stronger ordering guarantees would simplify this protocol, global ordering is too expensive to provide when different brokers are located in geographically separated datacenters.

3.2.2 Consistency via YMB and mastership
Per-record timeline consistency is provided by designating one copy of a record as the master, and directing all updates to the master copy. In this record-level mastering mechanism, mastership is assigned on a record-by-record basis, and different records in the same table can be mastered in different clusters. We chose this mechanism because we have observed significant write locality on a per-record basis in our web workloads. For example, a one-week trace of updates to 9.8 million user ids in Yahoo!'s user database showed that on average, 85 percent of the writes to a given record originated in the same datacenter. This high locality justifies the use of a master protocol from a performance standpoint. However, since different records have update affinity for different datacenters, the granularity of mastership must be per-record, not per tablet or per table; otherwise, many writes would pay expensive cross-region latency to reach the master copy.

All updates are propagated to non-master replicas by publishing them to the message broker, and once the update is published we treat it as committed. A master publishes its updates to a single broker, and thus updates are delivered to replicas in commit order. Because storage units are so numerous, we prefer to use cheaper, commodity servers instead of expensive, highly reliable storage. By leveraging the reliable publishing properties of YMB, we can survive data-loss failures on storage units—any "committed" update is recoverable from a remote replica, and we do not have to recover any data from the failed storage unit itself.

Updates for a record can originate in a non-master region, but must be forwarded to the master replica before being committed. Each record maintains, in a hidden metadata field, the identity of the current master. If a storage unit receives a set() request, it first reads the record to determine if it is the master, and if not, what replica to forward the request to. The mastership of a record can migrate between replicas. If a user moves from Wisconsin to California, the system will notice that the write load for the record has shifted to a different datacenter (using another hidden metadata field in the record that maintains the origin of the last N updates) and will publish a message to YMB indicating the identity of the new master. In the current implementation, N = 3, and since our region names are 2 bytes, this tracking adds only a few bytes of overhead to each record.

In order to enforce primary key constraints, we must send inserts of records with the same primary key to the same storage unit; this storage unit will arbitrate and decide which insert came first and reject the others. Thus, we have to designate one copy of each tablet as the tablet master, and send all inserts into a given tablet to the tablet master. The tablet master can be different than the record-level master assigned to each record in the tablet.
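The record-level mastering and migration logic described above can be sketched as follows; the data structures are invented stand-ins for the hidden metadata fields, and printing stands in for forwarding requests and publishing to YMB:

```python
# Sketch: per-record mastership with migration after N = 3 updates from
# one non-master region, as in the Wisconsin -> California example.

from collections import deque

class Record:
    def __init__(self, value, master):
        self.value = value
        self.master = master            # hidden metadata: current master region
        self.origins = deque(maxlen=3)  # hidden metadata: last N update origins

def set_record(record, value, origin_region):
    if origin_region != record.master:
        # Non-master region: forward the write to the master replica.
        print(f"forwarding write from {origin_region} to master {record.master}")
    record.value = value
    record.origins.append(origin_region)
    # If the last N updates all came from one non-master region, migrate
    # mastership there (the real system publishes this change via YMB).
    if len(record.origins) == 3 and len(set(record.origins)) == 1:
        new_master = record.origins[0]
        if new_master != record.master:
            print(f"migrating mastership {record.master} -> {new_master}")
            record.master = new_master

rec = Record(value={"city": "Madison"}, master="west")
for i in range(3):  # the user moved: writes now originate in the east region
    set_record(rec, {"city": "New York", "v": i}, origin_region="east")
print(rec.master)   # -> "east"
```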
3.2.3 Recovery
Recovering from a failure involves copying lost tablets from another replica. Copying a tablet is a three-step process. First, the tablet controller requests a copy from a particular remote replica (the "source tablet"). Second, a "checkpoint message" is published to YMB, to ensure that any in-flight updates at the time the copy is initiated are applied to the source tablet. Third, the source tablet is copied to the destination region. To support this recovery protocol, tablet boundaries are kept synchronized across replicas, and tablet splits are conducted by having all regions split a tablet at the same point (coordinated by a two-phase commit between regions). Most of the time in this protocol is spent transferring the tablet from one region to another. Note that in practice, because of the bandwidth cost and latency needed to retrieve tablets from remote regions, it may be desirable to create "backup regions" which maintain a back-up replica near serving replicas. Then, recovering a tablet would involve transferring it from a "region" in the same or a nearby datacenter, rather than from a geographically distant datacenter.

3.3 Other Database System Functionality

3.3.1 Query Processing
Operations that read or update a single record can be directly forwarded to the storage unit holding (the tablet that contains) the record. However, operations that touch multiple records require a component that generates multiple requests and monitors their success or failure. The component responsible for multi-record requests is called the scatter-gather engine, and is a component of the router. The scatter-gather engine receives a multi-record request, splits it into multiple individual requests for single records or single tablet scans, and initiates those requests in parallel. As the requests return success or failure, the scatter-gather engine assembles the results and then passes them to the client. In our implementation, the engine can begin streaming some results back to the client as soon as they appear. We chose a server-side approach instead of having the client initiate multiple parallel requests for several reasons. First, at the TCP/IP layer, it is preferable to have one connection per client to the PNUTS service; since there are many clients (and many concurrent processes per client machine), opening one connection to PNUTS for each record being requested in parallel overloads the network stack. Second, placing this functionality on the server side allows us to optimize, for example by grouping multiple requests to the same storage server in the same web service call.

Range queries and table scans are also handled by the scatter-gather engine. Typically there is only a single client process retrieving the results for a query. The scatter-gather engine will scan only one tablet at a time and return results to the client; this is about as fast as a typical client can process results. In the case of a range scan, this mechanism simplifies the process of returning the top-K results (a frequently requested feature), since we only need to scan enough tablets to provide K results. After returning the first set of results, the scatter-gather engine constructs and returns a continuation object, which allows the client to retrieve the next set of results. The continuation object contains a modified range query, which, when executed, restarts the range scan at the point the previous results left off. Continuation objects allow us to have cursor state on the client side rather than the server (see the sketch at the end of this subsection). In a shared service such as PNUTS, it is essential to minimize the amount of server-side state we have to manage on behalf of clients.

Future versions of PNUTS will include query optimization techniques that go beyond this simple incremental scanning. For example, if clients can supply multiple processes to retrieve results, we can stream query results back in parallel, achieving more throughput and leveraging the inherently parallel nature of the system. Also, if a client has specified a predicate for a range or table scan, we might have to scan multiple tablets in order to find even a few matching results. In this case, we will maintain and use statistics about data to determine the expected number of tablets that must be scanned, and scan that many tablets in parallel for each set of results we return.

We have deliberately eschewed complex queries involving joins and aggregation, to minimize the likelihood of unanticipated spikes in system workload. In future, based on experience with the system and if there is strong user demand, we might consider expanding the query language, since there is nothing in our design that fundamentally prevents us from supporting a richer query language.
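Here is a minimal model of incremental scanning with continuation objects; the tablet layout and the Continuation class are invented for illustration:

```python
# Sketch: scan one tablet's worth of results per call and return a
# continuation holding the modified range query, so cursor state lives
# on the client side rather than the server.

from dataclasses import dataclass
from typing import Optional

# Each tablet holds a sorted list of (key, value) pairs.
TABLETS = [
    [("a", 1), ("c", 2)],
    [("f", 3), ("h", 4)],
    [("m", 5), ("q", 6)],
]

@dataclass
class Continuation:
    start_key: str  # restart the range scan just past the last result

def range_scan(start_key, end_key):
    for tablet in TABLETS:
        rows = [(k, v) for k, v in tablet if start_key <= k < end_key]
        if rows:
            last = rows[-1][0]
            remaining = any(last < k < end_key for t in TABLETS for k, _ in t)
            return rows, (Continuation(last + "\0") if remaining else None)
    return [], None

# Client loop: keep following continuations until the scan completes.
cont: Optional[Continuation] = Continuation("a")
while cont is not None:
    batch, cont = range_scan(cont.start_key, "z")
    print(batch)
```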
3.3.2 Notifications
PNUTS provides a service for notifying external systems of updates to data, for example to maintain an external data cache or populate a keyword search engine index. Because we already use a pub/sub message broker to replicate updates between regions, providing a basic notification service involves allowing external clients to subscribe to our message broker and receive updates. This architecture presented two main challenges. First, there is one message broker topic per tablet, and external subscribers need to know which topics to subscribe to. However, we want to isolate clients from knowledge of tablet organization (so that we can split and reorganize tablets without having to notify clients). Therefore, our notification service provides a mechanism to subscribe to all topics for a table; whenever a new topic is created due to a tablet split, the client is automatically subscribed. Second, slow clients can cause undelivered messages to back up on the message broker, consuming resources. Our current policy is to break the subscriptions of slow notification clients and discard their messages when the backlog gets too large. In the future, we plan to examine other policies.
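The table-level subscription indirection might look roughly like this; the broker and callback plumbing are simplified stand-ins, not the YMB API:

```python
# Sketch: clients subscribe to a table, never to per-tablet topics, so a
# tablet split (which creates a new topic) is transparent to them.

class NotificationService:
    def __init__(self):
        self.table_topics = {}  # table -> set of per-tablet topic names
        self.subscribers = {}   # table -> list of callbacks

    def subscribe_table(self, table, callback):
        self.subscribers.setdefault(table, []).append(callback)

    def on_tablet_split(self, table, new_topic):
        # A split created a new topic; existing table subscribers see its
        # updates with no client-side re-subscription.
        self.table_topics.setdefault(table, set()).add(new_topic)

    def publish(self, table, topic, update):
        for cb in self.subscribers.get(table, []):
            cb(topic, update)

svc = NotificationService()
svc.subscribe_table("users", lambda topic, u: print(topic, u))
svc.on_tablet_split("users", "users.tablet-42")
svc.publish("users", "users.tablet-42", {"key": "alice", "op": "update"})
```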
3.4 Hosted Database Service
PNUTS is a hosted, centrally managed database service shared by multiple applications. To add capacity, we add servers. The system adapts by automatically shifting some load to the new servers. The bottleneck for some applications is the number of disk seeks that can be done concurrently; for others it is the amount of aggregate RAM for caching or CPU cycles for processing queries. In all cases, adding more servers adds more of the bottleneck resource. When servers have a hard failure (such as a burnt-out power supply or RAID controller failure), we automatically recover by copying data (from a replica) to other live servers (new or existing), carrying out little or no recovery on the failed server itself. Our goal is to scale to more than ten worldwide replicas, each with 1,000 or more servers. At this scale, automated failover and load balancing is the only way to manage the operations load.

This hosted model introduces several complications that must be dealt with. First, different applications have different workloads and requirements, even within our relatively narrow niche of web serving applications. Therefore, the system must support several different workload profiles, and be automatically or easily tunable to different profiles. For example, our mastership migration protocol adapts to the observed write patterns of different applications. Second, we need performance isolation so that one heavyweight application does not negatively impact the performance of other applications. In our current implementation, performance isolation is provided by assigning different applications to different sets of storage units within a region.

4. PNUTS APPLICATIONS
In this section, we briefly describe the Yahoo! applications that motivated and influenced PNUTS. Some of these applications are currently running on PNUTS, while others are planned to do so in the future.

User Database. Yahoo!'s user database has hundreds of millions of active user IDs, and billions of total IDs. Each record contains user preferences, profile information and usage statistics, as well as application-specific data such as the location of the user's mail home or their Yahoo! Instant Messenger buddy list. Data is read and possibly written on every user page view, and thus the volume of traffic is extremely high. The massive parallelism of PNUTS will help support the huge number of concurrent requests. Also, our asynchrony model provides low latency, which is critical given the need to read and often write the user database on every page view. User data cannot be lost, but relaxed consistency is acceptable: the user must see his own changes, but it is fine if other users do not see the user's changes for some time. Thus, our record timeline model is a good fit. The user database also functions well under a hosted service model, since many different applications need to share this data.

Social Applications. Social and "Web 2.0" applications require a flexible data store that can support operations geared around information sharing and connections between users. The flexible schemas of PNUTS will help support rapidly evolving and expanding social applications. Similarly, the ordered table abstraction is useful to represent connections in a social graph. We can create a relationship table with a primary key that is a composite of (Friend1, Friend2), and then find all of a user's friends by requesting a range scan for all records with key prefixed by a given user ID (see the sketch below). Because such relationship data (and other social information) is useful across applications, our hosted service model is a good fit. Social applications typically have large numbers of small updates, as the database is updated every time a user posts a photo or writes a blog post. Thus, scalability to high write rates, as provided by our parallel system, is essential. However, the dissemination of these updates to other users does not have to be real time, which means that our relaxed consistency model works well for this application. The first set of production applications running on PNUTS are social applications.
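The composite-key pattern is simple to sketch against an ordered key space; the sorted in-memory list below is an illustrative stand-in for an ordered PNUTS table:

```python
# Sketch: with a (Friend1, Friend2) composite key, all of a user's
# connections are contiguous in the ordered table, so "friends of alice"
# is a prefix range scan.

import bisect

# Ordered relationship table keyed by "friend1/friend2".
keys = sorted(["alice/bob", "alice/carol", "bob/alice", "carol/dave"])

def friends_of(user):
    prefix = user + "/"
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_left(keys, prefix + "\xff")  # end of the prefix range
    return [k.split("/", 1)[1] for k in keys[lo:hi]]

print(friends_of("alice"))  # -> ['bob', 'carol']
```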
Content Meta-Data. Mass storage of data such as mail attachments, images and video is a challenge at Yahoo! and many other web companies. While PNUTS is not optimized to provide bulk storage, it can play a crucial role as the metadata store of a distributed bulk storage system. While a distributed bulk filesystem may store the actual file blocks, PNUTS can manage the structured metadata normally stored inside directories and inodes. One planned customer of PNUTS will use the system as such a metadata store, utilizing its scalability and low latency to ensure high performance for metadata operations such as file creation, deletion, renaming and moving between directories. The consistency model of PNUTS is critical to properly managing this metadata without sacrificing scalability.

Listings Management. Comparison shopping sites such as Yahoo! Shopping aggregate listings of items for sale from many sources. These sites provide the ability to search for items and sort by price, ratings, etc. Our ordered table can be used to store listings, sorted by timestamp, to allow shopping sites to show the most recent N items. Creating an index or view of the data (see Section 6) will allow us to retrieve items sorted by secondary attributes such as price. Also, the flexible schemas in PNUTS will make it easy to model the varied attributes of different kinds of products.

Session Data. Web sites often maintain per-session state, and the large number of concurrent sessions active at a large site like Yahoo! means that a scalable storage system is needed to manage the state. Strong consistency is not required for this state, and to enhance performance, applications may decide to turn off all PNUTS consistency for session tables. Because managing session state is a common task across many different web applications, running PNUTS as a service allows applications to quickly use the session store without having to architect and implement their own solution.

5. EXPERIMENTAL RESULTS
We ran a series of experiments to evaluate the performance of our system. Our performance metric was average request latency, since minimizing latency is a primary goal of the system. We compared the performance of the hash and ordered tables. For these experiments, we set up a three-region PNUTS cluster, with two regions on the west coast of the United States and one region on the east coast. We ran our experiments on this test cluster instead of the larger production system so that we could modify parameters and code and measure the performance impact. In ongoing work we are gathering measurements on larger installations.

5.1 Experimental Setup
We used an enhanced version of the production code of PNUTS in our experiments. The current production version supports only hash tables; the enhanced system also supports ordered tables. The primary change needed to support ordered tables was to replace the storage engine (a Yahoo! proprietary disk-based hashtable) with MySQL using InnoDB to get good range scan performance within a storage unit. We also needed to modify the router to support lookup by primary key in addition to hash of primary key, and the tablet controller to create tablets named by key range instead of by hash range. The system is written primarily in C++, with some components, in particular the tablet controller and the administrative scripts, written in PHP and Perl. We set up three PNUTS regions, two in the San Francisco Bay Area in California (regions "West 1" and "West 2") and one in Virginia (region "East"). The configuration of each region is shown in Table 1.

Table 1: Machine configurations for different regions. Storage units and message brokers run on FreeBSD because of their use of BSD-only packages. The difference in machine configuration between the West and East areas results from our using repurposed machines for our experiments.

Component          Servers/region   OS
Storage unit       5                FreeBSD 6.3
Message broker     2                FreeBSD 6.3
Router             1                Linux RHEL 4
Tablet controller  1                Linux RHEL 4
Client             1                Linux RHEL 4

Region            Machine
West 1, West 2    Dual 2.8 GHz Xeon, 4 GB RAM, 6-disk RAID 5 array
East              Quad 2.13 GHz Xeon, 4 GB RAM, 1 SATA disk

Our database contained synthetically generated 1 KB records, replicated across three regions, each with 128 tablets. We generated workload against the database by running a workload process on a separate server in each region. Each process had 100 client threads, for a total of 300 clients across the system. For each client thread, we specified the number of requests, the rate to generate requests, the mix of reads and writes, and the probability that an updated record was mastered in the same region as the client (called the "locality"). The values we used for these parameters are shown in Table 2. In particular, the locality parameter is based on our measurements of the production Yahoo! user database. Each update overwrote half of the record. In most of our experiments the records to be read or written were chosen randomly according to a uniform distribution. We also experimented with a Zipfian distribution to measure the impact of skew on our system.
Table 2: Experimental parameters

Total clients        300
Requests per client  1,000
Request rate         1200 to 3600 requests/sec (4 to 12 requests/sec/client)
Read:write mix       0 to 50 percent writes
Locality             0.8

5.2 Inserting Data
Our experiment client issues multiple insertion requests in parallel. In the hash table case, record insertions are naturally load balanced across storage servers because of the hashing function, allowing us to take advantage of the parallelism inherent in the system. Recall that to enforce primary key constraints, we must designate a "tablet master" where all inserts are forwarded. We used 99 clients (33 per region) to insert 1 million records, one third into each region; in this experiment the tablet master region was West 1. Inserts required 75.6 ms per insert in West 1 (tablet master), 131.5 ms per insert into the non-master West 2, and 315.5 ms per insert into the non-master East. These results show the expected effect that the cost of inserting is significantly higher if the insert is initiated in a non-master region that is far away from the tablet master. We experimented with varying the number of clients and found that above 100 clients, increasing contention for tablets introduced higher latency and in some cases timeouts on insertions. We can add storage units to alleviate this problem. Insertion time for the ordered table showed a similar trend, requiring 33 ms per insert in West 1, 105.8 ms per insert in the non-master West 2, and 324.5 ms per insert in the non-master East. These measurements demonstrate that MySQL is faster than our hashtable implementation under moderate load. However, if the concurrent insert load on the system is too high, MySQL experiences a performance drop due to contention (in particular, concurrent inserts into the primary key index). For this reason, when inserting into the ordered table we only used 60 clients. Again, more storage units would help. (For performance on bulk-loading records into an ordered table in primary key order, see [25]; this operation requires special handling in order to achieve high parallelism.) Our storage unit implementation is not particularly optimized; for example, out of the 75.6 ms latency per insert into the hash table, 40.2 ms was spent in the storage unit itself, obtaining locks, updating metadata structures and writing to disk. It should be possible to optimize this performance further.

5.3 Varying Load
We ran an experiment where we examined the impact of increased load on the average latency for requests. In this experiment, we varied the total request rate across the system between 1200 and 3600 requests/second, while 10 percent of the requests were writes. The results are shown in Figure 3. The figure shows that latency decreases, and then increases, with increasing load. As load increases, contention for resources (in particular, disk seeks) increases, resulting in higher latency. The exception is the left portion of the graph, where latency initially decreases as load increases. The high latency at low request rate resulted from an anomaly in the HTTP client library we used, which closed TCP connections in between requests at low request rates, requiring expensive TCP setup for each call. This library should be optimized or replaced to ensure high performance even for lower request rates. Also, while the largest request rate we targeted was 3,600 requests/second, the clients were only able to achieve 3,090 requests/second for the ordered table and 2,700 requests/second for the hash table. Because clients sent requests serially, the high latency limited the maximum request rate.

[Figure 3: Impact of varying request rate on the average request latency.]

5.4 Varying Read/Write Ratio
Next, we examined the effect of the read/write mix on latency. We varied the percentage of writes between 0 and 50 percent of requests, while keeping the request rate at 1200 requests/second. The results are shown in Figure 4. As the proportion of reads increases, the average latency decreases. Read requests can be satisfied by a local copy of the data, while write requests must be satisfied by the master record. When a write request originates in a non-master region (20 percent of the time), it must be forwarded to the master region, incurring high latency. Thus, when the proportion of writes is high, there is a large number of high-latency requests, resulting in a large average latency. In particular, writes that had to be forwarded from the east to the west coast, or vice versa, required 324.4 ms on average compared to 92.0 ms on average for writes that could complete locally; these measurements are for the hash table, but measurements for the ordered table were comparable. As the number of writes decreases, more requests can be satisfied locally, bringing down the average latency. This result demonstrates the benefit of our timeline consistency model: because most reads can be satisfied by local, but possibly stale, data, read latency is low. Even the higher latency for writes is mitigated by the fact that per-record mastering allows us to assign the master to the region with the most updates for a record. The "bump" observed at 30 percent writes for the hash table is an anomaly; on examining our numbers we determined that there was unusually high latency within the datacenter where our West 1 and West 2 regions reside, most likely due to congestion caused by other systems running in the datacenter.

[Figure 4: Impact of varying read:write ratio on the average request latency.]
10. 120 9000
Hash table Hash table 300 clients
140
8000
Ordered table Ordered table 30 clients
100 120 7000
Avg. Latency (ms)
Avg. Latency (ms)
Avg. Latency (ms)
80 100 6000
5000
80
60
4000
60
40 3000
40
2000
20 20 1000
0 0 0
0 0.25 0.5 0.75 1 2 2.5 3 3.5 4 4.5 5 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.1
Zipf Factor Number of storage units Percent of table scanned
Figure 5: Impact of varying the Figure 6: Impact of varying the Figure 7: Impact of varying the
skew of requests on the average re- number of storage units on the av- size of range scans on the average
quest latency. erage request latency. request latency.
writes is mitigated by the fact that per-record mastering al- The results are shown in Figure 7. As the figure shows, the
lows us to assign the master to the region with the most time to range scan increases linearly with the size of the scan.
updates for a record. The “bump” observed at 30 percent However, the average completion time for the whole range
writes for the hash table is an anomaly; on examining our scan is much higher when there are more clients running.
numbers we determined that there was unusually high la- Range scanning is a fairly heavyweight process, and with
tency within the datacenter where our West 1 and West 2 more range scans occurring concurrently, all of the system
regions reside, most likely due to congestion caused by other resources (disk, CPU and bandwidth) become overloaded.
systems running in the datacenter.
5.5 Varying Skew
We also examined a workload where the popularity of records was non-uniform. It has been observed that web workloads often exhibit Zipfian skew [7]. We ran an experiment where the probability of requesting a record varied according to a Zipfian distribution, with the Zipf parameter varying from 0 (uniform) to 1 (highly skewed). The workload had 10 percent writes and 1,200 requests/second. The results in Figure 5 show that for the hash table, average request latency first increases, and then decreases slightly, with higher skew. More skew causes more load imbalance (and higher latency), but this effect is soon outweighed by better cache locality for popular records (resulting in lower latency). In the ordered table, the improved caching always dominates, resulting in better latency with more skew.
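As a concrete illustration of this workload's key-popularity model (a sketch under our own assumptions; the paper does not publish its test harness), a bounded Zipfian sampler over a fixed record population can be written as:

    import bisect
    import itertools
    import random

    def zipf_sampler(num_records, s):
        # P(rank k) is proportional to 1 / k**s for k = 1..num_records;
        # s = 0 gives the uniform distribution, s = 1 heavy skew.
        weights = [1.0 / (k ** s) for k in range(1, num_records + 1)]
        cdf = list(itertools.accumulate(weights))
        def sample():
            # Inverse-CDF sampling: draw in [0, total) and locate rank.
            return bisect.bisect_left(cdf, random.random() * cdf[-1])
        return sample

    sample = zipf_sampler(num_records=100_000, s=0.5)
    key = "record-%d" % sample()  # next record to read or write

Sweeping s from 0 to 1 reproduces the skew axis of Figure 5.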
5.6 Varying Number of Storage Units
Next, we examined the impact of the number of storage units on latency. One of the main advantages of our system is the ability to scale by adding more servers. We varied the number of storage units per region from 2 to 5. The workload had 10 percent writes and 1,200 requests/second. The results are shown in Figure 6. As the figure shows, latency decreases as the number of storage units increases. More storage units provide more capacity, in particular disk seek capacity, reducing the latency of individual requests. The decrease is roughly linear; we attribute the not-quite-linearity of the hash table result to experimental noise.
5.7 Varying Size of Range Scans
Finally, we examined the impact of range scan size on latency. This experiment was run only with the ordered table; the hash table inherently does not support range scans by primary key unless we do an expensive table scan. We varied the size of range scans between 0.01 percent and 0.1 percent of the total table. We also conducted runs with 30 clients (10 per region) and 300 clients (100 per region). The results are shown in Figure 7. As the figure shows, the time to complete a range scan increases linearly with the size of the scan. However, the average completion time for the whole range scan is much higher when there are more clients running. Range scanning is a fairly heavyweight process, and with more range scans occurring concurrently, all of the system resources (disk, CPU, and bandwidth) become overloaded.
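To illustrate why scan cost grows linearly with range size, here is a toy single-node version of a range scan over an ordered table (our own sketch; in PNUTS the scan is actually executed across the storage units that own the range):

    import bisect

    def range_scan(sorted_keys, records, start_key, end_key):
        # Work is proportional to the number of keys in [start_key,
        # end_key]: every qualifying record must be read from disk and
        # shipped to the client, which is why doubling the scanned
        # fraction of the table roughly doubles completion time.
        lo = bisect.bisect_left(sorted_keys, start_key)
        hi = bisect.bisect_right(sorted_keys, end_key)
        for key in sorted_keys[lo:hi]:
            yield key, records[key]

    records = {"k%04d" % i: "value-%d" % i for i in range(1000)}
    sorted_keys = sorted(records)
    scanned = list(range_scan(sorted_keys, records, "k0100", "k0199"))
    assert len(scanned) == 100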
6. LONG TERM VISION
PNUTS was designed from the ground up to support a variety of advanced functionality, such as indexes over data. However, building a complex system required us to build in phases, and we have first built the basic table storage and consistency layers described in this paper. In this section, we describe some advanced functionality under design and development; the goal is to support this functionality in the production platform over the next year or two.
6.1 Indexes and Materialized Views
In order to support efficient query processing, it is often critical to provide secondary indexes and materialized views. In our system, indexes and views will be treated equivalently; an index is just a special case of a view that provides efficient lookup or range scans on secondary attributes of the base table. Our indexes and views will be stored as regular ordered tables, but will be maintained asynchronously by the system. An index/view maintainer will listen to the stream of updates from the message broker and generate corresponding updates. For example, if a user moves from Wisconsin to California, and we have an index on location, the maintainer will delete the Wisconsin index entry for the user and insert a California index entry for the user. Further research is needed to examine the semantic implications of answering queries using possibly stale indexes and views.
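A minimal sketch of such a maintainer follows, assuming a hypothetical update message that carries the old and new values of the indexed attribute (the paper does not specify this interface; here a plain dict stands in for the index's ordered table):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Update:
        record_key: str
        old_value: Optional[str]
        new_value: Optional[str]

    def maintain_location_index(index, update):
        # The index is keyed by (location, record_key); in PNUTS it
        # would itself be an ordered table, fed asynchronously from
        # the message broker's update stream rather than a local dict.
        if update.old_value is not None:
            index.pop((update.old_value, update.record_key), None)
        if update.new_value is not None:
            index[(update.new_value, update.record_key)] = True

    index = {}
    maintain_location_index(index, Update("user42", None, "Wisconsin"))
    maintain_location_index(index, Update("user42", "Wisconsin", "California"))
    assert ("Wisconsin", "user42") not in index
    assert ("California", "user42") in index

Because the maintainer applies updates only after they commit on the base table, a query that consults the index between the two steps can observe a stale entry, which is exactly the semantic question flagged above.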
6.2 Bundled Updates
Several customers have expressed a need for an extension of the consistency guarantees we provide (Section 2.2). The extension, called bundled updates, provides atomic, non-isolated updates to multiple records. That is, all updates in the bundle are guaranteed to eventually complete, but other transactions may see intermediate states resulting from a subset of the updates. For example, if Alice and Bob accept a bi-directed social network connection, we need to update both Alice's and Bob's records to point to the other user. Both updates need to complete (and the application writer