The document provides an overview of Riak, describing it as a key-value store that is distributed, horizontally scalable, fault-tolerant, and highly available. It was built for the web and was inspired by Amazon's Dynamo database. The tutorial then demonstrates how to install Riak, start a cluster, and perform basic PUT and GET operations to store and retrieve values.
Riak ( http://wiki.basho.com ), a Dynamo-inspired, open-source key/value datastore, was built to scale from a single machine to a 100+ server cluster without driving you or your operations team crazy. This presentation discusses the characteristics of Riak that become important in small, medium, and large clusters.
9. • Key-Value store (plus extras)
• Distributed, horizontally scalable
• Fault-tolerant
• Highly-available
• Built for the Web
• Inspired by Amazon’s Dynamo
28. Fault-Tolerant
• All nodes participate equally - no SPOF
• All data is replicated
• Cluster transparently survives...
• node failure
• network partitions
• Built on Erlang/OTP (designed for FT)
33. Highly-Available
• Any node can serve client requests
• Fallbacks are used when nodes are down
• Always accepts read and write requests
• Per-request quorums
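A minimal sketch of what per-request quorums look like over the HTTP interface (the bucket and key here are just examples); the w and r query parameters take a number up to n_val, or the symbolic quorum value the lab scripts below use:
$ curl -X PUT -H "Content-Type: text/plain" -d "my-value" \
    "http://127.0.0.1:8091/riak/surge/my-key?w=3"
$ curl "http://127.0.0.1:8091/riak/surge/my-key?r=1"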
38. Built for the Web
• HTTP is default (but not only) interface
• Does HTTP/REST well (see Webmachine)
• Plays well with HTTP infrastructure - reverse proxy caches, load balancers, web servers
• Suitable to many web applications
49. Installing Riak
• Download from our local server
• Download a package: http://downloads.basho.com/riak/CURRENT
• Clone & build (need git and Erlang):
$ git clone git://github.com/basho/riak.git
$ make all devrel
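A sketch of bringing up and joining the nodes that make all devrel produces (the dev/devN paths and node names follow the standard devrel layout of this era; treat them as assumptions for your own build):
$ dev/dev1/bin/riak start
$ dev/dev2/bin/riak start
$ dev/dev3/bin/riak start
$ dev/dev2/bin/riak-admin join dev1@127.0.0.1
$ dev/dev3/bin/riak-admin join dev1@127.0.0.1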
63. PUT
$ surge-put.sh
Enter a key: your-key
Enter a value: your-value
Enter a write quorum value:
* About to connect() to 127.0.0.1 port 8091 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)
> PUT /riak/surge/your-key?returnbody=true&w=quorum HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r
zlib/1.2.5
> Host: 127.0.0.1:8091
> Accept: */*
> Content-Type: text/plain
> Content-Length: 10
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Link: </riak/surge>; rel="up"
< Date: Tue, 27 Sep 2011 19:21:08 GMT
< Content-Type: text/plain
< Content-Length: 10
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
your-value
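The request surge-put.sh issues is presumably equivalent to a single curl call like this, with the key, value, and write quorum taken from the prompts:
$ curl -v -X PUT -H "Content-Type: text/plain" -d "your-value" \
    "http://127.0.0.1:8091/riak/surge/your-key?returnbody=true&w=quorum"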
69. GET
$ surge-get.sh
Enter a key: your-key
Enter a read quorum value (default: quorum):
* About to connect() to 127.0.0.1 port 8091 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)
> GET /riak/surge/your-key?r=quorum HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r
zlib/1.2.5
> Host: 127.0.0.1:8091
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Link: </riak/surge>; rel="up"
< Last-Modified: Tue, 27 Sep 2011 19:01:45 GMT
< ETag: "51h3q7RjTNaHWYpO4P0MJj"
< Date: Tue, 27 Sep 2011 19:31:01 GMT
< Content-Type: text/plain
< Content-Length: 10
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
your-value
70-72. GET (annotated)
The same response, with callouts on three headers: X-Riak-Vclock is the object's vector clock, Last-Modified and ETag are standard caching headers, and Content-Type is returned exactly as it was stored.
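Likewise, the read side boils down to a single curl GET against the same resource, with the read quorum passed as a query parameter:
$ curl -v "http://127.0.0.1:8091/riak/surge/your-key?r=quorum"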
88. Try GETting your key
$ . ~/Desktop/surge/surge-get.sh
Enter a key: your-key
Enter a read quorum value (default: quorum):
* About to connect() to 127.0.0.1 port 8091 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)
> GET /riak/surge/your-key?r=quorum HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r
zlib/1.2.5
> Host: 127.0.0.1:8091
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Link: </riak/surge>; rel="up"
< Last-Modified: Tue, 27 Sep 2011 19:50:07 GMT
< ETag: "51h3q7RjTNaHWYpO4P0MJj"
< Date: Tue, 27 Sep 2011 19:51:10 GMT
< Content-Type: text/plain
< Content-Length: 10
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
your-value
91-93. Key-Value Data Model
• All data (objects) are referenced by keys
• Keys are grouped into buckets
• Simple operations: get, put, delete
(Diagram: a "people" bucket containing the keys mark, beth, and sean, with bucket properties n_val, allow_mult, and quora.)
94. Key-Value Data Model
• All data (objects) are referenced by keys
• Keys are grouped into buckets
• Simple operations: get, put, delete
• Object is composed of metadata and value
(Diagram: the object people/beth, with metadata - vclock, content-type: text/html, last-mod, links - and an HTML document as the value.)
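The bucket properties named in the diagram (n_val, allow_mult, and the quorum defaults) can be read and changed over the same HTTP interface; a sketch, assuming the classic /riak/<bucket> properties resource and an example "people" bucket:
$ curl http://127.0.0.1:8091/riak/people
$ curl -X PUT -H "Content-Type: application/json" \
    -d '{"props":{"n_val":3,"allow_mult":true}}' \
    http://127.0.0.1:8091/riak/people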
96-100. Consistent Hashing & The Ring
• 160-bit integer keyspace
• Divided into fixed number of evenly-sized partitions
• Partitions are claimed by nodes in the cluster
• Replicas go to the N partitions following the key
(Diagram: a ring labeled 0, 2^160/4, and 2^160/2, divided into 32 partitions claimed by node 0 through node 3; with N=3, replicas land on the three partitions clockwise from hash("conferences/surge").)
111. Vector Clocks
• Every node has an ID (changed in 1.0)
• Send last-seen vector clock in every “put” or “delete” request
• Riak tracks history of updates
• Auto-resolves stale versions
• Lets you decide conflicts
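Over HTTP, the “last-seen vector clock” is the X-Riak-Vclock header from your previous read, echoed back on the next write; a sketch using the key and vclock from the GET transcript above:
$ curl -s -i "http://127.0.0.1:8091/riak/surge/your-key" | grep X-Riak-Vclock
X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=
$ curl -X PUT -H "Content-Type: text/plain" \
    -H "X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=" \
    -d "updated-value" "http://127.0.0.1:8091/riak/surge/your-key"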
115-118. Hinted Handoff
• Node fails
• Requests go to fallback
• Node comes back
• “Handoff” - data returns to recovered node
• Normal operations resume
(Diagram: the ring for hash("conferences/surge"), with the failed node's partitions marked and requests redirected to fallback partitions until handoff completes.)
175. Links
• Lightweight relationships, like <a>
• Includes a “tag”
• Built-in traversal op (“walking”)
GET /riak/b/k/[bucket],[tag],[keep]
• Limited in number (part of meta)
181. Full-text Search
• Designed for searching prose
• Lucene/Solr-like query interface
• Automatically index K/V values
• Input to MapReduce
• Customizable index schemas
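A sketch of querying the Lucene/Solr-like interface, assuming Riak Search is enabled and indexing the surge bucket via its KV hook; the /solr/<index>/select endpoint is Riak Search's Solr-compatible resource, and the query term is just an example:
$ curl "http://127.0.0.1:8091/solr/surge/select?q=your-value"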
192. MapReduce
• For more involved queries
• Specify input keys
• Process data in “map” and “reduce” functions
• JavaScript or Erlang
• Not tuned for batch processing
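A minimal sketch of a MapReduce request over HTTP, naming explicit bucket/key inputs and a built-in JavaScript map function (Riak.mapValues returns the stored values unchanged); the key is the one from the earlier PUT lab:
$ curl -X POST -H "Content-Type: application/json" http://127.0.0.1:8091/mapred \
    -d '{"inputs":[["surge","your-key"]],
         "query":[{"map":{"language":"javascript","name":"Riak.mapValues","keep":true}}]}'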
206. Lab: Store Links
$ ./store-linked.sh
Enter the origin key: sean
Enter the target key: ian
Enter the link's tag: coworker
*** Storing the origin ***
* About to connect() to localhost port 8091 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8091 (#0)
> PUT /riak/surge/sean HTTP/1.1
> Link: </riak/surge/ian>;riaktag="coworker"
...
< HTTP/1.1 204 No Content
...
*** Storing the target ***
* About to connect() to localhost port 8091 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8091 (#0)
> PUT /riak/surge/ian HTTP/1.1
...
< HTTP/1.1 204 No Content
216. Lab: Walk Links (1/3)
$ ./walk-linked.sh
Enter the origin key: sean
Enter the link spec (default: _,_,_):
...
> GET /riak/surge/sean/_,_,_ HTTP/1.1
...
< HTTP/1.1 200 OK
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Expires: Tue, 27 Sep 2011 20:51:37 GMT
< Date: Tue, 27 Sep 2011 20:41:37 GMT
< Content-Type: multipart/mixed; boundary=J6O2SZfLfSEdAv5dv3mttR7AVOF
< Content-Length: 438
<
--J6O2SZfLfSEdAv5dv3mttR7AVOF
Content-Type: multipart/mixed; boundary=C1NUMpwbmSvb7CbqEAy1KdYRiUH
--C1NUMpwbmSvb7CbqEAy1KdYRiUH
X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/k1GU/KYEpkymNl0Nzw7ThfFgA=
Location: /riak/surge/ian
Content-Type: text/plain
Link: </riak/surge>; rel="up"
Etag: 51h3q7RjTNaHWYpO4P0MJj
Last-Modified: Tue, 27 Sep 2011 20:38:01 GMT
target
--C1NUMpwbmSvb7CbqEAy1KdYRiUH--
--J6O2SZfLfSEdAv5dv3mttR7AVOF--
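The link spec from slide 175 can be chained for multi-step walks; for example, a sketch that follows only the “coworker” links from sean, keeps those intermediate results, and then follows all links from each of them:
$ curl "http://127.0.0.1:8091/riak/surge/sean/surge,coworker,1/_,_,_"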
245. Lab: 2I MapReduce
$ ./mapred-indexed.sh
Query on range or equality? (r/e) r
Range start: 35
Range end: 40
Executing MapReduce:
{ "inputs":{ "bucket":"surge", "index":"surgeobj_int" ,"start":35,"end":40},
"query":[ {"map":
{"name":"Riak.mapValuesJson","language":"javascript","keep":true}} ]}
> POST /mapred HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/
0.9.8r zlib/1.2.3
> Host: localhost:8091
> Accept: */*
> Content-Type: application/json
> Content-Length: 164
>
< HTTP/1.1 200 OK
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Date: Wed, 28 Sep 2011 01:41:53 GMT
< Content-Type: application/json
< Content-Length: 97
<
* Connection #0 to host localhost left intact
* Closing connection #0
[{"surgeobj":35},{"surgeobj":40},{"surgeobj":36},{"surgeobj":37},{"surgeobj":
39},{"surgeobj":38}]
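For reference, a sketch of how the surgeobj_int index is presumably populated and queried directly, using Riak 1.0's secondary-index HTTP API (the x-riak-index-* header on write, the /buckets/.../index/... resource on read); the key and value are examples matching the lab output:
$ curl -X PUT -H "Content-Type: application/json" \
    -H "x-riak-index-surgeobj_int: 37" \
    -d '{"surgeobj":37}' http://127.0.0.1:8091/riak/surge/surgeobj-37
$ curl "http://127.0.0.1:8091/buckets/surge/index/surgeobj_int/35/40"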
293. /etc/riak/app.config
• Storage Engine
%% Storage_backend specifies the Erlang module defining the storage
%% mechanism that will be used on this node.
{storage_backend, riak_kv_bitcask_backend},
• About Storage Engines
• Riak has pluggable storage engines. (Bitcask, Innostore, LevelDB)
• Different engines have different tradeoffs.
• Engine selection depends on shape of data.
• More on this later.
301. /etc/riak/app.config
• HTTP & Protocol Buffers
%% http is a list of IP addresses and TCP ports that the Riak
%% HTTP interface will bind.
{http, [ {"127.0.0.1", 8091 } ]},
%% pb_ip is the IP address that the Riak Protocol Buffers interface
%% will bind to. If this is undefined, the interface will not run.
{pb_ip, "127.0.0.1" },
%% pb_port is the TCP port that the Riak Protocol Buffers interface
%% will bind to
{pb_port, 8081 },
311. Securing Riak
• Overview
• Erlang has a “cookie” for “security.”
• Do not rely on it. Plain text.
• Riak assumes internal environment is trusted.
• Securing the HTTP Interface
• HTTP Auth via Proxy
• Does support SSL
• Securing Protocol Buffers
• No security.
334. Bitcask
• A fast, append-only key-value store
• In-memory key lookup table (key_dir)
• Closed files are immutable
• Merging cleans up old data
• Apache 2 license
340. Innostore
• Based on Embedded InnoDB
• Write-ahead log plus B-tree storage
• Similar characteristics to the MySQL InnoDB plugin
• Manages which pages of the B-trees are in memory
• GPL license
354. Cluster Capacity Planning
• Overview
• There are MANY variables that affect performance.
• Safest route is to simulate load and see what happens.
• basho_bench can help
• Documentation
• http://wiki.basho.com/Cluster-Capacity-Planning.html
• http://wiki.basho.com/System-Requirements.html
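A sketch of the basho_bench workflow, built from source; the example config filename is an assumption, so pick whichever driver config under examples/ matches your interface:
$ git clone git://github.com/basho/basho_bench.git
$ cd basho_bench && make all
$ ./basho_bench examples/httpraw.config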
364. Things to Consider (1/3)
• Virtualization
• Riak’s workload is I/O bound
• Run on bare metal for best performance.
• Virtualize for other factors (agility or cost)
• Object size
• Too small means unnecessary disk and network overhead
• Too large means chunky reads and writes.
• Read/Write Ratio
• Riak is not meant for dumping many tiny datapoints.
375. Things to Consider (2/3)
• “Randomness” of Reads / Writes
• Recently accessed objects are cached in memory.
• A small working set of objects will be fast.
• Amount of RAM
• Expands the size of cached working set, lowers latency.
• Disk Speed
• Faster disk means lower read/write latency.
• Number of Processors
• More processors == more Map/Reduce throughput.
• Faster processors == lower Map/Reduce latency.
386. Things to Consider (3/3)
• Features
• Riak KV vs. Riak 2i vs. Riak Search
• Backends
• Bitcask vs. Innostore vs. eLevelDB
• Space/Time tradeoff
• Partitions
• Depends on eventual number of boxes.
• Protocol
• HTTP vs. Protocol Buffers
• All clients support HTTP, some support PB.
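To make the protocol choice concrete, here is a hedged sketch using the official Erlang client over Protocol Buffers (host, port, bucket, and key are placeholders; 8087 is the conventional PB listener port). The HTTP interface exposes the same get through a URL such as /riak/<bucket>/<key>.

    %% Sketch: fetch one object over the Protocol Buffers interface
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    {ok, Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>),
    Value = riakc_obj:get_value(Obj).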
396. Performance Tuning
• General Tips
• Start with a “DB”-Like Machine Profile
• Multi-core 64-bit CPUs
• The More RAM the Better
• Fast Disk
• Use noatime mounts
• Limit List Keys and Full Bucket MapReduce
• Benchmark in Advance
• Graph Everything
416. Performance Tuning
• Innostore Backend
• Similar to MySQL + InnoDB
• buffer pool size
• o_direct
• log_files_in_groups
• Separate Spindles for Log and Data
• Documentation
• http://wiki.basho.com/Setting-Up-Innostore.html
• http://wiki.basho.com/Innostore-Configuration-and-Tuning.html
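As a sketch of what those settings look like in the innostore section of app.config (values are placeholders, not recommendations; option names follow the linked Basho wiki pages as best I recall, so verify before use):

    %% app.config excerpt -- placeholder values, tune per the docs above
    {innostore, [
        {buffer_pool_size,   2147483648},      %% 2 GB of cache for B-tree pages
        {flush_method,       "O_DIRECT"},      %% bypass the OS page cache
        {log_files_in_group, 6},               %% number of write-ahead log files
        {data_home_dir,      "/data/innodb"},       %% data on one spindle...
        {log_group_home_dir, "/wal/innodb_log"}     %% ...logs on another
    ]}.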
427. basho_bench
• What Does It Do?
• Generates load against a Riak cluster. Logs and graphs results.
• Intended for cluster capacity planning.
• Major Parameters
• Test Harness / Key Generator / Value Generator
• Read / write / update mix.
• Number of workers.
• Worker rate. (X per second vs. as fast as possible)
• Documentation
• http://wiki.basho.com/Benchmarking-with-Basho-Bench.html
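A basho_bench run is driven by an Erlang-term config file. The sketch below is modeled on the shipped examples as best I recall; the driver, generators, and numbers are illustrative, so check the documentation link above for the authoritative options.

    %% riak_test.config -- illustrative basho_bench workload
    {mode, max}.                             %% as fast as possible (or {rate, N})
    {duration, 10}.                          %% minutes
    {concurrent, 10}.                        %% number of workers
    {driver, basho_bench_driver_riakc_pb}.   %% test harness: Riak PB client
    {key_generator, {int_to_bin, {uniform_int, 100000}}}.
    {value_generator, {fixed_bin, 1000}}.    %% 1 KB objects
    {riakc_pb_ips, [{127,0,0,1}]}.
    {operations, [{get, 4}, {update, 1}]}.   %% 4:1 read/write mix

It is typically invoked as ./basho_bench riak_test.config, and the results directory can then be turned into latency/throughput graphs as described on the wiki page above.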
436. Lager
• Notes
• Basho’s replacement for Erlang logging tools
• It plays nice with your tools
• /var/log/riak/console.log
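For context, console.log is produced by a lager file handler configured in app.config. A minimal sketch in the lager 1.x style (log levels and paths are illustrative):

    %% app.config excerpt -- illustrative lager handlers
    {lager, [
        {handlers, [
            {lager_console_backend, info},
            {lager_file_backend, [
                {"/var/log/riak/error.log",   error},
                {"/var/log/riak/console.log", info}   %% the file mentioned above
            ]}
        ]}
    ]}.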
Editor's Notes
Think of it like a big hash-table.
X = throughput, compute power for MapReduce, storage, lower latency
Consistent hashing means: 1) a large, fixed-size key-space; 2) no rehashing of keys - keys always hash the same way.
1) Client requests a key. 2) Get handler starts up to service the request. 3) Hashes the key to its owner partitions (N=3). 4) Sends a similar “get” request to those partitions. 5) Waits for R replies that concur (R=2). 6) Resolves the object, replies to the client. 7) The third reply may come back at any time, but the FSM replies as soon as quorum is satisfied/violated.