This document provides information about MongoDB and its suitability for e-commerce applications. It discusses how MongoDB allows for a flexible schema that can accommodate different product types, like books, music albums, and jeans, without needing to define all attributes in advance. This flexibility addresses the "data dilemma" that traditional relational databases have in modeling diverse e-commerce data. Examples of companies successfully using MongoDB for e-commerce are also provided.
MongoDB and Ecommerce: A Perfect Combination - Steven Francia
Presentation given at the MongoDB NYC Meetup by Steve Francia, VP of Engineering at OpenSky. OpenSky uses MongoDB to develop the next ecommerce platform. OpenSky also uses Symfony 2, Doctrine 2, PHP 5.3, PHPUnit 3.5, jQuery, node.js, Git (with gitflow) and a touch of Java and Python. The OpenSky team contributes back to many of these technologies and employs core members of the Symfony 2 and Doctrine 2 teams.
※ Download the slides for a clearer copy of the material.
Describes an architecture in which hundreds of game servers, handling 2 million concurrent users, can be served with only a minimal number of MySQL servers.
Shares high-performance, high-efficiency MySQL scaling techniques; it's no secret that they have already been proven in large-scale game services.
Table of contents
1. Basic architecture
2. A better architecture using ProxySQL
3. Final architecture
Audience
- Anyone interested in experiences of using MySQL for large-scale game services
- Server developers or DBAs interested in ProxySQL
- Anyone who wants to configure the DB side flexibly while developing game servers
■ Related video: https://youtu.be/8Eb_n7JA1yA
Noah Davis & Luke Melia of Weplay share a series of examples of Redis in the real world. In doing so, they cover a survey of Redis' features, approach, history and philosophy. Most examples are drawn from the Weplay team's experience using Redis to power features on Weplay.com, a social site for youth sports.
by Mahesh Pakal, AWS
PostgreSQL is a powerful, enterprise class open source object-relational database system with an emphasis on extensibility and standards-compliance. PostgreSQL boasts many sophisticated features and runs stored procedures in more than a dozen programming languages. We’ll explore the advantages and limitations of PostgreSQL, examples of where it is best suited for use, and examples of who is using PostgreSQL to power their applications.
Learn the fundamentals of Amazon DynamoDB and see the DynamoDB console first-hand as we walk through a demo of building a serverless web application using this high-performance key-value and JSON document store.
This webinar discusses Amazon DynamoDB, a NoSQL, highly scalable, SSD-based, zero administration database service in the AWS Cloud. We explain how DynamoDB works and also walk through some best practices and tips to get the most out of the service.
This presentation was presented at Percona Live UK.
Although a DBMS hides the internal mechanics of indexing, you need to know how indexes work in order to create efficient ones. This talk will help you understand the data structure used to store indexes and how it applies to InnoDB. By the end of the talk you will know how to use cost analysis to pick correct index definitions and how to create indexes that work efficiently with InnoDB.
This ppt was used by Devrim at pgDay Asia 2017. He talked about some important facts about WAL (transaction logs, or xlogs) in PostgreSQL. Some of these can really come in handy on a bad day.
Top 10 Mistakes When Migrating From Oracle to PostgreSQL - Jim Mlodgenski
As more and more people move to PostgreSQL from Oracle, a pattern of mistakes is emerging. They can be caused by the tools being used or by not understanding how PostgreSQL differs from Oracle. In this talk we will discuss the top mistakes people generally make when moving to PostgreSQL from Oracle and what the correct course of action is.
In this webinar you'll learn about the best practices for Google BigQuery—and how Matillion ETL makes loading your data faster and easier. Find out from our experts how to leverage one of the largest, fastest, and most capable cloud data warehouses to improve your business and save money.
In this webinar:
- Discover how to work fast and efficiently with Google BigQuery
- Find out the best ways to monitor and control costs
- Learn to leverage Matillion ETL and optimize Google BigQuery
- Get tips and tricks for better performance
Intro to MongoDB
Get a jumpstart on MongoDB, use cases, and next steps for building your first app with Buzz Moschetti, MongoDB Enterprise Architect.
@BuzzMoschetti
This presentation will demonstrate how you can use the aggregation pipeline with MongoDB much as you would use GROUP BY in SQL, and covers the new stage operators coming in 3.4. MongoDB’s Aggregation Framework has many operators that let you get more value out of your data, discover usage patterns within your data, or power your application. Considerations regarding version, indexing, operators, and saving the output will be reviewed.
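As a rough illustration (the collection and field names are invented, and no server is involved), here is what such a pipeline looks like next to its SQL GROUP BY equivalent, with the $match and $group stages simulated on plain dicts:

```python
from collections import defaultdict

# Hypothetical collection "orders" with "status"/"year" fields. SQL equivalent:
#   SELECT status, COUNT(*) FROM orders WHERE year = 2016 GROUP BY status
pipeline = [
    {"$match": {"year": 2016}},
    {"$group": {"_id": "$status", "count": {"$sum": 1}}},
]

# Against a live server this would be passed as db.orders.aggregate(pipeline);
# here the two stages are simulated over plain dicts.
orders = [
    {"status": "shipped", "year": 2016},
    {"status": "shipped", "year": 2016},
    {"status": "pending", "year": 2016},
    {"status": "shipped", "year": 2015},  # dropped by $match
]

counts = defaultdict(int)
for doc in orders:
    if doc["year"] == pipeline[0]["$match"]["year"]:  # $match stage
        counts[doc["status"]] += 1                    # $group with {"$sum": 1}

print(dict(counts))  # {'shipped': 2, 'pending': 1}
```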
PostgreSQL (or Postgres) began its life in 1986 as POSTGRES, a research project of the University of California at Berkeley.
PostgreSQL isn't just relational, it's object-relational. This gives it some advantages over other open source SQL databases like MySQL, MariaDB and Firebird.
Augmenting RDBMS with MongoDB for Ecommerce - Steven Francia
Steve Francia, VP of Engineering at OpenSky, a NYC-based social commerce company, on how OpenSky augments an RDBMS with MongoDB to develop the next ecommerce platform.
OpenSky combines traditional SQL solutions with NoSQL to overcome the limitations of each, increase development speed, and scale quickly.
CouchDB presentation with some technical details, made for a technical audience; shows use cases, comparison to other NoSQL databases, and why it's useful for publishers.
Presented at DDD Melbourne on Sat Aug 8th 2015
Himanshu Desai, Ahmed El-Harouny & Daniel Janczak
DocumentDB, Mongo or RavenDB? If you are starting out on a new project and considering a NoSQL database as an option, which one should you choose? What if the option you choose today may not work out to be the best one for your needs?
Come and join us for this session, we will take you on a journey where we will explain each of these database on their merits and compare them and also share War stories.
http://dddmelbourne.com
Slides from workshop held on 12/14 in Asbury Park, NJ
http://www.meetup.com/Jersey-Shore-Tech/events/148118762/?gj=ro2_e&a=ro2_gnl&rv=ro2_e&_af_eid=148118762&_af=event
I've seen projects with shiny, new code render into unmaintainable big balls of mud within 2-3 years. Multiple times. But regardless of whether it's the code base as a whole that's rotten, or whether it's just the UI and User Experience that needs a major overhaul: the question of rewrite vs refactoring will come up sooner or later. Based on years of experience, and a plethora of bad decisions culminating in epic failures, I'll share my experience on how to have a code base that stays maintainable - even after years. After this talk, you'll have more insight into whether you should refactor or rewrite, and how to do it right from now on.
An unprecedented amount of data is being created and is accessible. This presentation will instruct on using the new NoSQL technologies to make sense of all this data.
State of the Gopher Nation - Golang - August 2017 - Steven Francia
This talk is an overview of the Go project. It covers “what we’ve done”, “why we did it” and “where we are going” as a project.
It highlights our accomplishments, challenges and how the Go Project is working on our challenges.
The Future of the Operating System - Keynote LinuxCon 2015Steven Francia
Linux has become the foundation for infrastructure everywhere as it defined application portability from the desktop to the phone and from to the data center to the cloud. As applications become increasingly distributed in nature, the Docker platform serves as the cornerstone of Linux’s evolution solidifying the dominance of Linux today and into tomorrow.
Given as a Keynote at LinuxCon 2015 in Tokyo
Given at GopherFest 2015. This is an updated version of the talk I gave in NYC Nov 14 at GothamGo.
“We need to think about failure differently. Most people think mistakes are a necessary evil. Mistakes aren't a necessary evil, they aren't evil at all. They are an inevitable consequence of doing something new and as such should be seen as valuable. “ - Ed Catmull
As Go is a "new" programming language we are all experimenting and learning how to write better Go. While most presentations focus on the destination, this presentation focuses on the journey of learning Go and the mistakes I personally made while developing Hugo, Cobra, Viper, Afero & Docker.
What every successful open source project needs - Steven Francia
In the last few years open source has transformed the software industry. From Android to Wikipedia, open source is everywhere, but how does one succeed in it? While open source projects come in all shapes and sizes and all forms of governance, no matter what kind of project you’re a part of, there are a set of fundamentals that lead to success. I’d like to share some of the lessons I’ve learned from running two of the largest commercial open source projects, Docker and MongoDB, as well as some very successful community projects.
This presentation was delivered at sinfo.org in Feb 2015.
7 Common mistakes in Go and when to avoid them - Steven Francia
I've spent the past two years developing some of the most popular libraries and applications written in Go. I've also made a lot of mistakes along the way. Recognizing that "The only real mistake is the one from which we learn nothing. -John Powell", I would like to share with you the mistakes that I have made over my journey with Go and how you can avoid them.
Go for Object Oriented Programmers or Object Oriented Programming without Obj... - Steven Francia
Object Oriented (OO) programming has dominated software engineering for the last two decades. The paradigm built on powerful concepts such as Encapsulation, Inheritance, and Polymorphism has been internalized by the majority of software engineers. Although Go is not OO in the strict sense, we can continue to leverage the skills we’ve honed as OO engineers to come up with simple and solid designs.
Gopher Steve Francia, author of [Hugo](http://hugo.spf13.com), [Cobra](http://github.com/spf13/cobra), and many other popular Go packages, makes these difficult concepts accessible for everyone.
If you’re an OO programmer, especially one with a background in dynamic languages, and are curious about Go, then this talk is for you. We will cover everything you need to know to leverage your existing skills and quickly start coding in Go, including:
How to use our Object Oriented programming fundamentals in Go
Static and pseudo-dynamic typing in Go
Building fluent interfaces in Go
Using Go interfaces and duck typing to simplify architecture
Common mistakes made by those coming to Go from other OO languages (Ruby, Python, JavaScript, etc.)
Principles of good design in Go.
This presentation will give developers an introduction and practical experience of using MongoDB with the Go language. MongoDB Chief Developer Advocate & Gopher Steve Francia presents plainly what you need to know about using MongoDB with Go.
As an emerging language Go is able to start fresh without years of relational database dependencies. Application and library developers are able to build applications using the excellent Mgo MongoDB driver and the reliable go sql package for relational database. Find out why some people claim Go and MongoDB are a “pair made in heaven” and “the best database driver they’ve ever used” in this talk by Gustavo Niemeyer, the author of the mgo driver, and Steve Francia, the drivers team lead at MongoDB Inc.
We will cover:
Connecting to MongoDB in various configurations
Performing basic operations in Mgo
Marshaling data to and from MongoDB
Asynchronous & Concurrent operations
Pre-fetching batches for seamless performance
Using GridFS
How MongoDB uses Mgo internally
This presentation was given as a Workshop at OSCON 2014.
New to Go? This tutorial will give developers an introduction and practical experience in building applications with the Go language. Gopher Steve Francia, author of [Hugo](http://hugo.spf13.com), [Cobra](http://github.com/spf13/cobra), and many other popular Go packages, breaks it down step by step as you build your own full-featured Go application. Starting with an introduction to the Go language, he then reviews the fantastic Go tools available. With our environment ready, we will learn by doing: the remainder of the time will be dedicated to building a working Go web and CLI application. Through our application development experience we will introduce key features, libraries, and best practices of using Go.
This tutorial is designed with developers in mind. Prior experience with any of the following languages: Ruby, Perl, Java, C#, JavaScript, PHP, Node.js, or Python is preferred. We will be using the MongoDB database as a backend for our application.
We will be using/learning a variety of libraries including:
* bytes and strings
* templates
* net/http
* io, fmt, errors
* mgo
* Gin
* Go.Rice
* Cobra
* Viper
Discover and identify the ideal storage solution for our needs by examining the history of data storage and modern database systems, including key-value, relational, graph, and document databases.
This presentation was given at RootsTech 2013 in March
MongoDB, Hadoop and humongous data - MongoSV 2012 - Steven Francia
Learn how to integrate MongoDB with Hadoop for large-scale distributed data processing. Using tools like MapReduce, Pig and Streaming you will learn how to do analytics and ETL on large datasets with the ability to load and save data against MongoDB. With Hadoop MapReduce, Java and Scala programmers will find a native solution for using MapReduce to process their data with MongoDB. Programmers of all kinds will find a new way to work with ETL using Pig to extract and analyze large datasets and persist the results to MongoDB. Python and Ruby Programmers can rejoice as well in a new way to write native Mongo MapReduce using the Hadoop Streaming interfaces.
While Hadoop is the most well-known technology in big data, it’s not always the most approachable or appropriate solution for data storage and processing. In this session you’ll learn about enterprise NoSQL architectures, with examples drawn from real-world deployments, as well as how to apply big data regardless of the size of your own enterprise.
This tutorial will introduce the features of MongoDB by building a simple location-based application using MongoDB. The tutorial will cover the basics of MongoDB’s document model, query language, map-reduce framework and deployment architecture.
The tutorial will be divided into 5 sections:
Data modeling with MongoDB: documents, collections and databases
Querying your data: simple queries, geospatial queries, and text-searching
Writes and updates: using MongoDB’s atomic update modifiers
Trending and analytics: Using mapreduce and MongoDB’s aggregation framework
Deploying the sample application
Besides the knowledge to start building their own applications with MongoDB, attendees will finish the session with a working application they use to check into locations around Portland from any HTML5 enabled phone!
TUTORIAL PREREQUISITES
Each attendee should have a running version of MongoDB, preferably the latest unstable release 2.1.x, though any install after 2.0 should be fine. You can download MongoDB at http://www.mongodb.org/downloads.
Instructions for installing MongoDB are at http://docs.mongodb.org/manual/installation/.
Additionally, we will be building an app in Ruby; Ruby 1.9.3+ is required for this. The current latest version of Ruby is 1.9.3-p194.
For Windows, download the installer from http://rubyinstaller.org/
For OS X, download http://unfiniti.com/software/mac/jewelrybox/
For Linux, most users should know how to install Ruby for their own distributions.
We will be using the following GEMs and they MUST BE installed ahead of time so you can be ahead of the game and safe in the event that the Internet isn’t accommodating.
bson (1.6.4)
bson_ext (1.6.4)
haml (3.1.4)
mongo (1.6.4)
rack (1.4.1)
rack-protection (1.2.0)
shotgun (0.9)
sinatra (1.3.2)
tilt (1.3.3)
Prior Ruby experience isn’t required for this. We will NOT be using Rails for this app.
Replication, Durability, and Disaster Recovery - Steven Francia
This session introduces the basic components of high availability before going into a deep dive on MongoDB replication. We'll explore some of the advanced capabilities with MongoDB replication and best practices to ensure data durability and redundancy. We'll also look at various deployment scenarios and disaster recovery configurations.
Strategies for multi-data center deployment, diving into the details of deploying MongoDB across multiple data centers.
Covers the advantages of a multi data center deployment for read/write locality, the various deployment strategies, and disaster preparedness and recovery.
In addition, we’ll look at the MongoDB roadmap and planned enhancements around data center awareness.
This presentation was given at MongoNYC 2012. The animations didn’t survive the transformation to the web, so not all the meaning carries over perfectly.
3. • 15+ years building the internet
• Father, husband, skateboarder
• Chief Solutions Architect @ 10gen
• Author of upcoming O’Reilly publication “MongoDB and PHP”
21. MongoDB philosophy
• Keep functionality when we can (key/value stores are great, but we need more)
• Non-relational (no joins) makes scaling horizontally practical
• Document data models are good
• Database technology should run anywhere: VMs, cloud, metal, etc.
22. MongoDB is:
• Document Oriented
• High Performance
• Fully Consistent
• Horizontally Scalable
Example document:
{ author: “steve”,
  date: new Date(),
  text: “About MongoDB...”,
  tags: [“tech”, “database”] }
23. Under the hood
• Written in C++
• Runs on nearly anything
• Data serialized to BSON
• Extensive use of memory-mapped files
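The memory-mapped-file point can be illustrated with Python's stdlib mmap (the temporary file here is only a stand-in for a database data file):

```python
import mmap
import os
import tempfile

# Toy sketch of memory-mapped file access. MongoDB's storage engine of that
# era mapped its data files this way, letting the OS page cache serve reads
# rather than a hand-rolled buffer pool.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello mongodb")

with mmap.mmap(fd, 0) as mm:
    # The file's bytes are addressable like an ordinary in-memory buffer.
    prefix = bytes(mm[:5])
    suffix = bytes(mm[6:13])

os.close(fd)
os.remove(path)
print(prefix, suffix)  # b'hello' b'mongodb'
```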
27. CMS / Blog
Needs:
• Business needed a modern data store for rapid development and scale
Solution:
• Use PHP & MongoDB
Results:
• Real-time statistics
• All data, images, etc. stored together: easy access, easy deployment, easy high availability
• No need for complex migrations
• Enabled very rapid development and growth
28. Photo Meta-Data
Problem:
• Business needed more flexibility than Oracle could deliver
Solution:
• Use MongoDB instead of Oracle
Results:
• Developed application in one sprint cycle
• 500% cost reduction compared to Oracle
• 900% performance improvement compared to Oracle
29. Customer Analytics
Problem:
• Deal with massive data volume across all customer sites
Solution:
• Use MongoDB to replace Google Analytics / Omniture options
Results:
• Less than one week to build prototype and prove business case
• Rapid deployment of new features
30. Online Dictionary
Problem:
• MySQL could not scale to handle their 5B+ documents
Solution:
• Switched from MySQL to MongoDB
Results:
• Massive simplification of code base
• Eliminated need for external caching system
• 20x performance improvement over MySQL
31. E-commerce
Problem:
• Multi-vertical e-commerce impossible to model (efficiently) in an RDBMS
Solution:
• Switched from MySQL to MongoDB
Results:
• Massive simplification of code base
• Rapidly built, halving time to market (and cost)
• Eliminated need for external caching system
• 50x+ improvement over MySQL
40. Let’s Use an Example
How about we start with books
41. Book Product Schema
Product {
  // general product attributes
  id:
  sku:
  product dimensions:
  shipping weight:
  MSRP:
  price:
  description:
  ...
  // book-specific attributes
  author: Orson Scott Card
  title: Enders Game
  binding: Hardcover
  publication date: July 15, 1994
  publisher name: Tor Science Fiction
  number of pages: 352
  ISBN: 0812550706
  language: English
  ...
}
43. Album Product Schema
Product {
  // general product attributes stay the same
  id:
  sku:
  product dimensions:
  shipping weight:
  MSRP:
  price:
  description:
  ...
  // album-specific attributes are different
  artist: MxPx
  title: Panic
  release date: June 7, 2005
  label: Side One Dummy
  track listing: [ The Darkest ...
  language: English
  format: CD
  ...
}
44. Okay, it’s getting hairy but is still manageable, right?
Now the business wants to sell jeans
45. Jeans Product Schema
Product {
  // General Product attributes stay the same
  id:
  sku:
  product dimensions:
  shipping weight:
  MSRP:
  price:
  description:
  ...
  // Jeans-specific attributes are totally different,
  // and not consistent across brands & makes
  brand: Lucky
  gender: Mens
  make: Vintage
  style: Straight Cut
  length: 34
  width: 34
  color: Hipster
  material: Cotton Blend
  ...
}
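One way to read the three schemas above: all of these shapes can coexist in one collection. A hypothetical plain-JavaScript sketch, standing in for documents in a single MongoDB products collection (field names and values are illustrative):

```javascript
// Three differently-shaped products side by side, as they might live
// in one document collection -- no table per type, no NULL columns.
const products = [
  { sku: "bk-001", price: 7.99, title: "Enders Game", author: "Orson Scott Card", pages: 352 },
  { sku: "al-001", price: 9.99, title: "Panic", artist: "MxPx", format: "CD" },
  { sku: "jn-001", price: 49.0, brand: "Lucky", style: "Straight Cut", length: 34, width: 34 },
];

// Shared attributes (sku, price) can be queried uniformly...
const cheap = products.filter((p) => p.price < 10);

// ...while type-specific attributes are simply present or absent per document.
const books = products.filter((p) => "author" in p);
```

The point of the sketch: the "general product" fields stay queryable across every type, and each type carries only the attributes it actually has.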
52. EAV
as popularized by Magento
“For purposes of flexibility, the Magento database heavily
utilizes an Entity-Attribute-Value (EAV) data model.
As is often the case, the cost of flexibility is complexity -
Magento is no exception.
The process of manipulating data in Magento is often
more “involved” than that typically experienced using
traditional relational tables.”
- Varien
53. EAV
• Crazy SQL queries
• Hundreds of joins in a
query... or
• Hundreds of queries joined
in the application
• No database enforced
integrity
57. Single Table Inheritance
(insanely wide tables)
• No data integrity enforcement
• Can only use FKs for common
elements
• Very wasteful (but disk is
cheap!)
• Can’t effectively index
58. Generic Columns
• No data integrity enforcement
• No data type enforcement
• Can only use FKs for common
elements
• Wasteful (but disk is cheap!)
• Can’t index
59. Serialized in Blob
• Not searchable
• No integrity
• All the disadvantages of a
document store, but none of the
advantages
• Should never be used
• One exception is Oracle XML,
which operates similarly to a
document store
60. Concrete Table Inheritance
(a table for each product attribute set)
• Allows for data integrity
• Querying across attribute
sets is quite hard (lots
of joins, OR statements
and full table scans)
• New table needs to be
created for each new
attribute set
61. Class table inheritance
(single product table, each attribute set in its own table)
• Likely the best SQL solution within the constraints
• Supports data type enforcement
• No data integrity enforcement
• Easy querying across categories (for browse pages) since common data is in a single table
• Every set needs a new table
• Requires a ton of foresight, as changes are very complicated
79. Wanna Play?
• grab products.js from http://github.com/spf13/mongoProducts
• mongo --shell products.js
• > use mongoProducts
80. Embedded documents
are great for orders
•Ordered items need to be fixed at the
time of purchase
•Embed them right in the order
db.order.find( { 'items.sku': '00e8da9f' } );
db.order.find( {
'items.details.actor': 'James Stewart'
} ).count();
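The two queries above assume an order document with its items embedded as an array of sub-documents. A hypothetical sketch of that shape, with a naive stand-in for MongoDB's dotted-path array matching (illustrative only, not driver code):

```javascript
// Hypothetical order document: items are copied in at purchase time,
// so the order is self-contained and fixed.
const order = {
  _id: "ord-42",
  total: 24.98,
  items: [
    { sku: "00e8da9f", qty: 1, details: { actor: "James Stewart" } },
    { sku: "a1b2c3d4", qty: 2, details: { format: "CD" } },
  ],
};

// Minimal stand-in for dotted-path matching like 'items.details.actor':
// the path matches if ANY embedded item has that value.
function matchesItemField(doc, path, value) {
  const [arrayField, ...rest] = path.split(".");
  return doc[arrayField].some(
    (item) => rest.reduce((obj, key) => obj && obj[key], item) === value
  );
}

const bySku = matchesItemField(order, "items.sku", "00e8da9f");
const byActor = matchesItemField(order, "items.details.actor", "James Stewart");
```

Because the items live inside the order document, both lookups touch a single document, which is what makes the queries on the slide cheap.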
81. What about
transactions?
Using the right solution for each situation
99. Isolation
• // Pseudo-isolated updates
db.foo.update( { x : 1 } , { $inc : { y : 1 } } , false , true );
• // Isolated updates
db.foo.update( { x : 1 , $atomic : 1 } , { $inc : { y : 1 } } , false , true );
• But there are caveats...
• Despite the $atomic keyword, this is not an atomic update, since atomicity implies “all or nothing”
• $atomic here means the update is done without interference from any other operation (isolated)
• An isolated update can only act on a single collection. Multi-collection updates are not transactional, thus not isolatable.
107. • Atomic single document writes
• If you need atomic writes across multi-document transactions, don't use Mongo
• Many if not most e-commerce transactions can be accomplished within a single document write
• Unique indexes
• This only works on keys used by the entire collection
• Isolated (not atomic) single collection updates
• Mongo does not support locking
• There are ways to work around this
• It’s durable
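The first point — a single-document write is atomic, and most e-commerce transactions fit inside one — can be sketched in plain JavaScript. This is a simulation of the pattern, not the MongoDB API:

```javascript
// Simulated single-document write: decrement stock and record the order
// together, but only if the precondition (enough stock) still holds.
// Either every field changes or none does -- the "all or nothing" the
// slides attribute to single-document writes.
function purchase(productDoc, qty, orderId) {
  if (productDoc.qty < qty) return false; // precondition failed: no partial effects
  productDoc.qty -= qty;                  // all effects applied together...
  productDoc.orders.push(orderId);        // ...within the same document
  return true;
}

const doc = { sku: "abc", qty: 1, orders: [] };
const first = purchase(doc, 1, "ord-1");  // succeeds, stock goes to 0
const second = purchase(doc, 1, "ord-2"); // fails, document left untouched
```

In MongoDB the equivalent is one conditional `update` on one document, so no cross-document coordination is needed.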
111. There are ways to
guarantee ACID
properties in MongoDB
Here are two good approaches useful for
e-commerce transactions
113. Optimistic Concurrency
• Read the current state of a product
• Make your changes with the assertion that your product has the same state as it did when you last read it
117. Optimistic concurrency in MongoDB
We’ll use an update-if-current strategy.
This example is straight from the documentation:
> t = db.inventory
> p = t.findOne({sku:'abc'})
> t.update({_id:p._id, qty:p.qty}, {'$inc': {qty: -1}});
> db.$cmd.findOne({getlasterror:1});
{"err" : null , "updatedExisting" : true , "n" : 1 , "ok" : 1}
// it worked
... If that didn't work, try again until it does.
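“Try again until it does” is usually a small retry loop around the update-if-current write. A hypothetical in-memory sketch (the compare step simulates the server-side conditional update; these are not actual driver calls):

```javascript
// In-memory stand-in for a collection holding one inventory document.
const inventory = { _id: 1, sku: "abc", qty: 5 };

// Update-if-current: apply the change only if qty still equals what we read,
// mirroring update({_id, qty: seenQty}, {$inc: {qty: -1}}).
function updateIfCurrent(doc, expectedQty, delta) {
  if (doc.qty !== expectedQty) return false; // someone changed it first
  doc.qty += delta;
  return true;
}

// Optimistic retry loop: re-read and retry on conflict, give up eventually.
function decrementWithRetry(doc, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const seenQty = doc.qty;        // read the current state
    if (seenQty <= 0) return false; // out of stock, nothing to claim
    if (updateIfCurrent(doc, seenQty, -1)) return true;
  }
  return false; // persistent contention: surface the failure
}

const ok = decrementWithRetry(inventory);
```

Under low contention the loop almost always succeeds on the first pass, which is exactly the long-tail-catalog case the next slides describe.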
120. Optimistic concurrency
• Read the current state of a product.
• Make your changes with the assertion that your product has the same state as it did when you last read it.
• It is also possible to use OCC to bootstrap pessimistic concurrency and fake row-level locking
123. OCC works great for
companies like Amazon
•Amazon has a long-tail catalog
•A long tail catalog lends itself well to
optimistic concurrency, because it has
low data contention
133. Flash sales and auctions are defined by high data contention
• The model doesn't work otherwise
• They can't afford to be optimistic
• Order really matters
139. 1. I go to Barneys and see a pair of shoes I just have to
buy.
2. I call “dibs” (by grabbing them off the shelf).
3. I take them up to the cash register and purchase
them:
• Store inventory has been manually decremented.
• I pay for them with my trusty AmEx.
4. If all goes according to plan, I walk out of the store.
5. If my card was declined, the shoes are “rolled back”
... out onto the shelves and sold to the next customer
who wants them.
140. All of this is
accomplished
without concurrency
145. 1. Select a product.
2. Update the document to hold inventory.
• Store inventory has been decremented.
3. Purchase the product(s)
• Process payment
4. Roll back if anything went wrong.
151. MongoDB e-commerce
transactions
• Each Item (not SKU) has its own document
• Document consists of...
• a reference to the SKU (product)
• a state ( available / sold / ... )
• potentially other data (timestamp, order
ref)
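With one document per item, “claiming” inventory is a single conditional state flip on one document. A hypothetical in-memory sketch of that claim, mirroring `update({ref_id, state: 'available'}, {$set: {state: 'ordered'}})`:

```javascript
// One document per physical item, each referencing its SKU (product).
const items = [
  { _id: "i1", ref_id: "sku-abc", state: "available" },
  { _id: "i2", ref_id: "sku-abc", state: "sold" },
];

// Claim any available item for a SKU by flipping its state in one step.
// Returns the claimed item, or null when no inventory is available.
function claimItem(items, refId) {
  const item = items.find((i) => i.ref_id === refId && i.state === "available");
  if (!item) return null;
  item.state = "ordered";
  return item;
}

const claimed = claimItem(items, "sku-abc"); // grabs i1
const again = claimItem(items, "sku-abc");   // nothing left: null
```

Because each claim touches exactly one document, two shoppers can never order the same physical item, with no locks involved.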
153. Transactions in MongoDB
We’ll use a simple update statement here.
> t = db.inventory
> sku = db.sku.findOne({sku:'abc'})
> t.update({ref_id:sku._id, state: 'available'}, {'$set': {state: 'ordered'}});
> db.$cmd.findOne({getlasterror:1});
{"err" : null , "updatedExisting" : true , "n" : 1 , "ok" : 1}
// it worked
... If that didn't work, no inventory available
157. Cart in Cart Action
An added benefit: it can easily provide an inventory hold while items are in the cart.
> t = db.inventory
> sku = db.sku.findOne({sku:'abc'})
> t.update({ref_id:sku._id, state: 'available'}, {'$set': {state: 'in cart'}});
> db.$cmd.findOne({getlasterror:1});
{"err" : null , "updatedExisting" : true , "n" : 1 , "ok" : 1}
// it worked
Just like reality, each item is either available, in a cart, or purchased.
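Those item states form a small state machine. A hypothetical sketch of the per-item transitions, where the transition table (including the rollback path back to `available`) is an assumption, not something the slides specify:

```javascript
// Assumed legal transitions for an item document. 'in cart' -> 'available'
// and 'ordered' -> 'available' are the rollback paths (cart expired,
// payment declined -- the shoes go back on the shelf).
const transitions = {
  available: ["in cart", "ordered"],
  "in cart": ["ordered", "available"],
  ordered: ["purchased", "available"],
  purchased: [],
};

// Apply a transition only if the table allows it; reject everything else.
function setState(item, next) {
  if (!transitions[item.state].includes(next)) return false;
  item.state = next;
  return true;
}

const item = { _id: "i1", state: "available" };
setState(item, "in cart");                 // hold inventory in the cart
setState(item, "ordered");                 // checkout begins
const illegal = setState(item, "in cart"); // ordered -> in cart: rejected
```

In MongoDB the same guard falls out of the conditional update itself: the `state: 'available'` clause in the query is what enforces the legal transition.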
160. http://spf13.com
http://github.com/spf13
@spf13
Questions?
download at mongodb.org
PS: We’re hiring!! Contact us at jobs@10gen.com
Editor's Notes
• Remember in 1995 there were around 10,000 websites. Mosaic, Lynx, Mozilla (pre-Netscape) and IE 2.0 were the only web browsers. Apache (Dec ’95), Java (’96), PHP (June ’95), and .NET didn’t exist yet. Linux just barely (1.0 in ’94).
• By reducing the transactional semantics the db provides, one can still solve an interesting set of problems where performance is very important, and horizontal scaling then becomes easier.
• Actually, just the first 1/3 of it.
• Ironically this is how Magento solves the performance problems associated with EAV, by caching the data into insanely wide tables.
• Can’t create a FK as each set references a different table. “Key” really made of attribute table name id and attribute table name.
• Whenever you use inter-system coordination you need to implement your own atomic checks in the application... But SOAP does have transactions, so not quite accurate. Kyle’s idea... but we are fairly atomic with authorize.net. Atomicity, consistency, isolation, durability.
• Mongo has a grip of atomic operations: set, unset, etc.
• update( { where }, { values }, upsert?, multiple? ). Isolated is not atomic. Atomic implies that there is an all-or-nothing semantic to the update; this is not possible with more than one document. Isolated just means that you are the only one writing when the update is done; each update is done without any interference from any other. MongoDB supports atomic operations on single documents. MongoDB does not support traditional locking and complex transactions for a number of reasons: First, in sharded environments, distributed locks could be expensive and slow. MongoDB's goal is to be lightweight and fast. We dislike the concept of deadlocks; we want the system to be simple and predictable without these sorts of surprises. We want MongoDB to work well for realtime problems: if an operation may execute which locks large amounts of data, it might stop some small light queries for an extended period of time. (We don't claim MongoDB is perfect yet in regards to being "real-time", but we certainly think locking would make it even harder.)
• Lemme show you an example.
• Or instead of qty, use a version_id: object id / md5 as a version.
• Imagine what would happen if everyone tried to access the same record at the same time. Just think of all those spinning while loops :)
• Mind if I tell you a story?
• By using a single document we avoid any need for complicated transactions. Any number of concurrent read operations are allowed, but typically only one write operation (although some write operations yield, and in the future more concurrency will be added). The write lock acquisition is greedy: a pending write lock acquisition will prevent further read lock acquisitions until fulfilled.
• Inventory can be provided by using count. Can be cached as a value on the sku. As the items themselves are atomic, the order need not be to reserve inventory.
• Remember: by default, update only updates 1 document and the operation is atomic on that document.