MongoDB World 2019: The Sights (and Smells) of a Bad Query
“Why is MongoDB so slow?” you may ask yourself on occasion. You’ve created indexes, you’ve learned how to use the aggregation pipeline. What the heck? Could it be your queries? This talk will outline what tools are at your disposal (both in MongoDB Atlas and in MongoDB server) to identify inefficient queries.
8.
What stinks?
https://en.wikipedia.org/wiki/Code_smell
A code smell is any characteristic in the source code of
a program that possibly indicates a deeper problem.
Code smells are usually not bugs; they are not technically incorrect and
do not prevent the program from functioning.
Instead, they indicate weaknesses in design that may slow down
development or increase the risk of bugs or failures in the future.
9.
Scenario
• Atlas M10 Cluster
• 2-3M documents
• Generated and imported using a template (mgeneratejs); a sketch follows below
• Ran some ad hoc queries
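A minimal sketch of how such a dataset can be built; the template fields, connection string, and collection name are illustrative assumptions, not the exact ones from the talk:

mgeneratejs '{ "first_name": "$first", "last_name": "$last", "age": "$age", "address": { "state": "$state" } }' -n 3000000 | \
  mongoimport --uri "mongodb+srv://user:pass@cluster0.example.mongodb.net/test" --collection people

mgeneratejs emits one generated JSON document per line, so its output pipes straight into mongoimport.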
16.
Let’s see that again … as an Explain Plan
• Query time higher than we would expect
• COLLSCAN
• Large number of documents scanned (the entire collection!); see the explain sketch below
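This is reproducible in the mongo shell with explain (the collection and field names below are assumptions):

db.people.find({ age: { $gt: 25 } }).explain("executionStats")
// Smells to look for in the output:
//   queryPlanner.winningPlan.stage: "COLLSCAN"  -> no index was usable
//   executionStats.totalDocsExamined            -> the entire collection
//   executionStats.nReturned                    -> far fewer than examined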
22.
Let’s find some data – in order this time
• Looking for everyone older than 25 from Utah, sorted by name
• IXSCAN … but slow
• Far more keys and documents examined than returned (we'll come back to this one)
• In-memory sort! (see the sketch below)
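A sketch of the query shape described above, with assumed field names. With an index on { age: 1 } alone, the plan is an IXSCAN, but every key for age > 25 is examined, most fetched documents are discarded by the state filter, and the sort still happens in memory:

db.people.find({ "address.state": "UT", age: { $gt: 25 } })
  .sort({ last_name: 1 })
  .explain("executionStats")
// Smells: totalKeysExamined and totalDocsExamined far larger than
// nReturned, plus a SORT stage in the winning plan (the in-memory sort).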
26.
What’s the explain telling us?
• IXSCAN using the expected index
• Faster execution
• Same number of documents scanned and returned …
• In-memory sort??? (see the ESR index sketch below)
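A likely culprit is a compound index ordered equality-then-range, e.g. { "address.state": 1, age: 1 }: it targets the matching documents precisely but cannot return them in sorted order. A sketch of an index following the ESR (Equality, Sort, Range) rule, again with assumed field names:

db.people.createIndex({ "address.state": 1, last_name: 1, age: 1 })
// Equality field first, Sort field second, Range field last.
// Re-running the explain should now show an IXSCAN with no SORT stage
// and totalDocsExamined equal to nReturned.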
29.
METRICS AND LOGS AND CHARTS … OH MY!
{ context: "analysis", smell: "good?" }
30.
What can the Atlas Metrics tell us?
• Sort operation that couldn't use an index
• Ratio of documents scanned to the number of documents returned (a shell sketch follows below)
• Increased average read/write time per operation
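The scanned-to-returned ratio (surfaced in Atlas as Query Targeting) can also be sampled from the mongo shell; a sketch using serverStatus counters, which are cumulative since the mongod process started:

var s = db.serverStatus()
// documents scanned per document returned, server-wide
print(s.metrics.queryExecutor.scannedObjects / s.metrics.document.returned)
// index keys scanned per document returned
print(s.metrics.queryExecutor.scanned / s.metrics.document.returned)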
39.
What can stink?
• Alerts (from Atlas, Ops Manager, Cloud Manager)
• COLLSCAN (in logs or explain plans)
• SORT (in explain plans or in logs as hasSortStage)
• Non-ESR (Equality, Sort, Range) compound indexes
• Sharp rise in one or more metrics (in Atlas, Ops Manager or Cloud Manager)
• Queries slower than slowms
• Large docsExamined/n ratio (a profiler sketch for these last two follows below)
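Several of these smells can be hunted with the database profiler; a sketch where the slowms threshold and the 100x ratio are illustrative:

// Profile operations slower than 100 ms
db.setProfilingLevel(1, 100)
// Recent slow operations that needed an in-memory sort
db.system.profile.find({ hasSortStage: true }).sort({ ts: -1 })
// Operations examining over 100x more documents than they return
db.system.profile.find({
  nreturned: { $gt: 0 },
  $expr: { $gt: ["$docsExamined", { $multiply: [100, "$nreturned"] }] }
})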
40.
Links
Notes/links from this talk available at:
http://bit.ly/ab-mdbw19
alexbevi
alexbevi
@alexbevi