Charity Majors
@mipsytipsy
Parse Production Engineer
Igor Canadi
@igorcanadi
Facebook Software Engineer
Storage Engine Wars
at Parse
Parse
“does crazy shit with MongoDB”
Parse
• 500k+ workloads
• millions of colls, 10’s of millions of indexes
• ~35 replica sets
• 240 GB primary data
• nearly 1 PB data on AWS
• ~2 DBA-type engineers
Storage Engine Goals
• handle 10M collections+indexes
• compression
• document-level locking
• no stalls or outliers
• faster writes, ballpark read latencies
Storage Engine Wars
RocksDB
RocksDB
• open source storage engine
• write-optimized (LSM trees)
• highly compressible
• heavy investment by Facebook
• vibrant open-source community
Battle tested
• LinkedIn: Feed, Apache Samza
• Yahoo: Sherpa’s local storage
• Facebook: tens of PB, tens of billions of QPS,
hundreds of different workloads
RocksDB::Internals
LSM architecture
[Diagram: write requests and read requests from the application go to the memtable in RAM (writes also go to the transaction log); full memtables are flushed to read-only data files on disk; periodic compaction merges those files]
Write amplification - B-tree
[Diagram: read a page of rows from disk, modify one row, write the whole page back]
Write amplification - Leveled LSM
[Diagram: levels 0 through 3, each level ~10x bigger than the previous; compaction pushes data down one level at a time]
Fragmentation - B-tree
[Diagram: inserting a row into a full page splits it into two half-empty pages]
Fragmentation - LSM
[Diagram: levels 0 through 3; the last level holds the stable data, the upper levels hold only deltas]
Comparing with InnoDB
[Charts: database size (relative) and bytes written (relative), InnoDB vs RocksDB; RocksDB is roughly half of InnoDB on both]
LSM read penalty
[Diagram: B-tree vs LSM read paths; in a B-tree, a range scan with a covering index is a sequential read of the leaf pages under the internal nodes; in an LSM, a range scan has to consult the memtable and L0-L3, so we need more reads for range scans]
Integration with
MongoDB
type             | key                      | value
Collection       | <prefix><record_id>      | <BSON doc>
Unique index     | <prefix><key>            | <record_id>
Non-unique index | <prefix><key><record_id> |
Storage Engine Goals
• handle 10M collections+indexes
• compression
• document-level locking
• no stalls or outliers
• faster writes, ballpark read latencies
Storage efficiency
Storage efficiency
Higher throughput
No stalls
Possibly trade-off read latencies
Latencies
TODO latency graph
Key findings
• 90% compression
• 50-200x faster writes
• much less IO exercised when records are smaller
• queries marginally slower when:
• scanning a lot of documents, large documents
• querying cached data
• capped collections suck
RocksDB IS AWESOME
Operations::Rollout
Gaining confidence
• snapshot and replay (Flashback)
• hidden secondaries
• secondaries
• primaries
• mixed replsets for a loooong time :)
MongoDB 3.0 issues
• $nearSphere 10x performance regression
• https://jira.mongodb.org/browse/SERVER-18056
• long running reads on secondaries blocking
replication
• https://jira.mongodb.org/browse/SERVER-18190
Tombstone trap
R T R T R T R T R… T R T R R   (R = live record, T = tombstone)
Solution: Automagically compact
Operations::Production
Backing up RocksDB
• table files are immutable
• …so backups are easy. just hardlink!
• we’re building a tool that will send incremental
backups to S3
Monitoring
db.serverStatus()['rocksdb']
Monitoring
db.serverStatus()['rocksdb']
Monitoring
• Tombstones
• Disk I/O saturation
• CPU usage
• Latency
db.serverStatus()['rocksdb']
Current status
• deployed as primary on 25% of replica sets
• secondaries on 50% of replica sets
• made ops tools storage engine agnostic
• made monitoring storage engine aware
Next steps
• more performance improvements
• improve operational tooling, monitoring
• continue to test alongside WT, TokuMX
• ** wider community adoption **
Future of RocksDB
Future of Mongo+Rocks
World domination.
Let us know what you think. :)
<< link to google group >>
Resources
• http://rocksdb.org
• http://blog.parse.com/announcements/mongodb-rocksdb-parse/
• http://blog.parse.com/learn/engineering/mongodb-rocksdb-benchmark-setup-compression/
• http://blog.parse.com/learn/engineering/mongodb-rocksdb-writing-so-fast-it-makes-your-head-spin/
• http://www.acmebenchmarking.com/
• << links to debs and rpms >>
Charity Majors
@mipsytipsy
Parse Production Engineer
Igor Canadi
@igorcanadi
Facebook Software Engineer

Storage Engine Wars at Parse

Editor's Notes

  • #2 Hi there MongoDB World! My name is Charity. I work on backend operations at Parse, and this is Igor Canadi, who is an amazing software engineer on the RocksDB database engineering team here at Facebook. We are super excited to be here. This has been an *unbelievably* exciting year to be working in the MongoDB space.
  • #3 It was just one year ago at MongoDB World that Eliot announced that they were planning to build a modular storage engine api. They had a janky little RocksDB demo that they ran in the keynote, do you guys remember that? They had a few engineers who were literally up all night trying to put the mongo + rocks demo together leading up to the conference, and omg, they were crossing their fingers so hard that everything would work during the live demo. :) And now look at where we are! There’s the old mmapv1 engine, the RocksDB engine, the newly acquired WiredTiger, the TokuMX engine which was recently acquired by Percona and is now compatible with the oplog, and even more storage engine innovation in the works. And at Parse, we’ve been watching this all play out VERY closely. We developed a framework for load testing our production workloads offline, and we’ve benchmarked basically every viable engine out there — Rocks, WT, TokuMX, and mmap. We’re taking this problem *very* seriously. Why?
  • #4 … because we do crazy shit with mongo.
  • #5 We use MongoDB in ways that it was never really designed for. Which isn’t actually saying anything bad about mongo, because nobody has ever designed a database for what we do. It’s really saying something amazing about mongo that we’ve been able to build a business supporting so many amazing developers, while doing so many crazy things on it! We run 500k+ apps on MongoDB. That means half a million different workloads — write heavy, read heavy, small objects, large objects, geo apps, games, social networking apps — you name it, we run it somewhere. We have nearly a petabyte of mongo data running on AWS, including ~240 gb of primary data. And we do all of this with about two people spending their time managing our DB infra.
  • #6 So mmap is obviously great for some things. Reads are pretty fast, esp if you can fit everything into memory. It’s simple and easy to understand and administer. uhhh — is there anything else nice that we can say about mmap? Not really. Ok, so when we started salivating over the storage engine API, we made a wishlist of all the things we really, really wanted. Number one, it has to be able to handle our workloads, which means anything that uses file-per-index and file-per-collection is basically out. We couldn’t even get half of our workloads to import into TokuMX or WiredTiger. We have millions of collections and tens of millions of indexes, so we start running into filesystem and storage engine limitations really fast. So … it has to *work*, right. The second most important thing is compression. This is important for a few reasons. Like cost — when you’re storing a petabyte of data in AWS on database-class storage, using PIOPS or SSDs, that gets really freaking expensive. SSDs are expensive, snapshots are expensive — like 40% of our AWS budget goes just to running mongo hardware. This is especially meaningful for us because a lot of our data is infrequently accessed! Not all mobile apps are active in any given week, so we shouldn’t have to have ALL of our data in RAM. Compression is doubly great because the smaller the storage footprint, the more we can fit into memory and the faster queries will be. Everything else on our wishlist is basically dwarfed by the importance of those two bullet points. But as long as we’re making a wishlist, I would wish for higher throughput — document level locking instead of db-level locking. And I really don’t ever want my storage engine to lock up and stall or have weird outlier latencies.
  • #7 So …. we benchmarked basically everything out there. We started with TokuMX, recently acquired by Percona, over a year ago. It was the only other option out there that had compression at the time, and compression was super important to us. But the oplog between toku and mongo wasn’t compatible, which made the migration path back and forth basically impossible, so we eventually gave up on that. Next we started trying to benchmark wiredtiger, but kept running into stalls. For a long time it wouldn’t even import our data sets. And around this time we started making friends with the RocksDB team here at Facebook.
  • #8 And we realized … RocksDB is *awesome*.
  • #9 RocksDB, for those of you who don’t know, is an open source storage engine developed by Facebook. It uses Log-Structured-Merge trees instead of B-trees, so it’s highly write optimized, and it’s *incredibly* fast. The compression rates are phenomenal — I have graphs for you later, just wait and see how amazing this shit is. It also has a really powerful open source community, and it’s already being used for production workloads at Facebook and several other big shops.
  • #10 As a DBA, you never want to be running something that no one else in the world is running. It’s terrifying, right? This is why it mattered to us that RocksDB is already being used in production at a number of shops. RocksDB is being used as the storage engine for LinkedIn’s feed and Yahoo’s biggest distributed database Sherpa. There are a bunch of other startups that are also using Rocks although they aren’t public about it. And let’s not forget the elephant in the room — Facebook. Facebook is running tens of billions of QPS on RocksDB across hundreds of different services and workloads. RocksDB is quickly becoming the standard way to store online data at Facebook, and we hope to make it the industry standard. Facebook wants RocksDB to be the best write-optimized storage engine in the world, and they’ve committed a lot of resources to making that come true.
  • #11 So now that I’ve talked a little bit about Parse’s workload and what our very special needs are, I want to turn it over to Igor. Igor is the badass software engineer who’s been leading the effort to integrate MongoDB with RocksDB. He’s going to take you on a dive under the hood and talk a little bit more about the internals of RocksDB — how LSM trees work, why it’s fast and efficient and compressible, etc. After that we’re going to share some of our war stories rolling out rocks, and our internal benchmarks using RocksDB in production. Finally we’ll wrap up by giving you a few tips about running and optimizing Rocks in production. => Igor Thank you Charity. So Charity mentioned that RocksDB is write-optimized and it’s based on Log-structured-merge trees. In the next couple of slides, I’ll talk a bit more about what that means.
  • #12 Let’s start with a quick overview of the standard LSM architecture. In an LSM architecture we have a write buffer in memory that we call the memtable. When a write request comes in, we write it into the memtable. We also write it to the transaction log, so that we can recover the data if the process crashes. Once the memtable is full, we flush it out to disk. After a while, this process will generate a bunch of files. For that reason, we also run periodic compaction. Compactions take a couple of files and merge them together. When a read request comes in, we first consult the memtable. If the key we’re looking for is not there, we have to go read each of those files on disk. This was a quick overview of the LSM architecture. Next, let’s discuss how it differs from B-tree-based architectures.
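For anyone who wants to poke at this write/read path directly, here is a minimal sketch against the plain RocksDB C++ API (the bare library, not the MongoRocks integration; the path and keys are made up): every Put() lands in the memtable plus the write-ahead log, and Get() checks the memtable before the files on disk.

```cpp
// Minimal LSM write/read path sketch using the stock RocksDB C++ API.
#include <cassert>
#include <string>
#include "rocksdb/db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/lsm_demo", &db);
  assert(s.ok());

  // Write path: goes to the memtable and the transaction log,
  // and gets flushed to an immutable SST file once the memtable fills up.
  s = db->Put(rocksdb::WriteOptions(), "user:42", "{...BSON...}");
  assert(s.ok());

  // Read path: consult the memtable first, then the files on disk.
  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "user:42", &value);
  assert(s.ok() && value == "{...BSON...}");

  delete db;
  return 0;
}
```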
  • #13 First, we’ll discuss write amplification. Write amplification is the ratio of bytes written to storage to bytes written to the database: for every byte that comes into your database, how many bytes do you need to rewrite on storage? For example, if your database is taking 10MB/s and you’re writing 100MB/s to storage, your write amplification is 10. Obviously, the smaller the write amplification, the better. Let’s assume that we want to modify a row in a b-tree based architecture. Before we modify the page, we need to load it from storage. After we load it from storage, we modify the row and then write the dirty page out to disk. What is the write amplification in this scenario? Well, it depends on the record size. If we’re modifying only a single byte and the b-tree page is 4KB, our write amplification is 4096. If our row is 100 bytes, the write amplification is 40. Write amplification of b-trees depends on the record size. Not only your document size, but also the size of each index entry, which can be quite small. Another thing to notice is that if we want to modify a page, we need to read it from storage. So if, for example, your indexes don’t fit into memory, your write performance can suffer.
  • #14 Let’s now talk about write amplification in a Leveled LSM architecture. In a Leveled LSM architecture, files are organized into levels. Each level is usually 10 times bigger than the previous one. How do we calculate write amplification here? When a byte comes into the database, it’s written into level 0. This byte’s goal is to finally end up in the last level. We can say that the last level is the stable size of the database, while the other levels are just deltas. For a byte from level 0 to end up in level 3, it needs to go through 3 compactions. Each compaction will push it one level down. What we found empirically is that each compaction adds a write amplification of 3-5 and that the total write amplification of the system is 9-15. However, there are two interesting things to notice here: first, when maintaining non-unique secondary indexes, you don’t need to read anything from storage; second, write amplification doesn’t depend on the record size. It’s the same whether the records are 10 bytes or 100 KB.
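To make those numbers concrete, here is the back-of-the-envelope arithmetic from the last two notes as a tiny program; the 4KB page, the 3-5x per-compaction cost, and the three compaction steps are the example figures above, not measurements.

```cpp
// Back-of-the-envelope write amplification for the two notes above.
#include <cstdio>

int main() {
  // B-tree: the whole dirty page is rewritten, so WA depends on record size.
  const double page_bytes = 4096.0;
  std::printf("b-tree, 1-byte update:   WA ~ %.0f\n", page_bytes / 1.0);    // 4096
  std::printf("b-tree, 100-byte update: WA ~ %.0f\n", page_bytes / 100.0);  // ~41

  // Leveled LSM: a byte is rewritten once per compaction step (L0 -> L3),
  // and each step empirically costs ~3-5x, independent of record size.
  const int steps = 3;
  std::printf("leveled LSM:             WA ~ %d-%d\n", steps * 3, steps * 5);  // 9-15
  return 0;
}
```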
  • #15 Next, let’s discuss fragmentation. Let’s assume you have a b-tree page that’s 4KB. If you insert a 100-byte row into that page when it’s full, you’ll need to split the page and create two pages. Those two pages will be 8KB on disk, although you’re only using a bit more than half of that 8KB. This means that you can use up to 100% extra space due to fragmentation.
  • #16 On the other hand, fragmentation on leveled LSM is only 10-20%. You can think of level 3 as the stable size of the database. Levels 0, 1 and 2 are just keeping deltas and they are causing fragmentation. As I said earlier, level 2 is 10 times smaller than level 3, so it’s easy to see that fragmentation here is adding just about 11% extra data. So in theory RocksDB should be better on both fragmentation and write amplification. How does it work on real-world data?
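The 100% vs. roughly 11% fragmentation figures fall straight out of the geometry; a quick sanity check, assuming the 10x level sizing described above.

```cpp
// Fragmentation sanity check for the two notes above.
#include <cstdio>

int main() {
  // B-tree: a full 4KB page splits into two pages holding ~4.1KB of data,
  // so in the worst case you carry close to 100% extra space.
  const double data_kb = 4.1, on_disk_kb = 8.0;
  std::printf("b-tree worst-case overhead ~ %.0f%%\n",
              (on_disk_kb / data_kb - 1.0) * 100.0);  // ~95%

  // Leveled LSM: the levels above the last one hold only deltas.
  // L2 = 10%, L1 = 1%, L0 = 0.1% of the stable last level.
  double overhead = 0.0, fraction = 1.0;
  for (int i = 0; i < 3; ++i) {
    fraction /= 10.0;
    overhead += fraction;
  }
  std::printf("leveled LSM overhead ~ %.1f%%\n", overhead * 100.0);  // ~11.1%
  return 0;
}
```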
  • #17 As an experiment, we compared MySQL on RocksDB with our production instance of MySQL on InnoDB. And here are the results! The left graph shows the database size. RocksDB needs only half the space of InnoDB. The right graph shows total bytes written through time. RocksDB also writes half the amount of data of InnoDB. InnoDB is a very good and mature storage engine, so we’re very excited that RocksDB was able to move the interesting metrics by so much. Now that we’ve shown these great results, you must be thinking: “There must be a catch somewhere”. TODO get raw data
  • #18 …and that catch is that we need to pay a penalty on reads. A B-tree keeps the data nice and sorted in the leaf pages. When you insert a row into a b-tree, it tries really hard to still keep the data in the leaf pages sorted. It pays the penalty on writes. LSM, on the other hand, takes your write, puts it into the memtable and it’s done. When you do a range scan on a B-tree, your data is nice and sorted and you read your leaf pages sequentially. Doing a range scan in an LSM is a bit more expensive. There are more places that you need to go look for your data. Sure, you can make sure that your upper levels are cached, but it takes more memory to cache the upper levels of an LSM than it takes to cache the internal nodes of a b-tree.
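For reference, a range scan through the plain RocksDB C++ API looks like the sketch below; under the hood the iterator is merge-sorting the memtable and every level, which is exactly where the extra reads come from. (Illustrative only; the prefix convention is made up.)

```cpp
// Range scan sketch: one logical scan can touch the memtable and several
// on-disk levels underneath the iterator.
#include <cstdio>
#include <memory>
#include <string>
#include "rocksdb/db.h"

void ScanPrefix(rocksdb::DB* db, const std::string& prefix) {
  std::unique_ptr<rocksdb::Iterator> it(
      db->NewIterator(rocksdb::ReadOptions()));
  for (it->Seek(prefix); it->Valid() && it->key().starts_with(prefix);
       it->Next()) {
    std::printf("%s -> %zu value bytes\n", it->key().ToString().c_str(),
                it->value().size());
  }
}
```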
  • #19 I just want to give you a rough idea of our integration with Mongo. Our primary goal was to support Parse, which has millions and millions of collections and indexes. This led us early on to an important design decision. We decided to keep all data in a single RocksDB namespace. We distinguish data for different collections and indexes by a 4-byte prefix to each key. Other than the prefix, the data format is the same as WiredTiger’s. For a collection, our key is a record_id and the value is a BSON document. For a unique index, our RocksDB key is the index key and the value is a record_id which points to the document. And for a non-unique index, we combine the index key and the record_id into a single RocksDB key. That way one index key can map to multiple record ids. And with that, I’ll turn it over to Charity, who’ll talk about some of our benchmark results.
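A rough illustration of that key scheme in C++; to be clear, this is a sketch of the idea rather than the actual MongoRocks encoding, and the big-endian prefix helper plus plain string concatenation are simplifications.

```cpp
// Sketch of the single-namespace key layout: a 4-byte prefix identifies the
// collection or index, and everything lives in one RocksDB keyspace.
#include <cstdint>
#include <string>

// Hypothetical helper: big-endian so keys with the same prefix sort together.
std::string EncodePrefix(uint32_t prefix) {
  std::string out(4, '\0');
  out[0] = static_cast<char>(prefix >> 24);
  out[1] = static_cast<char>(prefix >> 16);
  out[2] = static_cast<char>(prefix >> 8);
  out[3] = static_cast<char>(prefix);
  return out;
}

// Collection entry:   <prefix><record_id>  ->  BSON document
std::string CollectionKey(uint32_t prefix, const std::string& record_id) {
  return EncodePrefix(prefix) + record_id;
}

// Unique index entry: <prefix><index_key>  ->  record_id
std::string UniqueIndexKey(uint32_t prefix, const std::string& index_key) {
  return EncodePrefix(prefix) + index_key;
}

// Non-unique index entry: <prefix><index_key><record_id>, so one index key
// can map to many record ids.
std::string NonUniqueIndexKey(uint32_t prefix, const std::string& index_key,
                              const std::string& record_id) {
  return EncodePrefix(prefix) + index_key + record_id;
}
```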
  • #20 Alright, so you may remember this exact same slide from earlier — here were our goals for the magical storage of the future that we wanted. We wanted it to handle 10s of millions of collections and indexes, be highly compressible, have no stalls or severe outlier queries, and we were willing to trade off read latencies to some extent to get better writes. So let’s look at some graphs about how that went. Btw you don’t have to worry about screenshotting any of this — we have all of these results and more published on the Parse blog, and we will serve up links to the posts at the end of the presentation.
  • #21 Storage efficiency. HOLY CRAP. We got literally 90% storage savings. This would cut our petabyte of storage in AWS down to *100 terabytes total*. Can you even IMAGINE saving 90% on your storage costs. WiredTiger had similar results. Note that this is using the B-tree implementation of WT, since the LSM version won’t be available until mongo version 3.2.
  • #22 Query latencies. We benchmarked 2.6 mmap, 3.0 mmap, and 3.0 rocksdb so that you would get as close to an apples-to-apples comparison on 3.0 as possible. We unfortunately weren’t able to benchmark the WT b-tree implementation because we couldn’t get it to run with our data sets. This is taken from one of our shared replica sets for 10s of thousands of apps, so it has lots of good contention and tenancy data. As you can see, inserts, updates and deletes are somewhere between 50x and a couple hundred times as fast. Write ops are simply blindingly fast in RocksDB. Average query latencies are a little bit slower. Though p99 is about the same or actually faster than 2.6 mmap. Queries in RocksDB are slower if they’re scanning a lot of objects or objects that aren’t cached in memory. Keep in mind, this is basically the worst case scenario for queries because Parse is like the wild wild west for data. We’re just a platform, we don’t have control over the schemas our developers create, or the object sizes or query patterns. If you can impose more order on your incoming requests, RocksDB will look even more impressive than this.
  • #23 So our key findings are: basically we saved 90% on storage and got 50-200x faster writes. We were able to shift work from the mongo internal locks to actually exercise the capacity of our CPU and disk I/O, which means we can more effectively throw hardware at this problem when necessary. These are crazy numbers. You don’t get to see numbers like these very often when you’re working with data! Read queries are a little bit slower on average. Mostly when we are scanning a lot of documents, or dealing with large documents. Though interestingly, the 99th percentile of reads is actually faster with rocks than mmap. But this is a tradeoff we’re happy to make in exchange for document-level locking and massively faster writes. So now that we’ve talked about benchmarking, let’s talk about some operational aspects for a bit.
  • #24 Like I said, we are the first shop in the world running mongo + rocks and it is a little terrifying to be the only one running *any* database in production. So let’s talk a little bit about how we gained confidence in the rocks engine and rolled it out into production, as well as how we learned to monitor it and detect any anomalies.
  • #25 I’m gonna skim through the benchmarking section pretty quickly because our coworker Michael Kania did a whole session on this earlier today — how to benchmark and replay production workloads on MongoDB. If you didn’t manage to catch that session, it will be on the internet, and you can watch it later. Basically: we first captured production workloads and replayed them against production data. Once we had a few days’ worth of traffic running in our testbed with acceptable performance and no weird crashes, we added hidden secondaries to real production replica sets. We ran with those for a while. Discovered some issues, fixed those, re-rolled the binaries. Then we ran with secondaries serving reads for a while. Found some issues, fixed those, re-rolled. Hilariously, we actually ended up electing primaries under duress. We had some customers that were hitting hard limits with the mmap write lock. We crossed our fingers and went ALRIGHT LET’S DO THIS. ROCKS PRIMARY IN PRODUCTION. That’s a scary move. But you know why we felt ok doing it? Because we knew we could roll back at any time by electing the old mmap primary. We had confidence in replication and we knew it wasn’t a one-way migration. This is what’s brilliant about the storage engines sharing a single oplog format. It really lowers the barriers to testing new storage engines when you know you can elect the old one and have a simple rollback strategy. And frankly we still haven’t ditched our mmap secondaries, even on rocks primaries that have been running for months. The penalty for keeping a mixed replica set is only money. And money is much cheaper than customers’ data.
  • #26 We did run into a few really interesting bugs while benchmarking and testing mongo 3.0. It’s funny because this is the most aggressive we have ever been about getting on a new mongo version. Usually we wait for .4, .5, .6, .8 or whatever, because frankly a lot of early releases have been pretty buggy. But this time we’re so eager to get on a new storage engine that we’re way out there ahead of the pack, getting all those alpha bugs under control. You could say that we’re testing mongo 3.0 so you don’t have to. One issue we found was that $nearSphere queries degraded by like *10x*. This wasn’t a rocks thing, this was a mongo 3.0 thing. They've since fixed it to be more like a 3x perf regression, and they recommend working around it by using $near or a few other things. But this blocked us from upgrading to 3.0 for quite a while. Another thing, which was even more interesting, was that long-running reads on secondaries actually blocked replication. Very few of the 3.0 issues that we ran into were actually RocksDB or storage-engine related. One *super* interesting and important one that we found though is what we’ve been calling the Tombstone Trap.
  • #27 The biggest pain when dealing with LSM databases is the tombstones. If you’ve ever worked with Cassandra or LevelDB before, you know what I’m talking about. When you delete a record from an LSM database, you don’t actually delete it. You just record a tombstone which says “this record is dead”. When you pile up a bunch of tombstones in a range, you can fall into the so-called tombstone trap. Let’s illustrate what this means. Let’s say you’re scanning through this collection. You’re at the first record and you’re interested in the next record that is alive. To get that record you have to iterate through all those tombstones and potentially there might be *a lot* of them. Like millions. This will cause terrible stalls, possibly a lot of IO, and it’s not going to be a very fun experience. We actually hit this issue in production. We were piling up tombstones from capped collections that weren’t getting compacted for some reason. At some point, our machine just went crazy. All queries were stuck at iterating through all those tombstones and the world just stopped. We had to kill the node. There are three things that we’re doing to address this. First, we moved capped collections somewhere else. Recently we made capped collections a bit better, but we still don’t recommend running capped collections on MongoDB on RocksDB. Second, for each operation we do, we export RocksDB counters to our internal monitoring tools. One counter we export is actually “how many tombstones did this operation have to deal with”, and we closely monitor the graph for this counter. Third, as soon as an operation detects that it had to deal with 50K tombstones, we automagically initiate compaction on that key-range. <Igor>
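Roughly what that automagic compaction looks like against the stock RocksDB C++ API. The 50K threshold is the one from the note, but the counter plumbing and the key-range arguments are illustrative rather than the actual MongoRocks code, and the perf-context accessor shown here is the one in newer RocksDB releases (older releases expose a thread-local rocksdb::perf_context instead).

```cpp
// Sketch: count tombstones skipped by one operation via the perf context,
// and kick off a manual compaction of that key range once it crosses 50K.
#include <cstdint>
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/perf_context.h"
#include "rocksdb/perf_level.h"

void ScanWithTombstoneGuard(rocksdb::DB* db, const rocksdb::Slice& begin,
                            const rocksdb::Slice& end) {
  rocksdb::SetPerfLevel(rocksdb::PerfLevel::kEnableCount);
  rocksdb::get_perf_context()->Reset();

  std::unique_ptr<rocksdb::Iterator> it(
      db->NewIterator(rocksdb::ReadOptions()));
  for (it->Seek(begin); it->Valid() && it->key().compare(end) < 0;
       it->Next()) {
    // ... hand the document back to the query layer ...
  }

  // Tombstones this operation had to step over.
  const uint64_t tombstones =
      rocksdb::get_perf_context()->internal_delete_skipped_count;
  if (tombstones > 50000) {
    // "Automagically" compact just this key range to drop the tombstones.
    db->CompactRange(rocksdb::CompactRangeOptions(), &begin, &end);
  }
}
```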
  • #28 So let’s talk about running in production for a few minutes. What’s different about running rocks in production vs running mmap? Well, there are a couple things. Any time you run a new storage engine, you’re kind of running half a new database. There’s a lot of material to cover here, and more will be posted on our blog, but let’s briefly touch on backups and monitoring.
  • #29 Our old strategy for backing up mongodb was to do an fsync lock, then an EBS snapshot of all the raided volumes. EBS is convenient because it automatically does incremental snapshots to S3 and only charges you for the differential blocks. This is basically the main reason we kept running on raided PIOPS volumes for years, instead of using ephemeral SSD storage which is actually much faster. With RocksDB though, the table files are immutable. This is one of the killer features of RocksDB. Doing a mongodump from traditional mongo can take days or weeks, which makes it basically impossible to do a consistent snapshot unless you have a filesystem level snapshot. But with rocks all you really need to do is hardlink the files. Then you can take an LVM snapshot or whatever, and upload the incremental changes to S3. This will be faster, take up less space, and eliminate the significant performance hit you take using PIOPS and EBS snapshots.
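The incremental S3 tool is Parse's own, but the "just hardlink" trick is what RocksDB's checkpoint utility gives you out of the box; a minimal sketch (the backup directory is made up).

```cpp
// Sketch: a RocksDB checkpoint hardlinks the immutable table files into a new
// directory, giving a consistent snapshot almost for free; that directory can
// then be uploaded incrementally (e.g. to S3).
#include <cassert>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/utilities/checkpoint.h"

void BackupToDir(rocksdb::DB* db, const std::string& backup_dir) {
  rocksdb::Checkpoint* checkpoint = nullptr;
  rocksdb::Status s = rocksdb::Checkpoint::Create(db, &checkpoint);
  assert(s.ok());

  // Hardlinks the SST files; only the small mutable files are copied.
  s = checkpoint->CreateCheckpoint(backup_dir);
  assert(s.ok());

  delete checkpoint;
}
```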
  • #30 In traditional mongo, all metrics are basically derived from running db.serverStatus() a lot, and graphing the values it displays. In mongo + RocksDB, you can get special storage engine specific stats by running db.serverStatus()[‘rocksdb’]. Which shows you something like this:
  • #31 There’s a lot of stuff here! Leveled compaction stats, pending compaction threads, running snapshots, etc.
  • #32 You’re probably going to want to graph or consume the output of all these metrics. At this point, we think the most important metrics to pay attention to are the tombstone count and base-level system metrics like CPU and disk I/O saturation, as well as latency numbers across the board. In mmap versions of mongo, the most important things you wanted to monitor and alert on were lock percentages. Global read lock, write lock, per-db read lock and write lock — these are the metrics that corresponded to suffering for the people using your data. Using Rocks, you have document-level locking. So you literally don’t have those global locks to monitor any more. You’re also able to do a much better job of stressing your hardware — particularly your CPUs and your I/O percentage used. So it becomes much more important to monitor your system level metrics than your db lock metrics.
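If you're curious where numbers like these come from underneath the serverStatus output, RocksDB exposes them as engine-level string properties; a quick sketch against the C++ API (the property names are standard RocksDB ones, the surrounding function is illustrative).

```cpp
// Sketch: pulling RocksDB-level stats directly from the engine. MongoRocks
// surfaces similar information through db.serverStatus()['rocksdb'].
#include <cstdio>
#include <string>
#include "rocksdb/db.h"

void DumpEngineStats(rocksdb::DB* db) {
  std::string value;

  // Per-level compaction stats (files, sizes, read/write rates per level).
  if (db->GetProperty("rocksdb.stats", &value)) {
    std::printf("%s\n", value.c_str());
  }

  // A couple of scalar properties worth graphing over time.
  if (db->GetProperty("rocksdb.estimate-num-keys", &value)) {
    std::printf("estimated keys: %s\n", value.c_str());
  }
  if (db->GetProperty("rocksdb.num-snapshots", &value)) {
    std::printf("live snapshots: %s\n", value.c_str());
  }
}
```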
  • #33 So basically at this point, we’ve rolled RocksDB primaries out to about a quarter of our Mongo fleet. We have shadow secondaries and live secondaries in about half of our replica sets. We’re proceeding with caution even though we’ve had amazing success so far, just because the nature of our workloads varies so widely. It’s totally possible that we could roll it out to 90% of our primaries with zero problems, and then hit some really critical bugs on the last 10%. So we’re being careful. We don’t have a consistent, predictable workload like shops like Facebook do. So even though we’re having really good experiences and highly optimistic about the rest, we’re proceeding cautiously because we’re very aware that we’re the first alpha customer of MongoDB + RocksDB, and we don’t want to take anything for granted. It also takes a while to build out the tooling around our databases so it’s storage engine agnostic, as well as engine-aware so we get correct monitoring for each individual node.
  • #34 We’re going to keep iterating on Mongo + RocksDB, both in staging and production. It’s important to us to keep benchmarking other storage engines in the ecosystem, so we know where we excel and where we can stand to do more. We’re excited to see the WiredTiger LSM tree implementation in the 3.2 release, it will be really interesting to see how two write-optimized storage engines match up against each other. But our number one priority at this point in time is to increase community adoption of mongo + rocks. Facebook is very dedicated to making RocksDB the best write-optimized storage engine in the world. For a long time this meant mysql-only, but Mongo is now a critical part of the database community that we want to support!
  • #35 We’re hugely excited about the future of RocksDB. It’s already the basis of most services at Facebook that need local storage. Even better, at Facebook we’re experimenting with moving our entire MySQL deployment to RocksDB. As Igor already mentioned, the initial results show that we can save a lot of money on storage by using MySQL on RocksDB. The project is called MyRocks and we have a team working really hard on making this a reality.
  • #36 So please — give it a try! RocksDB is currently the only write-optimized LSM storage option available for MongoDB, so if you have a write-heavy workload, we think you’re going to *love* it. << update with instructions for downloading and installing prebuilt packages, debs and rpms >> Let us know what you think. Please mail the google group with any issues or problems you have to report, and let us know if you have great experiences too. We’re super excited to work with you guys on rolling out Rocks.
  • #37 Here we have some links to resources for RocksDB, as well our benchmarking blog posts, and a link to Tim Callaghan’s benchmarking blog (from whom this talk’s title was shamelessly stolen).
  • #38 You can find us at the RocksDB booth in the vendor area, where we have awesome t-shirts for you guys and would love to talk more about our experiences. :) Any questions? (if time)