Atlantis HyperScale Supports 60,000 Mailboxes
Contents
About DeepStorage
The Bottom Line
About Jetstress
Jetstress and Deduplicating Storage
Atlantis HyperScale
HyperScale Configuration
Running Jetstress
Test Configuration
Test Results
Individual Server Data
Appendix 1—Jetstress Performance Report
The Bottom Line
We tested an Atlantis Computing HyperScale CX-12, an all-flash, hyperconverged appliance with four server nodes, using Microsoft’s Jetstress to simulate a significant Exchange mailbox configuration.
Key observations include:
• Data deduplication and compression for efficiency
• Simple management through vCenter and the Atlantis USX console
• Support for 60,000 mailboxes at 200 messages/day/mailbox
-- 2.5 times the mailboxes of a leading HCI provider’s ESRP report
-- Five times the IOPS of that HCI provider’s system
• High availability protects against SSD or node failures
• Space-efficient, metadata-based snapshots
Hyperconverged infrastructures that combine the compute power for running virtual machines with the storage for those VMs are rapidly becoming important tools in the IT architect’s kit. While there are many advantages to the hyperconverged approach—including the elimination of expensive, high-maintenance, dedicated storage arrays—some IT architects have limited HCI to end-user computing and other less-than-mission-critical applications. Our position is that HCI systems have evolved significantly, to the point where they are applicable for all but the most demanding applications.
While it may not have the immediate revenue impact of ERP or other line-of-business applications, electronic mail is clearly a mission-critical application. Microsoft’s Exchange, which is pretty much ubiquitous in enterprise environments, has traditionally presented a challenging, high-IOPS workload to storage systems.
To determine whether a hyperconverged appliance could provide the storage performance required by Exchange, we put Atlantis Computing’s HyperScale CX-12 appliance to the test with Microsoft’s Jetstress. This Technology Validation Report presents the results of our testing.
About Jetstress
Microsoft Exchange has historically been quite demanding of storage systems, and as
a result, storage performance has been problematic for many Exchange installations.
Microsoft’s Jetstress tool was designed to allow organizations to test a proposed storage
system’s ability to support Exchange workloads.
As its name suggests, Jetstress uses the Jet database engine that is at the heart of the Exchange server. Jetstress performs read, insert, delete, and other operations against the Jet database to simulate Exchange users. By accessing the core database directly, Jetstress can create much higher levels of disk I/O than full Exchange servers, which have the overhead of processing email before writing to the database.
While current versions of Exchange have reduced their demand for storage IOPS—in part, by eliminating single-instance storage and, thereby, sacrificing space for IOPS—Jetstress remains one of the more realistic benchmarks in our toolbox. Jetstress is such a good model of Exchange server disk I/O that storage analyst Ray Lucchesi publishes comparative analyses of Jetstress performance reports through Microsoft’s ESRP (Exchange Solution Reviewed Program) at http://silvertonconsulting.com/cms1/dispatches.
In addition to providing a workload that closely models Exchange’s I/O patterns, Jetstress provides a simple pass/fail indication at the end of each test. Should storage latency exceed the specified thresholds on any Exchange database, or if there are any errors during the test run, Jetstress will declare the test a failure.
Our testing used the strict mode latency limits of:
• Average database read latency: 20ms
• Average log file write latency: 10ms
• Maximum database read latency: 100ms
• Maximum log file write latency: 100ms
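The strict-mode check is easy to express as a predicate. The sketch below is our own illustration, not Jetstress internals; only the four threshold values come from our test configuration.

```python
# Strict-mode latency thresholds used in our testing (ms).
# The checking logic is our own sketch, not Jetstress code.
STRICT_LIMITS = {
    "avg_db_read_ms": 20.0,
    "avg_log_write_ms": 10.0,
    "max_db_read_ms": 100.0,
    "max_log_write_ms": 100.0,
}

def passes_strict_mode(measured, errors=0):
    """Return True only if every measured latency is within its limit
    and the run logged no errors."""
    if errors > 0:
        return False
    return all(measured[k] <= limit for k, limit in STRICT_LIMITS.items())

# A run resembling our aggregate results passes easily:
result = passes_strict_mode({
    "avg_db_read_ms": 7.2,
    "avg_log_write_ms": 3.2,
    "max_db_read_ms": 16.1,
    "max_log_write_ms": 5.3,
})
```

Note that a single database exceeding a limit, or a single error, fails the entire run; Jetstress gives no partial credit.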
Jetstress also performs a full database integrity check using checksums at the end of
each test to ensure that the system under test actually stores the data Jetstress sends
it.
Most Jetstress performance testing seeks to determine how many Exchange mailboxes
a given storage system can support. While all Exchange mailboxes—like the animals in
George Orwell’s Animal Farm—are created equal, some mailboxes are more equal than
others.
To allow organizations to model environments from those with a large number of very
light users, like a university that provides free email accounts for alumni, to the most
juiced up of power users, Jetstress takes not just the number of mailboxes but also the
number of IOPS per mailbox as input parameters.
Microsoft has provided, as guidance for Exchange server sizing, estimates¹ of how many IOPS Exchange will perform to support users sending and/or receiving 50 to 500 messages a day. These estimates are shown in the table below.
Messages per mailbox per day    Estimated IOPS per mailbox
50                              0.034
100                             0.067
150                             0.101
200                             0.134
250                             0.168
300                             0.201
350                             0.235
400                             0.268
450                             0.302
500                             0.335
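Microsoft’s table is nearly linear, at roughly 0.00067 IOPS per message per day, so intermediate message volumes can be interpolated. The helper below is our own convenience for estimation, not Microsoft guidance.

```python
# Microsoft's published (messages/day, IOPS/mailbox) estimates.
MS_ESTIMATES = [
    (50, 0.034), (100, 0.067), (150, 0.101), (200, 0.134),
    (250, 0.168), (300, 0.201), (350, 0.235), (400, 0.268),
    (450, 0.302), (500, 0.335),
]

def iops_per_mailbox(messages_per_day):
    """Linearly interpolate IOPS/mailbox from Microsoft's table.
    (Our own helper; values outside the table clamp to its ends.)"""
    pts = MS_ESTIMATES
    if messages_per_day <= pts[0][0]:
        return pts[0][1]
    if messages_per_day >= pts[-1][0]:
        return pts[-1][1]
    for (m0, i0), (m1, i1) in zip(pts, pts[1:]):
        if m0 <= messages_per_day <= m1:
            return i0 + (i1 - i0) * (messages_per_day - m0) / (m1 - m0)
```

For example, a 125-message-a-day user interpolates to about 0.084 IOPS per mailbox.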
Unfortunately, Microsoft provides no guidance as to how many email messages an average Exchange user sends and receives, leaving Jetstress users to make their own estimates. Some organizations, seeking to publish that the system they are testing can support an impressive number of mailboxes, choose low-IOPS-per-mailbox workloads. One leading vendor of hyperconverged infrastructure appliances used 0.05 IOPS per mailbox for its ESRP report, which would represent users averaging roughly 70 messages a day. Users comparing published Jetstress results should therefore compare the total IOPS supported (the number of mailboxes multiplied by the IOPS per mailbox) rather than the mailbox count alone.
For our testing, we assumed the average user in an organization today processes around 150 messages a day. Of course, many of those messages will go directly into the user’s SPAM folder, but they still present a load to the hypothetical Exchange server we’re using Jetstress to emulate. We told Jetstress to test with 0.121 IOPS per mailbox: the 0.101 IOPS Microsoft estimates for users processing 150 messages a day, plus the 20% headroom that Microsoft recommends as a best practice (0.121 ≈ 0.101 × 1.2).

¹ From the Technet blog: https://technet.microsoft.com/en-us/library/ee832791(v=exchg.141).aspx.
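The arithmetic behind our test target is short enough to restate directly (the variable names are ours; the figures are from our configuration):

```python
# Microsoft's estimate for 150 messages/day, plus 20% headroom.
base_iops_per_mailbox = 0.101
headroom = 1.20
test_iops_per_mailbox = round(base_iops_per_mailbox * headroom, 3)  # 0.121

# Total IOPS the storage must sustain for 60,000 such mailboxes.
mailboxes = 60_000
target_iops = round(mailboxes * test_iops_per_mailbox)  # 7260
```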
Jetstress and Deduplicating Storage
As storage benchmarks go, Jetstress is pretty good. It performs the same sorts of database operations, using the same database engine, as an operating Exchange mailbox server and, therefore, creates I/Os in the same sizes and patterns, including hotspots, as a real Exchange server.
Jetstress was, however, developed before data deduplication became a common storage system feature, and the data it creates is particularly rich ground for deduplication. When you tell Jetstress to create multiple databases, which is basically required to simulate large Exchange environments, it generates one database and then copies that database to create the others.
In our testing, we spread the 20,000 mailboxes simulated by each of our three Jetstress
VMs across eight databases. By creating eight copies of the same database, we gave
Atlantis a dataset that was duplicate-rich. This made the 92% data reduction Atlantis’
USX reported unsurprising.
If we were trying to generate impressive data-reduction numbers, we would have created 16 or 32 databases and boosted data reduction to 95% or higher. Since we weren’t looking to create impressive but ultimately meaningless numbers, and since our testing with larger numbers of databases didn’t show higher performance than the eight-database configuration, we settled on eight databases for further testing.
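The effect of database copying on data reduction is easy to estimate: N identical copies deduplicate down to roughly one copy’s worth of unique data, and compression then removes a further share. The back-of-the-envelope model below is our own, not Atlantis’ actual engine, and the 40% compression ratio is an assumption chosen for illustration.

```python
def dedup_reduction(copies, compression=0.4):
    """Estimated data reduction for `copies` identical databases:
    deduplication keeps ~1/copies of the data, then compression
    removes a further fraction of what remains.
    (Illustrative model only; compression=0.4 is an assumed ratio.)"""
    remaining = (1.0 / copies) * (1.0 - compression)
    return 1.0 - remaining

# Eight identical copies deduplicate away 87.5% of the data on their own;
# modest compression on top is consistent with the ~92% USX reported.
eight_copies_dedup_only = dedup_reduction(8, compression=0.0)  # 0.875
eight_with_compression = dedup_reduction(8)                    # 0.925
```

The same model explains why 16 or 32 databases would push reduction past 95%: deduplication alone reaches 93.75% and 96.9%, respectively.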
Performance testing with unrealistically deduplicatable data can give artificially high results on hybrid storage systems that deduplicate data in their performance tier. If a multi-database Jetstress run is performed on such a system, it will be able to satisfy several times as many I/O requests from its flash, 3D XPoint, or other solid-state performance tier as it could when serving Exchange users in the real world. For that reason, we do not believe that Jetstress is a valid tool for benchmarking hybrid storage systems that perform data reduction.

The Atlantis HyperScale appliance we tested for this Technology Validation Report is an all-flash system. Since it stores all its data in what would, on a hybrid system, be the performance tier, any performance boost from highly duplicated data should be minimal.
Atlantis HyperScale
Atlantis Computing has partnered with several leading server vendors to create its line of all-flash HyperScale appliances. Atlantis preinstalls and preconfigures its USX storage software, along with an appropriate set of SSDs, on servers from Supermicro, Lenovo, HP, Cisco, and Dell. Form factors vary a bit between server vendors, with some using standard 1U servers and others using higher-density chassis that squeeze four server nodes into 2U.
Regardless of which server vendor the customer chooses, the basic configuration of the
HyperScale appliances remains the same. Atlantis has designed three models, with each
model’s number representing the usable storage capacity provided by that appliance:
• HyperScale CX-4—presents 4TB from two nodes for remote and branch offices
• HyperScale CX-12—presents 12TB from four nodes
• HyperScale CX-24—presents 24TB from four nodes
As a hyperconverged appliance, each node in the HyperScale both acts as a hypervisor host for virtual machines and provides storage from its SSDs through virtual machines running Atlantis’ USX. HyperScale appliances support Citrix XenServer—including the free XenServer edition as a low-cost option—or VMware’s vSphere for customers who prefer to work with the corporate-standard hypervisor and management software.
The nodes are pretty beefy servers in their own right. Each is equipped with:
• Dual Xeon E5-2680 v3 processors (twelve 2.5GHz cores each)
• 256 to 768GB DRAM
• Two 10Gbps plus two 1Gbps Ethernet ports
• Three 800GB SSDs (CX-4 and CX-12) or 1600GB SSDs (CX-24)
The 10Gbps interfaces are used for storage traffic between the hypervisors running
on each node, for VM storage, and between the various virtual machines that provide
storage services. The 1Gbps interfaces are for management. Each server node is also
equipped with a small SATADOM as a boot device.
Storage services are provided by Atlantis’ own USX, a very flexible software-defined storage platform that we explored in some detail in an earlier report, Atlantis USX Covers All the Storage Bases. The HyperScale system uses USX to manage its SSDs, creating all-flash volumes for virtual machines. While HyperScale users might not have all the flexibility USX offers, they still get a rich set of storage features on a simplified platform. Those storage features include:
• Data reduction via compression and deduplication
• Data protection via two-way replication
• High availability to protect against SSD or node failures
• Space-efficient, metadata-based snapshots†
• Support for VMware’s VVOLs, VAAI and VASA 2.0†
• Synchronous replication for stretched clusters†
Features marked with a dagger (†) have been added to USX since we produced our last
report.
With three 800GB SSDs per node, a CX-12 configuration has a total of 9.6TB of raw flash. By our calculations, after overhead for data protection, the CX-12 has 7.2TB of capacity available for provisioning. That means Atlantis is assuming a data reduction factor of just 1.67:1 when it pitches the CX-12 as having 12TB of effective capacity. We see that as a conservative estimate for virtual machine data stores, which commonly reduce by as much as 5:1.
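The capacity math above can be restated in a few lines (the variable names are ours; the 7.2TB provisionable figure is our own calculation from the paragraph above):

```python
# CX-12 capacity arithmetic, restated from our calculations above.
raw_tb = 4 * 3 * 0.8          # four nodes x three 800GB SSDs = 9.6TB raw
provisionable_tb = 7.2        # after data-protection overhead, per our estimate
effective_tb = 12.0           # Atlantis' rated capacity for the CX-12

# The data reduction Atlantis implicitly assumes to reach 12TB effective.
reduction_factor = effective_tb / provisionable_tb  # ~1.67:1
```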
Even if the estimate isn’t conservative, Atlantis guarantees customers that they’ll actually fit the advertised 12TB or 24TB of data on the system, subject, of course, to some expected limitations on unreducible data types, such as compressed archives or video. Should a customer get less than the guaranteed amount of usable storage, Atlantis will provide additional licenses that let the appliance access the flash capacity normally locked up for spares.
HyperScale Configuration
We performed our testing remotely on a HyperScale CX-12 system, with 256GB of RAM per node, in Atlantis’ quality-control lab. Atlantis preconfigured the system, a task that would normally be performed by a partner. Choosing a HyperScale takes just two decisions: “Which server vendor do I like?” and “How much capacity do I need?”

The system was configured with vSphere 5.5 and a vCenter Server Virtual Appliance (VCSA). The storage was configured as three vSphere datastores, each just under a terabyte in size.
To keep our testing in line with what we’d recommend for use in the real world, we created three Jetstress servers and used three Atlantis volumes on the four-node CX-12 we were testing. This configuration can tolerate a node failure while still providing sufficient resources for the affected VMs to restart on the three remaining nodes. Data reduction skeptics should note that, even after the system rebuilds any affected volumes on three nodes, providing 12TB of effective capacity only requires that the data reduce 2.5:1.
While the good folks at Atlantis configured the CX-12 for us, we built the Jetstress servers and performed all the testing ourselves. Each Jetstress server was provisioned with eight logical drives (VMDKs) in order to store the roughly 16TB of databases we needed to perform our testing. We took advantage of the system’s data deduplication, creating thinly provisioned VMDKs of approximately 800GB each and then expanding those volumes to 1.4TB each.
Running Jetstress
The goal of Jetstress testing is to determine the maximum number of mailboxes the
system can support over the two hours it takes to perform a valid Jetstress run. This
requires multiple Jetstress runs with different parameters, as we iterate to the results
that we’ll report.
Since performance testing takes two hours, and the checksum verification at the end of
each test adds several additional hours, we would be lucky to get in even two runs a day
if we were running Jetstress manually. To accelerate this process, and to save our own
sanity, we developed a set of tools that automate the process.
The JetTest tools allow us to synchronize the start times for tests across multiple machines and, more importantly, to queue up multiple tests with different numbers of mailboxes, IOPS/mailbox, or execution threads, and to run them automatically. We placed JetTest and the other tools in the suite in the public domain. Readers can find an introduction to JetTest on the DeepStorage.net website.
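The queuing idea itself is simple. The loop below is our own sketch of the concept, not JetTest’s actual interface; `run_jetstress.cmd` is a placeholder for however an environment launches a configured Jetstress run, not a real Jetstress binary.

```python
import subprocess
from itertools import product

def queue_runs(mailbox_counts, iops_per_mailbox, threads, dry_run=True):
    """Sketch of queuing Jetstress runs over a parameter grid.
    `run_jetstress.cmd` is a hypothetical launcher, not a real tool."""
    queued = []
    for mboxes, iops, t in product(mailbox_counts, iops_per_mailbox, threads):
        cmd = ["run_jetstress.cmd", str(mboxes), str(iops), str(t)]
        queued.append(cmd)
        if not dry_run:
            # Each run (plus checksum verification) takes hours, so the
            # queue executes unattended, one configuration after another.
            subprocess.run(cmd, check=True)
    return queued

# Queue four overnight runs: two IOPS/mailbox settings x two thread counts.
runs = queue_runs([20000], [0.101, 0.121], [16, 24])
```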
Test Configuration
As we described above, Microsoft’s Jetstress uses the same Jet database engine as Exchange to simulate a set of user mailboxes. Our tests targeted a configuration of:
• Three virtual servers running Jetstress 2013
-- Windows Server 2012 R2
-- 8 vCPUs
-- 64GB vRAM
• 60,000 mailboxes—20,000 per Jetstress server
• Eight databases per server
• Mailbox size: 800MB
• 0.121 IOPS per mailbox (150 messages/day plus 20% headroom)
• Background database maintenance enabled
• Two database copies
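The configuration above determines each server’s transactional I/O target, which is how the 2420 target in the appendix’s Jetstress report arises. The restatement below is our own; the parameter names are ours, not Jetstress’.

```python
# Our Jetstress test configuration, expressed as data.
config = {
    "servers": 3,
    "mailboxes_per_server": 20_000,
    "databases_per_server": 8,
    "mailbox_size_mb": 800,
    "iops_per_mailbox": 0.121,
}

# Per-server transactional I/O target, as reported by Jetstress.
per_server_target = round(
    config["mailboxes_per_server"] * config["iops_per_mailbox"]
)  # 2420

# Three servers together make up the 7260 IOPS system-wide target.
total_target = per_server_target * config["servers"]  # 7260
```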
The three Jetstress servers were each run from a separate server node, and each stored its
data in an independent volume. The vCenter appliance and USX management VMs were
run on the fourth server node to limit their impact on the results. We did not attempt to
isolate or optimize the Atlantis VMs that provide the shared storage environment.
Test Results
Our configuration targeted a total of 7260 IOPS (60,000 mailboxes × 0.121 IOPS/mailbox) with latency within Jetstress’ strict limits—most significantly, database read latency below 20ms. The system achieved 8798 IOPS, just over 20% more than required, while averaging 7.2ms of database read latency.
Aggregating the performance of our three test servers, we can see in the table below that the HyperScale system significantly exceeded our targets. We could have repeated the tests with 72,000 mailboxes, but 60,000 is such a nice round number that we thought it would make a better headline.
                                  Targeted   Achieved   Difference
IOPS                                  7260       8798   21% above target
Average database read latency         20ms      7.2ms   64% below limit
Average log file write latency        10ms      3.2ms   68% below limit
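The percentages in the table follow directly from the raw numbers; the short restatement below uses only figures from our results.

```python
# Deriving the "Difference" column from the raw test figures.
target_iops, achieved_iops = 7260, 8798
iops_margin = round((achieved_iops - target_iops) / target_iops * 100)  # 21

def latency_margin(limit_ms, measured_ms):
    """Percent below the Jetstress strict-mode limit."""
    return round((limit_ms - measured_ms) / limit_ms * 100)

db_read_margin = latency_margin(20, 7.2)    # 64
log_write_margin = latency_margin(10, 3.2)  # 68
```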
It appears that the HyperScale system still had more performance to give, as most of the databases on each server had significantly lower latency than the 20ms limit. Jetstress on the HyperScale system did show higher latencies on the first few databases on each server than on the remaining databases. As readers can see in the full Jetstress performance report (Appendix 1), while the average database read latency for the server was 7.3ms, database 1 had 16.1ms of latency, over twice the average. We believe this reflects an interaction between Jetstress’ unnaturally deduplicatable data and the Atlantis deduplication engine.
Individual Server Data
                          Jetstress1   Jetstress2   Jetstress3   Aggregate (sum or average)
Database writes/sec          723.288      717.613      726.059     2162.133
Database reads/sec          1515.464     1512.482     1517.106     4532.601
Database read latency        7.304ms      6.872ms      7.429ms      7.201ms
Database write latency      46.177ms     44.946ms     47.825ms     46.316ms
Log writes/sec               344.219      378.572      371.598     1090.768
Log reads/sec                  8.505        8.566        8.440       25.454
Log write latency            3.218ms      3.097ms      3.372ms      3.229ms
Log read latency             1.720ms      1.508ms      1.686ms      1.638ms
As expected, there were only very minor performance differences between the three
Jetstress systems. Write latencies are generally higher than read latencies, as we would
expect from a system using NAND flash.
Appendix 1—Jetstress Performance Report
We have included below the Jetstress 2013 performance report from one of the three servers used to perform our testing. Since the three servers were essentially identical virtual machines, the results from the other two servers are quite similar, and reproducing them here would be a waste of paper (or screen real estate, for those in the 21st century using an electronic reader, computer, smartphone, or Star Fleet-issue PADD). The full Jetstress output for all three virtual machines can be found at https://www.dropbox.com/sh/x5wd26z4869sn41/AADPZurOTYWRIj-JvZ5tSu0ya?dl=0.
Microsoft Exchange Jetstress 2013
Performance Test Result Report

Test Summary
Overall Test Result: Pass
Machine Name: JETSTRESS1
Test Description: Atlantis first test 3x20000 users at .121 IOPS/MB 800MB
Test Start Time: 12/14/2015 8:52:23 AM
Test End Time: 12/14/2015 10:57:15 AM
Collection Start Time: 12/14/2015 8:53:18 AM
Collection End Time: 12/14/2015 10:53:04 AM
Jetstress Version: 15.00.0995.000
ESE Version: 15.00.0516.026
Operating System: Windows Server 2012 R2 Standard (6.2.9200.0)
Performance Log: C:JetstressResults20KPerformance_2015_12_14_8_52_39.blg
Database Sizing and Throughput
Achieved Transactional I/O per Second: 2937.191
Target Transactional I/O per Second: 2420
Initial Database Size (bytes): 13451217338368
Final Database Size (bytes): 13456896425984
Database Files (Count): 8
Jetstress System Parameters
Thread Count: 24
Minimum Database Cache: 256.0MB
Maximum Database Cache: 2048.0MB
Insert Operations: 40%
Delete Operations: 20%
Replace Operations: 5%
Read Operations: 35%
Lazy Commits: 70%
12/14/2015 8:52:40 AM -- Attaining prerequisites:
12/14/2015 8:53:18 AM -- MSExchange Database(JetstressWin) Database Cache Size, Last: 1966305000.0 (lower bound: 1932735000.0, upper bound: none)
12/14/2015 10:53:19 AM -- Performance logging has ended.
12/14/2015 10:57:07 AM -- JetInterop batch transaction stats: 50495, 50495, 50495, 50495, 50495, 50495, 50494 and 50494.
12/14/2015 10:57:07 AM -- Dispatching transactions ends.
12/14/2015 10:57:07 AM -- Shutting down databases...
12/14/2015 10:57:15 AM -- Instance3600.1 (complete), Instance3600.2 (complete), Instance3600.3 (complete), Instance3600.4 (complete), Instance3600.5 (complete), Instance3600.6 (complete), Instance3600.7 (complete) and Instance3600.8 (complete)
12/14/2015 10:57:15 AM -- C:JetstressResults20KPerformance_2015_12_14_8_52_39.blg has 481 samples.
12/14/2015 10:57:15 AM -- Creating test report...
12/14/2015 10:57:20 AM -- Instance3600.1 has 16.1 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.1 has 5.3 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.1 has 5.3 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.2 has 11.9 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.2 has 4.6 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.2 has 4.6 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.3 has 8.1 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.3 has 3.7 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.3 has 3.7 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.4 has 5.8 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.4 has 2.9 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.4 has 2.9 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.5 has 4.6 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.5 has 2.5 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.5 has 2.5 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.6 has 3.9 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.6 has 2.2 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.6 has 2.2 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.7 has 4.0 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.7 has 2.2 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.7 has 2.2 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.8 has 4.1 for I/O Database Reads Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.8 has 2.4 for I/O Log Writes Average Latency.
12/14/2015 10:57:20 AM -- Instance3600.8 has 2.4 for I/O Log Reads Average Latency.
12/14/2015 10:57:20 AM -- Test has 0 Maximum Database Page Fault Stalls/sec.
12/14/2015 10:57:20 AM -- The test has 0 Database Page Fault Stalls/sec samples higher than 0.
12/14/2015 10:57:20 AM -- C:JetstressResults20KPerformance_2015_12_14_8_52_39.xml has 478 samples queried.
All trademarks remain property of their respective holders, and are used only to directly describe the products being provided. Their use in no way indicates any relationship between DeepStorage, LLC and/or our clients and the holders of said trademarks.