GE Centricity EMR / CPS Platform Architecture Performance
Chris Hays & Steven Oubre, GEHC
Centricity EMR/CPS – Platform Architecture Performance
• Overview of the platforms and architectures on which performance has been characterized
• Wintel, HP, IBM
• SAN: NetApp, EMC
• Virtualization: VMware, Hyper-V, IBM LPAR, HP
• Latest developments show promise for improving system performance
• Latest Intel processor architecture
• SSD
• Virtualization
• Devaluing trend (platform simplification is inevitable)
• HP-UX/AIX
Platform Architectures
OS Platforms
• Wintel – is it ready for the “Big Time”? HP & IBM are big players. $$
• HP-UX – Itanium to *finally* get Nehalem-class memory improvements in 2010, $$$$$
• IBM AIX – Power6: very fast, very scalable, $$$$$
• Virtualization – Wintel (VMware, Hyper-V)
Storage options
• SAN (traditional, LeftHand, options, options, and MORE options) $...$$$$$
• Vendors have solutions at a variety of price points
Interesting Technology Developments that may improve Performance and Scalability
• Intel architecture
– Nehalem, Nehalem-EX (>=2 sockets), Westmere
• Solid State Drives (SSD – not your camera
flashcard or USB jump drives)
– Intel, HP, EMC, IBM, Dell…
– You get what you pay for (more IOPS = more $)
• Virtualization
– Platform/Server consolidation opportunities
– Customers have already implemented this
“King of the Mountain”
• 1998 – HP-UX PA-RISC (only EMR-supported 64-bit platform)
• 2000 – IBM S80 (EMR scaled to 4400 concurrent users during load simulation testing – pre-CCC)
• 2005 – 4x dual-core Opteron / Intel (CPS only), ~1K users
• 2008 – HP-UX Itanium2 / HP DL785 (Opteron): ~5000 concurrent CCC/eRx users on 32 cores (full-rack SAN storage)
• 2009 – Intel Nehalem (55xx Xeon series) ~5000
concurrent users on 8 processor cores (local SSD
storage)
Platforms supporting 2000 EMR / 1000 CPS users
• Wintel
– 2x Intel Xeon X5550 quad-core or better, 5+ X25-E SSDs (sized according to DB growth), 2+ large SAS drives for archive / backup
• VMware
– 2x Intel Xeon X5550 quad-core or better, 5+ X25-E SSDs (sized according to DB growth), 2+ large SAS drives for archive / backup; the additional Nehalem performance compensates for virtualization overhead
• HP (EMR only)
– 14 x 1.66 GHz Itanium2 cores, 24 MB L2 cache, 5+ HP SSDs (sized according to DB growth), 2+ large SAS drives for archive / backup
• IBM (EMR only)
– 8 x 5 GHz Power6 cores, 4 MB L2 / 32 MB L3 cache, 5+ SSDs (sized according to DB growth), 2+ large SAS drives for archive / backup
Storage IOPS for various SSD and SAS Drives
• Traditional 15K rpm spindle – 180 IOPS
• EMC SSD – 2,500 IOPS (1 million+ hours in production)
• HP SSD – 5K write / 20K read IOPS
• Intel SSD – 3.5K write / 35K read IOPS
• Lower-end SSDs are available, but their IOPS, performance, and reliability are below those of enterprise-rated SSDs
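Taken together, these per-device ratings make drive-count sizing a one-liner. A minimal sketch in Python (the figures are the ratings quoted above; the ceiling-division helper is illustrative, not a GE sizing tool):

```python
# Per-device IOPS ratings quoted above (write/read where both were given).
DEVICE_IOPS = {
    "15K rpm SAS spindle": {"write": 180, "read": 180},
    "EMC SSD": {"write": 2500, "read": 2500},
    "HP SSD": {"write": 5000, "read": 20000},
    "Intel SSD (X25-E)": {"write": 3500, "read": 35000},
}

def drives_needed(target_iops: int, device: str, kind: str = "write") -> int:
    """Devices required to reach target_iops, using the conservative
    write rating by default (ceiling division)."""
    return -(-target_iops // DEVICE_IOPS[device][kind])

# Example: 2K log-write IOPS needs a dozen spindles but a single SSD.
for dev in DEVICE_IOPS:
    print(f"{dev}: {drives_needed(2000, dev)} device(s) for 2K write IOPS")
```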
Workload IOPS per 1000 Concurrent Users
Steady-state measurements:
• 2K transaction/archive log IOPS
• 1K random read IOPS
• 200-500 random write IOPS
eRx (ePrescribing):
• 90K IOPS for a large eRx formulary set load/reload
– Depending on patient/insurance demographics, there may be many of these
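Scaled up, those steady-state figures give a quick capacity estimate for any user population. A rough sketch, assuming linear scaling (which real workloads only approximate):

```python
# Steady-state IOPS per 1,000 concurrent users, from the figures above.
# Random writes are quoted as 200-500/sec; the upper bound is used here.
PER_1000_USERS = {"log_write": 2000, "random_read": 1000, "random_write": 500}

def steady_state_iops(concurrent_users: int) -> dict:
    """Linearly scale the per-1,000-user figures (an assumption; the eRx
    formulary reload spike of ~90K IOPS is budgeted separately)."""
    scale = concurrent_users / 1000
    return {kind: round(rate * scale) for kind, rate in PER_1000_USERS.items()}

# Example: the 2,000-user EMR reference configuration from the prior slide.
print(steady_state_iops(2000))
# {'log_write': 4000, 'random_read': 2000, 'random_write': 1000}
```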
What if we fill a Rack with SSDs?
120 x 143 GB 15K rpm HDDs => Intel X25-E SSDs
From Intel’s IDF 2009:
• IOPS: 36,000 => 4,200,000
– Per device: ~200 (HDD) => 2,500 (EMC, HP, and Intel SSDs are all 2,500 or more)
• Sustained bandwidth: 12 GB/s => 36 GB/s (3x increase)
– HDDs are good at sequential writes
• Power: 1,452 W => 288 W (5x reduction)
• Acoustics
– SSDs are significantly quieter: 0 dB (SSD) versus 3.8 bels (is it on?...)
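The rack totals are easy to sanity-check against per-device ratings. A back-of-the-envelope pass over Intel's IDF figures (note the HDD total implies roughly 300 IOPS per spindle, a bit above the 180 quoted earlier in this deck):

```python
DRIVES_PER_RACK = 120

# 120 x 35,000 read IOPS reproduces Intel's 4.2M SSD rack figure exactly.
assert DRIVES_PER_RACK * 35000 == 4_200_000

# Working backwards, 36,000 rack IOPS over 120 spindles is 300 IOPS each,
# somewhat above the 180 quoted for a 15K rpm drive on the earlier slide.
print(36000 / DRIVES_PER_RACK)   # 300.0

# Power: 1,452 W down to 288 W is the quoted ~5x reduction.
print(1452 / 288)                # ~5.04
```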
Data Set Sizes
Database sizes:
• 5 GB to 620 GB
eRx dataset (ESM 3.1.1 allows a scheduling window for updates):
• September 2009: 6,000+ formularies
– 30 GB data set, 120 GB of transaction log/archive
– Typical customer load is 10-20% of this
• December 2009: 24,000+ formularies
– 60 GB data set, 180 GB of transaction log/archive
• Changes monthly, quarterly, annually (constantly in
flux)
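One planning implication of these numbers: the transaction log/archive written during a formulary load runs several times the dataset size. A small sketch using just the two observations above (point-in-time ratios, not guarantees):

```python
# (formularies, dataset GB, transaction log/archive GB) from the slide.
OBSERVED = [(6000, 30, 120), (24000, 60, 180)]  # Sep 2009, Dec 2009

for formularies, data_gb, log_gb in OBSERVED:
    ratio = log_gb / data_gb
    print(f"{formularies:>6} formularies: {ratio:.0f}x log/archive vs. "
          f"dataset, {data_gb + log_gb} GB touched during a full load")
# 4x and 3x respectively: budget several times the dataset size in
# log/archive space for a full formulary load/reload.
```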
Large System Testing Environments
In-house:
• 8-core Power5+
• 8-core Itanium2
• 8-core 55xx-series Xeon, 24-core Xeon (74xx series)
• New in 2010: 32-core Nehalem-EX, 12-core Westmere
Off-site:
• IBM Virtual Performance Center: Power6 systems
• HP Labs: Itanium2 / x64 (Intel and AMD)
Simulated workflows for Performance and Scalability Testing
• EMR (with / without CCC and eRx):
– Front desk/Registration
– Nurse
– Physician
– DTS
• CPS (with / without CCC and eRx):
– Front office / Scheduling / Registration
– Back office / Billing / Visit Mgmt
– Chart (Nurse, Physician)
– DTS
Interesting Data Points
Xeon EMR/CPS Workloads
• 1x 5520 runs at similar utilization to 2x 5420 (Oracle, SQL Server, LoadRunner workloads)
• Xeon 5420 shows similar per-core utilization to Itanium2 (Oracle workload)
• The in-house Power5+ IBM 570 is the poorest-scaling 8-core platform – it loses to the rx6600, Xeon 5420, and Xeon 5520, and costs more than all the others combined
CCC adds 4x to 10x DB SQL traffic for measured workflows.
Questions?