IBM European WebSphere
Technical Conference
14 – 18 November 2008, Barcelona, Spain
© 2008 IBM Corporation
Conference materials may not be reproduced in whole or in part without the prior written permission of IBM.
WebSphere eXtreme Scale
Customer Scenarios and Use Cases
Session Number: WSI35
Hendrik van Run – hvanrun@uk.ibm.com
Agenda
What is WebSphere eXtreme Scale?
Basic Scenarios
Customer Use Cases
What is WebSphere eXtreme Scale?
It can be used as a very powerful cache that scales from simple in-process topologies to powerful distributed topologies.
It can be used as a platform for building powerful XTP/Data Grid applications.
It can be used as a form of in-memory database to manage application state (and it scales to 1000s of servers). This is sometimes referred to as Distributed Application State Management.
A flexible framework for realizing high-performance, scalable, data-intensive applications
[Diagram: grid nodes in New York, San Francisco, London and Shanghai]
WebSphere eXtreme Scale (WXS)
IBM's DataGrid/XTP platform
XD DataGrid/ObjectGrid has been renamed and relaunched as WebSphere eXtreme Scale
Data Grids are a new technology being adopted by customers
It virtualizes free memory on a grid of Java Virtual Machines (JVMs) into a single logical space and makes it accessible as a partitioned, key-addressable space for use by applications
It can make the stored data fault tolerant using memory replication policies
The space can be scaled out by adding more JVMs while it is running, without restarting
It offers predictable SCALING and scaling at predictable COST
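The partitioned, key-addressable space described above can be sketched in plain Java. This is a stand-alone illustration, not WXS code — the class and method names are hypothetical — showing the essential idea: every client deterministically computes which partition owns a key, so any entry is a single hop away.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a key-addressable partitioned space. Each inner map
// stands in for the shard hosted by one JVM of the grid.
public class PartitionRouter {
    private final int numPartitions;
    private final Map<Integer, Map<String, String>> partitions = new HashMap<>();

    public PartitionRouter(int numPartitions) {
        this.numPartitions = numPartitions;
        for (int p = 0; p < numPartitions; p++) {
            partitions.put(p, new HashMap<>());
        }
    }

    // Deterministic key -> partition mapping; every client computes the same owner.
    public int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public void put(String key, String value) {
        partitions.get(partitionFor(key)).put(key, value);
    }

    public String get(String key) {
        return partitions.get(partitionFor(key)).get(key);
    }
}
```

Because the mapping is a pure function of the key, adding a routing table for new JVMs (as WXS does when the space is scaled out) is the only shared state clients need.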
Traditional Cache Operation
Traditional in-JVM cache
Cache capacity determined by individual JVM size
Invalidation load per server increases as the cluster grows
Cold-start servers hit the EIS even when the data is cached elsewhere in the cluster
Lower performance as load increases, due to invalidation chatter
No redundancy of cached data
[Diagram: four application JVMs each holding a redundant copy of entry A at different versions, with invalidation chatter between them; a new server starts with a cold cache; invalidation load increases with cluster size; high load on the EIS]
WXS-based Cache Operation
Cluster-coherent cache
Cache capacity determined by cluster size, not individual JVM size
No invalidation chatter
Cache request handling is spread across the entire cluster and is linearly scalable
Load on the EIS is lower
No cold-start EIS spikes
Predictable performance as load increases
Cached data can be stored redundantly
[Diagram: partitions A–D with replicas A'–D' spread across four application JVMs in front of the EIS; the cache is 4x larger; the cache cluster can be co-located with the application or run in its own tier]
Data Access APIs
All data is manipulated and replicated using transactions, in an ACID manner
JCache-style Map API is supported as an 'assembler'-level API
Recommended in network-attached mode
JPA-style API using annotated POJOs provides an almost transparent method for 'persisting' POJO graphs to the DataGrid
Annotate your POJO
Call EntityManager.persist to store data
Call EntityManager.find/createQuery to retrieve data
Simply invoke setters/navigate Collections of POJOs with your business logic
Commit the transaction to automatically write changes back to the DataGrid
JPA style recommended for eXtreme Transaction Processing (XTP) applications
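The begin/persist/find/commit flow above can be mimicked in a few lines. This is a stand-alone sketch with a plain HashMap standing in for the grid; the method names mirror the JPA style but this is not the actual WXS EntityManager API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the transactional access pattern: writes are buffered per
// transaction and only become visible in the "grid" at commit.
public class TxMapSketch {
    private final Map<String, String> grid = new HashMap<>(); // committed state
    private Map<String, String> pending;                      // per-transaction writes

    public void begin() {
        pending = new HashMap<>();
    }

    public void persist(String key, String value) {
        pending.put(key, value); // buffered until commit
    }

    public String find(String key) {
        // A transaction sees its own uncommitted writes first.
        if (pending != null && pending.containsKey(key)) {
            return pending.get(key);
        }
        return grid.get(key);
    }

    public void commit() {
        grid.putAll(pending); // changes written back in one step
        pending = null;
    }

    public void rollback() {
        pending = null;       // uncommitted changes discarded
    }
}
```

The sketch shows only atomicity of commit versus rollback; the real product adds isolation, replication and the query API on top of this flow.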
Non-invasive Middleware
Single JAR, 13MB
It has no WebSphere ND dependency and works with:
Current and older versions of WebSphere ND and CE
Competitive application servers
Straight J2SE
Spring
Sun* and IBM JVMs
While ObjectGrid is self-contained, it requires an external framework for installing applications and starting/stopping the JVMs hosting those applications
WebSphere XD
WebSphere ND
WebLogic, JBoss
Third party grid management software
* Sun JVM is only supported by WXS when using the IBM ORB (Object Request Broker)
Main Competitive Benefits
Embeddable, and it doesn't require a new 'platform'
Also tightly integrated with all WebSphere ND versions from 6.0 upwards
State-of-the-art programming model using EntityManager for transparent POJO persistence
Working with object graphs is directly supported using the EntityManager APIs
State-of-the-art replication technology, including the industry's only single-grid multi-data-center capability
Uses just TCP for communication; no multicast or UDP
Agenda
What is WebSphere eXtreme Scale?
Basic Scenarios
Customer Use Cases
Basic Scenarios
Overview
Scenario 1 – Side Cache
Generic Case
WXS L2 Cache Provider support for Hibernate and OpenJPA
Scenario 2 – Side Cache with Synchronous Loader
Scenario 3 – Side Cache with Synchronous Loader and Write Behind
Scenario 4 – Collocated Application, XTP style
Scenario 1
Side cache
Here, the grid is used by the application as a coherent distributed cache
Every cache lookup is an RPC to the server that can hold that key
If the data isn't there, the application gets the data normally and then stores it in the cache for next time
Applications can use this directly, or use the L2 cache plugins for popular object-relational mappers like OpenJPA or Hibernate
L2 cache plugin new in WXS 6.1.0.3
Slower than a local HashMap BUT:
Faster than the backend
The cache can be huge
All cache clients are guaranteed to see the same data
Data is already in object form
Offloads the backend
No more cold caches on JVM start
[Diagram: the application's ObjectGrid client, or a WS-* mediation on the ESB, in front of a grid holding partitions A–D and replicas A'–D', with the EIS behind it]
First, what's a Loader?
A Loader can be provided by IBM or written by the customer
It provides a delegate to the backend for WXS
If data cannot be found in memory, WXS asks the Loader to fetch it if possible
All changes are handed to the Loader by WXS
Automatic Loaders are available for DB2 and other databases
Scenario 2
Side Cache + Synchronous Loader
This is the same as scenario 1, but the application associates a Loader with each Map in the cache.
The application looks up a key, and if the key isn't in the cache, the Loader is invoked to pull it from the backend. This is a more efficient mechanism than before: fewer RPCs (two versus three).
Changes are written to the cache, and the Loader is called synchronously to write the changes to the backend.
[Diagram: three application JVMs in front of ObjectGrid instances, each with a Loader delegating to the EIS]
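The Loader contract described above can be sketched as follows. The `Loader` interface here is illustrative, not the real WXS Loader SPI: the map consults it on a miss and pushes every change through it synchronously.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical loader contract: pull from the backend on a miss,
// push changes to the backend on a write.
interface Loader {
    String load(String key);
    void store(String key, String value);
}

// A map that delegates to its Loader, mimicking the scenario-2 flow.
public class LoaderBackedMap {
    private final Map<String, String> cache = new HashMap<>();
    private final Loader loader;

    public LoaderBackedMap(Loader loader) {
        this.loader = loader;
    }

    public String get(String key) {
        // One call from the client; the map itself consults the loader on a miss.
        return cache.computeIfAbsent(key, loader::load);
    }

    public void put(String key, String value) {
        cache.put(key, value);
        loader.store(key, value); // written through synchronously
    }
}
```

Compared with the plain side cache, the client makes one call instead of "check cache, call backend, populate cache" — the fewer-RPCs point the slide makes.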
Scenario 3
Side Cache + Synchronous Loader + Write Behind
Same as scenario 2, but:
Usually all the data is preloaded into the grid
The grid becomes the system of record
Changes are written to the grid and replicated synchronously
Periodically, the changes are flushed to the Loader, and so to the backend
• Loader write-behind capability is new in WXS 6.1.0.3
• If a record is updated multiple times during the period, only the latest version is written
• If the backend is down, the flush simply tries again later
Writes scale linearly with this approach, and the backend load is significantly reduced as there are fewer, larger transactions
Backend availability has no impact on application availability
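The write-behind behaviour in the bullets above can be captured in a stand-alone sketch (hypothetical names, not WXS code): updates coalesce per key, a periodic flush writes only the latest version, and a backend outage simply defers the flush.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of write-behind: the grid is updated immediately; changes are
// queued per key and pushed to the backend in periodic batches.
public class WriteBehindMap {
    private final Map<String, String> grid = new HashMap<>();  // system of record
    private final Map<String, String> dirty = new HashMap<>(); // coalesces updates per key
    private final Map<String, String> backend;                 // stands in for the database
    private boolean backendUp = true;

    public WriteBehindMap(Map<String, String> backend) {
        this.backend = backend;
    }

    public void put(String key, String value) {
        grid.put(key, value);
        dirty.put(key, value); // a repeated update overwrites the pending one
    }

    public void setBackendUp(boolean up) {
        backendUp = up;
    }

    // Called periodically: one larger batch instead of many small transactions.
    public void flush() {
        if (!backendUp) {
            return;            // backend down: keep the queue, try again later
        }
        backend.putAll(dirty);
        dirty.clear();
    }
}
```

Note how two updates to the same key reach the backend as a single write, and how application puts succeed even while the backend is down — the two availability/throughput claims made above.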
Scenario 4
Collocated application, XTP style
Normally, applications are stateless and leverage the grid for all their state
This is faster than using a database for the state, but still requires an RPC for each data access
If the application logic runs in the same JVMs as the data AND requests for data in a particular partition can be routed to the JVM holding that data, then this hop can be eliminated
This approach gives the best application performance, especially when combined with write-behind
Our HTTP session manager uses this approach
[Diagram: three JVMs, each collocating the application, an ObjectGrid instance and a Loader, in front of the EIS]
Agenda
What is WebSphere eXtreme Scale?
Basic Scenarios
Customer Use Cases
Customer Use Cases
Overview
Commerce Web site
Retail bank mainframe MIPS reduction
Scalable HTTP Session replication between datacenters
Scalable web profile service
Customer Use Cases
Commerce Web site
Disk-based caches for rendered HTML pages
Many retail sites use disk-based caches for caching rendered catalog content
These disk-based caches are typically not shareable, and large (~100 GB)
Typically there is one cache PER JVM, and they are very expensive to update
The performance of the system quickly becomes limited by disk performance
• The next step is an expensive NAS/SAN solution for each server
The clusters used for these sites are relatively large
• WebSphere eXtreme Scale can provide a cache that avoids the I/O bottleneck
[Diagram: four retail web site server JVMs, each with its own disk cache]
Customer Use Cases
Commerce Web site
The Commerce Web site in numbers
The site did 55 pages/s with 10 cacheable snippets (one 10 KB + nine 1 KB) per page
Each JVM needed ~100 GB of disk store for this cache
To sustain throughput levels they needed 550 random I/Os per second per JVM
• Most disks can manage about 150 I/Os per second per device
• The customer currently uses a set of NAS devices to provide this performance
WebSphere eXtreme Scale can service this load with just two Intel cores
WebSphere eXtreme Scale solution
Run a WXS JVM on each box
• Collectively this provides the same cache as before
• This cache is now a shared resource (instead of a cache per JVM)
Cost-effective way to implement this cache
• It scales with the application as the size of the cluster grows
• Potentially utilise unused CPU resources as the number of cores per CPU continues to increase
Customer Use Cases
Retail bank mainframe MIPS reduction
A retail bank has customer profiles stored on a 390 system
The customer profiles include:
Security information
Account summary information
Links to the spouse's profile
Products currently purchased by the customer
A customer using the portal or visiting a retail branch typically involves several applications
Each application is a separate (SILO) application
But all applications use the profile information on the 390
Profiles are accessed over an ESB using a common SOA profile service
[Diagram: four applications connect over MQ to an MQ cluster and service broker in front of CICS, IMS and TPF on the 390]
Customer Use Cases
Retail bank mainframe MIPS reduction
Individual applications used a cache to store the profile
This reduces the load on the mainframe between different parts of an application
These application-scoped caches did not help peer applications
This results in unnecessary profile fetches
The customer wanted to eliminate these redundant profile fetches
Better leverage the 390 investment
[Diagram: four applications connect over MQ to the MQ cluster and service broker in front of CICS, IMS and TPF]
Customer Use Cases
Retail bank mainframe MIPS reduction
WebSphere eXtreme Scale is used as a network-attached grid
It holds 8 GB of customer profile data (4 GB + 4 GB replicated data)
A mediation is inserted in the ESB to cache profile fetch service calls
The service name and parameters are used as the key
The profile itself is the value
If the profile isn't in the cache:
The mediation hits the 390
Stores the result in the grid
An evictor removes entries older than 30 minutes
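The mediation's caching scheme can be sketched as follows, with hypothetical names: the service name plus parameters form the key, the profile is the value, and a time-based check plays the role of the 30-minute evictor.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the mediation cache: keyed by service call, with TTL eviction.
public class ProfileCallCache {
    private static final class Entry {
        final String profile;
        final long storedAt;
        Entry(String profile, long storedAt) {
            this.profile = profile;
            this.storedAt = storedAt;
        }
    }

    private final Map<String, Entry> cache = new HashMap<>();
    private final long ttlMillis;

    public ProfileCallCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Key = service name + parameters, mirroring the slide.
    public static String key(String service, String... params) {
        return service + ":" + String.join(",", params);
    }

    public void put(String key, String profile, long now) {
        cache.put(key, new Entry(profile, now));
    }

    public String get(String key, long now) {
        Entry e = cache.get(key);
        if (e == null || now - e.storedAt > ttlMillis) {
            cache.remove(key); // expired entries are evicted on access
            return null;       // caller falls back to the 390
        }
        return e.profile;
    }
}
```

Time is passed in explicitly here to keep the sketch deterministic; a real evictor would use the clock and run in the background.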
[Diagram: four applications connect over MQ to MQI listeners fronting the partitioned grid; cache misses go through the MQ cluster and service broker to CICS, IMS and TPF]
Customer Use Cases
Scalable HTTP session replication between datacenters
A large company has a web portal that runs in two data centers
The customer is using a virtualized 64-way IBM Power6 server in each data center
The customer will deploy a cell per data center
The application is deployed to both cells
The WXS ObjectGrid session manager is used as the solution
WXS comes with an out-of-the-box Servlet filter to provide this
The filter can be "spliced" into existing web applications
The WXS catalog service is deployed across the data centers
All cells share a single catalog service, binding them together as a single grid
Each cell is marked as an individual zone
Replication rules are put in place to place primaries and asynchronous replicas in different zones/data centers
Customer Use Cases
Scalable HTTP session replication between datacenters
Multiple cells share a single catalog server
Deploy the application to each cell
Zone rules can be used to influence placement for the desired availability
The shared catalog can be:
Running in both DMgrs
Running in J2SE JVMs
[Diagram: an IP sprayer in front of Apache HTTPD servers in two data centers; Cell A holds primaries A–D with replicas of E–H, Cell B holds primaries E–H with replicas of A–D; both cells share the WXS catalog service]
Customer Use Cases
Scalable web profile service
A highly scalable customer profile service for a large sports media company
This customer profile service is used by a web application
Network-attached data grid using write-behind in front of an SQL database
Six dual quad-core Intel servers serving up user profiles and accepting changes
Asynchronous replication used for highest performance with good availability
Write-behind employed to buffer/aggregate profile changes to the database
Total throughput of 130,000 req/s with 6 ms response time
[Diagram: a web application in front of six dual-processor quad-core Intel servers, each running two WXS J2SE JVMs and handling 22,000 req/s (130,000 req/s total); asynchronous replication between JVMs; write-behind aggregates updates to the SQL database]
Customer Use Cases
Scalable web profile service
Each server handles about 250 MB/s of network traffic
Profile requests account for 220 MB/s (22,000 req/s with 10 KB record sizes)
Asynchronous replication adds another 30 MB/s
This is even without the network traffic associated with database updates
This requires several Gbps Ethernet cards per server
WXS performance can require very high bandwidth, even for small servers
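The figures above follow from simple arithmetic, reproduced here as a small sketch (decimal units, matching the slide's 22,000 req/s × 10 KB = 220 MB/s):

```java
// Back-of-the-envelope bandwidth check for the profile service figures.
public class BandwidthMath {
    // req/s * KB/req / 1000 = MB/s (decimal units, as on the slide)
    public static long requestMBps(long reqPerSec, long kbPerReq) {
        return reqPerSec * kbPerReq / 1000;
    }

    // Request traffic plus the asynchronous replication stream.
    public static long totalMBps(long reqPerSec, long kbPerReq, long replicationMBps) {
        return requestMBps(reqPerSec, kbPerReq) + replicationMBps;
    }
}
```

With 22,000 req/s, 10 KB records and ~30 MB/s of replication this reproduces the ~250 MB/s per-server figure, before any database-update traffic.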
[Diagram: two of the six servers, each handling 22,000 req/s of 10 KB profile requests (220 MB/s per server) plus around 30 MB/s of asynchronous replication]
Resources
Fully functional J2SE trial download
http://www.ibm.com/developerworks/downloads/ws/wsdg/learn.html
Wiki based documentation
http://www.ibm.com/developerworks/wikis/display/objectgrid/Getting+started
User's Guide to WebSphere eXtreme Scale (draft IBM Redbooks publication)
http://www.redbooks.ibm.com/abstracts/sg247683.html
Latest features available in WebSphere eXtreme Scale 6.1 Fix Pack 3 (6.1.0.3)
OpenJPA and Hibernate cache plug-in
JPA Loader
Write-behind caching
http://www.ibm.com/developerworks/wikis/display/objectgridprog/ObjectGrid+6.1+Fix+Pack+3+contents
Questions?
Thank you for attending! Please complete your session evaluation; the session number is WSI35.