BigMemory Reduces Mainframe Costs
Big Results for a Top Global Reservation System

BUSINESS WHITE PAPER

This white paper documents the deployment of Terracotta’s BigMemory to increase capacity and reduce mainframe use for one of the largest international reservation systems in production today. Among the results of the deployment were a reduction of 500 million daily mainframe transactions (80 percent of daily load), 50 percent faster response times, a 20x increase in capacity and 99.99 percent uptime.

TABLE OF CONTENTS
The Challenge
Customer Requirements
Initial Architecture
Solution Architecture with Terracotta BigMemory
BigMemory’s In-Memory Data Management Layer
Terracotta Server Array
Conclusion

The Challenge

The customer was faced with the challenge of expanding capacity to support rapidly growing traffic while simultaneously protecting core business functions, providing additional value-added services and significantly reducing costs. The existing production system relied on an IBM® System z® mainframe to manage all business-critical transactional data. The mainframe was capable of a maximum of 10,000 transactions per second (TPS), where each transaction translated into a business request (read or write) for a blob of data. The average payload of each request was 50 kilobytes (KB). Adding more capacity to the mainframe was cost-prohibitive for new initiatives.

The customer initiated development of a new middleware architecture that would run on inexpensive commodity hardware and scale independently of the mainframe, yielding a higher return for new initiatives and lowering capital expenditure for the core business. A major part of the proposed middleware architecture consisted of a common data service layer that would store critical business data in ultra-fast machine memory, backed by the mainframe as the system of record.
Customer Requirements

Scalability: The service must scale to meet business growth requirements while keeping operational and development costs to a minimum.

Availability: The service must meet the cross-enterprise Service Level Agreement (SLA) of 99.99 percent uptime.

Performance: The service must match the transactional capacity of the mainframe.

Operations: The service should provide a rich monitoring and management tool set.

Initial Architecture

The architecture prior to the introduction of the Terracotta BigMemory data layer consisted of clusters of multiple applications connected to a back-end mainframe via MQSeries® for TPF.

[Figure 1: Initial architecture without Terracotta’s distributed cache — a travel agent network (16 application servers, 3,500 TPS), a web services cluster (hundreds of application servers, 4,500 TPS) and a major travel website (12 application servers, 1,000 TPS) all connect through MQ/TPF to an IBM System z mainframe.]

Solution Architecture with Terracotta BigMemory

The solution architecture used Terracotta BigMemory to replace the mainframe for more than 99 percent of the read and write transactions. The data access layer was re-implemented as a scalable in-memory service behind a message queue. The in-memory service is available enterprise-wide, providing a common, scalable means to offload mainframe usage with predictable performance and latency. Data lookups are read from the in-memory store, faulting to the mainframe only on a cache miss. Data updates are written directly to the in-memory store and written asynchronously to the mainframe via a durable write-behind queue.
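The read-through/write-behind pattern described above can be sketched in a few lines. This is a minimal, illustrative sketch in plain Java collections, not the actual Ehcache/Terracotta API the deployment used; the mainframe is stood in for by a lookup function, and the durable queue by an in-process queue drained on demand rather than asynchronously.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Function;

// Sketch of the data service's read-through / write-behind pattern.
// A HashMap stands in for the in-memory store and a Function for the
// mainframe lookup; names here are hypothetical.
public class ReadThroughCache {
    private final Map<String, String> store = new HashMap<>();
    private final Queue<Map.Entry<String, String>> writeBehind = new ArrayDeque<>();
    private final Function<String, String> mainframe;

    public ReadThroughCache(Function<String, String> mainframe) {
        this.mainframe = mainframe;
    }

    // Read-through: serve from memory, fault to the mainframe only on a miss.
    public String get(String key) {
        return store.computeIfAbsent(key, mainframe);
    }

    // Write-behind: update memory immediately, queue the mainframe write.
    public void put(String key, String value) {
        store.put(key, value);
        writeBehind.add(Map.entry(key, value));
    }

    // In production the queue is durable and drained asynchronously; here
    // it is drained on demand for determinism. Returns entries flushed.
    public int flush(Map<String, String> systemOfRecord) {
        int n = 0;
        for (Map.Entry<String, String> e; (e = writeBehind.poll()) != null; n++) {
            systemOfRecord.put(e.getKey(), e.getValue());
        }
        return n;
    }
}
```

The key property, and the source of the mainframe offload, is that `get` touches the system of record only on a miss, while `put` never blocks on it.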
The customer’s 500-millisecond SLA requires that cache lookups happen very fast. To minimize latency, the in-memory service uses a layered caching strategy that keeps hot data in memory as close to upstream applications as possible. The top layer (“L1 Cache Layer” in Figure 3) is a scalable cluster of Java® processes on commodity hardware that implements the cache service’s message-oriented get/put API. The L1 cache layer is backed by a scalable and highly available Terracotta server array (“L2 Cache Layer” in Figure 3) that also runs on commodity hardware.

BigMemory’s In-Memory Data Management Layer

Each L1 node uses the Ehcache library to address cached data. The Ehcache library transparently keeps a hot set of cache data in memory for low-latency access. For operations on a cache element not already in memory, Ehcache automatically requests that cache entry from the Terracotta server array. The L1 layer is fault tolerant and highly available: should an L1 node fail, its unanswered cache requests will be handled by another L1 node. All in-memory data is backed by BigMemory’s Terracotta server array, which is itself fault tolerant and highly available. The L1 layer is also independently scalable, as L1 nodes may be added to meet increasing service load.
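The layered lookup can be illustrated with a small sketch: each L1 node holds a bounded, least-recently-used hot set in local memory and faults through to the shared L2 tier on a miss. This is an assumption-laden toy, not the Ehcache/Terracotta implementation; the class name, hot-set size and eviction policy are illustrative.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the L1/L2 layered lookup. The shared Map stands in for the
// Terracotta server array; the bounded LinkedHashMap plays the L1 hot set.
public class TieredCache {
    private final Map<String, String> l2;            // shared server-array tier
    private final LinkedHashMap<String, String> l1;  // per-node hot set (LRU)

    public TieredCache(Map<String, String> l2, int hotSetSize) {
        this.l2 = l2;
        // access-order LinkedHashMap gives us a simple LRU hot set
        this.l1 = new LinkedHashMap<>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > hotSetSize;  // evict least-recently-used entry
            }
        };
    }

    public String get(String key) {
        String v = l1.get(key);
        if (v == null) {                 // L1 miss: fault in from L2
            v = l2.get(key);
            if (v != null) l1.put(key, v);
        }
        return v;
    }
}
```

The point of the layering is latency: most requests are answered from the local hot set, and only the remainder pay a network hop to the L2 tier.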
[Figure 2: Solution architecture with a scalable cache service using Terracotta BigMemory — the three application tiers (travel agent network, 16 application servers at 3,500 TPS; web services cluster, hundreds of application servers at 4,500 TPS; major travel website, 12 application servers at 1,000 TPS) call a Data Service API over MOM/MQ; lookups fault to the IBM System z mainframe over MQ/TPF only on a cache miss, and updates reach the mainframe through a durable write-behind queue.]

[Figure 3: Detail of BigMemory’s service architecture — an L1 cache layer of BigMemory-enabled Java applications on commodity app servers (scale up) connects over TCP to the L2 cache layer, a Terracotta server array of active and mirror servers arranged in commodity-server stripes (scale out) providing durability, mirroring and striping, with monitoring through the Developer Console plug-in and Operations Center.]
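Central to Figure 3’s L2 tier is off-heap storage, explained in the next section: values live as bytes in memory allocated outside the Java heap, so the garbage collector never scans them and only a small on-heap index remains. The idea can be sketched with a direct ByteBuffer; this is a toy append-only store for illustration only, not the BigMemory implementation.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the off-heap idea: payload bytes live in a direct
// (off-heap) buffer; the GC sees only the small on-heap index.
public class OffHeapStore {
    private final ByteBuffer arena = ByteBuffer.allocateDirect(1 << 20); // 1 MB off-heap arena
    private final Map<String, int[]> index = new HashMap<>();            // key -> {offset, length}

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        int offset = arena.position();
        arena.put(bytes);                                 // copy payload off-heap
        index.put(key, new int[] { offset, bytes.length });
    }

    public String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        byte[] out = new byte[loc[1]];
        ByteBuffer view = arena.duplicate();              // independent position, shared bytes
        view.position(loc[0]);
        view.get(out);                                    // copy payload back on-heap
        return new String(out, StandardCharsets.UTF_8);
    }
}
```

Because the payloads never become heap objects, growing the store does not grow the set of objects the collector must trace, which is why pause times stay flat as the data set grows.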
Terracotta Server Array

The Terracotta server array (L2) is an array of Java server processes on commodity hardware that provides durability, mirroring, striping and scalability to the in-memory service. Like the L1 layer, each L2 node maintains an in-memory hot set of data for low-latency access, with a disk-backed store for durability and access to very large data sets. The L2 in-memory service uses BigMemory to provide an in-process, off-heap, in-memory data store that is not subject to garbage collection. This allows each L2 node to store hundreds of gigabytes of data in memory on a single Java Virtual Machine (JVM®) without suffering the long garbage collection pauses that would violate the customer’s SLA. BigMemory consolidates the hardware footprint of the in-memory service by allowing hundreds of gigabytes of data in memory on a single server.

BigMemory is highly available and independently scalable by virtue of its striping and mirroring characteristics. Two (or more) mirrored L2 nodes constitute a “stripe” in the BigMemory server array. Each stripe is fault tolerant and highly available: should any of the mirrored L2 nodes within a stripe go offline, its service load automatically fails over to another mirror node within that stripe. The L2 layer is independently scalable, as L2 stripes may be added to meet increasing service load.

Conclusion

After extensive and rigorous testing to ensure it would meet the customer’s stringent performance and reliability requirements, Terracotta BigMemory was deployed into production on customer-facing applications.
After BigMemory proved its performance and stability in a limited production environment, the customer rolled it out across a wide range of customer applications, offloading 80 percent of requests from the mainframe and yielding a cost savings of millions of dollars per year. The metrics below tell the before-and-after story.

                                 Initial Architecture                Solution Architecture with Terracotta
Throughput                       ~10K TPS                            >12K TPS
Uptime                           99.99%                              99.99%
SLA                              3 seconds                           500 ms
SLA Adherence                    99.98%                              99.999%
Infrastructure                   IBM System z mainframe with         6 commodity blades
                                 per-transaction cost
Mainframe transactions per day   >500MM                              <1,000

ABOUT SOFTWARE AG

Software AG helps organizations achieve their business objectives faster. The company’s big data, integration and business process technologies enable customers to drive operational efficiency, modernize their systems and optimize processes for smarter decisions and better service. Building on over 40 years of customer-centric innovation, the company is ranked as a “leader” in 15 market categories, fueled by core product families Adabas-Natural, Alfabet, Apama, ARIS, Terracotta and webMethods. Learn more at www.SoftwareAG.com.

© 2014 Software AG. All rights reserved. Software AG and all Software AG products are either trademarks or registered trademarks of Software AG. Other product and company names mentioned herein may be the trademarks of their respective owners. SAG_Terracotta_BigMemory_Reduces_Mainframe_Costs_4PG_WP_Jan14