The document summarizes a technical white paper produced jointly by SAP and IBM that describes a proof of concept project testing the performance of SAP Convergent Invoicing software handling large data volumes. The project used an IBM enterprise architecture and demonstrated meeting key performance indicators for a telecommunications scenario, including uploading 1.5 billion items, billing 2.5 million customers, and invoicing in under 18 hours. Using IBM Easy Tier storage software reduced the processing time by over 30% and IBM Storwize V7000 storage eliminated performance bottlenecks.
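As a rough sanity check on the KPIs quoted above (figures taken directly from the summary; the arithmetic is illustrative only), the implied sustained throughput can be computed:

```python
# Back-of-the-envelope throughput implied by the PoC figures above:
# 1.5 billion billable items processed in under 18 hours.
items = 1_500_000_000
hours = 18

items_per_hour = items / hours
items_per_second = items_per_hour / 3600

print(f"{items_per_hour:,.0f} items/hour")    # roughly 83.3 million/hour
print(f"{items_per_second:,.0f} items/second")

# A >30% reduction from Easy Tier implies the same load previously
# took at least 18 / (1 - 0.30) hours.
baseline_hours = hours / (1 - 0.30)
print(f"baseline: at least {baseline_hours:.1f} hours")
```

This works out to over 23,000 items per second sustained for the full run, which is the scale the storage tiering had to keep up with.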
The IBM z13 - January 14, 2015 - IBM Latin America Hardware Announcement LG15... – Anderson Bassani
IBM announces the new IBM z13 system, which delivers up to 40% more total capacity than the prior zEC12 system. Key features of the z13 include support for up to 10TB of memory, new FICON Express16S channels for storage connectivity, simultaneous multithreading to improve Linux and zIIP workload performance, and vector processing to accelerate analytics workloads. The z13 also provides improved security, availability, and manageability. Existing zEnterprise EC12 and zEnterprise 196 systems can be upgraded to the new z13 configuration.
This document provides an overview of IBM Capacity Management Analytics (CMA). CMA is a solution that helps customers manage capacity across their IT infrastructure through features like systems management and optimization, software cost analysis, capacity planning and forecasting, and problem identification. The document outlines the various components and uses cases of CMA and how it can help customers optimize resources, manage costs, plan future capacity needs, and identify potential problems.
Automotive data integration: An example of a successful project structure – ETLSolutions
An automotive manufacturer implemented a successful data integration project with ETL Solutions to integrate data from 200 dealers across 16 dealer management systems. The project was completed in less than 7 months and has been running successfully for 6 years. One data feed now provides over 90% of updates to the manufacturer's marketing database. The project structure allowed for flexible, independent data extraction from each dealer with reusable transformation code.
Ramco Cement Limited is the 5th largest cement producer in India, with a total production capacity of 16.5 MTPA across 9 plants.
Ramco implemented a business intelligence system to collect operational and transactional data across plants and from customers/dealers. This helped identify inefficiencies, improve forecasting and dealer performance, and increase profits.
Critical success factors for Ramco's BI system included identifying issues across business areas, providing dynamic solutions, visualizing goals/KPIs, standardizing processes, and increasing accessibility of real-time data.
To further enhance marketing analytics, Ramco can integrate sales and advertising data by region, target high-potential areas, and measure billboard effectiveness.
Data integration case study: Automotive industry – ETLSolutions
Our Automotive consultants use our data integration software to integrate data from the varied systems used by Automotive dealers. Read on to find out how we have streamlined communications across a major manufacturer's network.
The document discusses how ERP systems are essential for e-commerce and are being integrated with new technologies. It describes how application service providers allow smaller companies affordable access to ERP without large upfront costs. Emerging standards like Web services, XML, and RFID are allowing easier system integration and data sharing between businesses and their ERP systems.
Profitability & Cost Management Cloud Service: Have It Your Way – Alithya
This document provides an overview of Oracle's Profitability and Cost Management Cloud Service (PCMCS). It begins with an introduction to PCMCS and its embedded analytics capabilities. It then discusses the performance ledger applications for complex computations and flexible allocations. The document outlines features like application management and reporting. Finally, it reviews the PCMCS roadmap and recent releases of the HPCM product, including the new Management Ledger model type.
Taking the Next Step Forward in DB2 - Why BMC and CDB are merging technologies. – BMC Software
The combination of BMC and CDB technologies changes the game for DB2 customers by delivering improvements in application availability and cost optimization. Read more about this acquisition: http://newsroom.bmc.com/phoenix.zhtml?c=253321&p=irol-newsArticle&ID=2009032
Combined CO-PA allows organizations to analyze profit-related transactions (such as invoicing a customer or consumption through delivery) both as value fields and as accounts posted in financial accounting.
This document provides an overview of enterprise resource planning (ERP) solutions and SAP R/3. It describes the evolution of ERP from integrated systems for individual business functions to integrated solutions for entire organizations and supply chains. It also outlines the objectives, components, benefits, and major vendors of ERP systems. Specifically for SAP R/3, it details its 3-tier architecture, functional modules, integration capabilities, and evolution to mySAP and SAP NetWeaver platforms.
The document summarizes a presentation on profitability management using Oracle's Hyperion Profitability and Cost Management (HPCM) solution. It provides an agenda for the presentation including an introduction to profitability management needs, how HPCM enables profitability management, a case study, and Q&A. It also provides an overview of the consulting partner Edgewater Ranzal and their experience delivering EPM and BI projects including HPCM.
i Boss for Cnf is a specialized Enterprise Resource Planning (ERP) system catering to the 3PL, CNF, and logistics industry. Being modular, the ERP serves all departments from a centralized database. Its built-in business processes follow standard practices, helping companies manage their operations efficiently and improve their bottom line. The package's user-friendliness lets users work efficiently with the ERP system and generate the right MIS reports for management. Its feature-rich, integrated functions make it a preferred ERP system.
The product has been developed with security, scalability, and customizability in mind, in line with today's and tomorrow's technology requirements.
The system can be hosted centrally and accessed over the internet from any location, offering a real-time view of data transactions. Multi-level configurable security links the head office, branch offices, and sites seamlessly over the internet or the company's intranet.
'I BOSS' is very useful for large and mid-size Cnf and 3PL operators.
This document discusses the role of information technology (IT) in supply chain management (SCM). It defines SCM and its objectives such as creating value for customers and profitability. It explains that IT helps managers understand customer demands, inventory levels, production needs, and delivery logistics. Enterprise resource planning (ERP) systems and customer relationship management (CRM) software are key IT tools that improve coordination between suppliers, manufacturers, and customers to achieve efficient SCM.
This document provides an overview and agenda for an SAP MM training course. The course aims to prepare participants to understand the basic structure, procedures, and processes of SAP's Material Management module. The agenda covers topics such as an introduction to SAP modules, the components and processes within SAP MM like purchasing, inventory management, and invoice verification. It also lists where to find additional details on SAP best practices and provides the benefits of taking the course, such as gaining knowledge to work as an SAP MM consultant. The target audience includes procurement and inventory professionals, auditors, and fresh graduates.
Rajesh Kumar Rout has over 4 years of experience in SAP FI/CO implementation and configuration. He has worked on 7 SAP projects from blueprinting to go-live. His skills include configuration of FI master data, GL, AR, AP, and New General Ledger. He has expertise in SAP ECC versions 4.6c, 4.7, 5.0 and 6.0. Rout provides full-time production support for the FI/CO modules of a large pharmaceutical company. His responsibilities include incident and problem management, testing, custom development and configuration changes.
DHL Global Forwarding implemented an IBM Cognos TM1 system to enable its finance organization to more easily create reliable budgets and forecasts. The IBM solution provides real-time insight into complex financial data. It allows fast access to important changing business data. Key benefits include a reliable system with minimal maintenance, easy reporting, increased access to consolidated data from multiple sources, and rapid re-forecasting capabilities.
The document discusses how e-business impacts supply chain performance and different industries. It provides examples of how e-business has been applied in the PC, book, grocery, and MRO supplies industries. Key impacts of e-business include reduced costs through lower inventory and improved responsiveness through 24/7 access and faster product introduction. The degree of benefit depends on the industry and how easy e-business is to implement for a particular company's supply chain.
Telecom Italia OSS transformation roadmap, Marco Daccò, Venice 2010 – Marco Daccò
The document summarizes Telecom Italia's transformation of its service fulfillment processes and underlying technology. It describes Telecom Italia's background and the drivers for change, including new technologies, market needs, and regulatory pressures. It outlines the evolution, including adopting standards-based architectures, rationalizing legacy systems, and implementing a service-oriented architecture. Key benefits included improved time to market, customer experience, and cost reductions. Accenture played a role in all phases of the transformation projects.
This document provides a step-by-step guide on using Business Transaction Events (BTEs) as an enhancement technique in SAP's Financial Accounting module. It describes what BTEs are, the difference between BTEs and BADIs, the two types of BTE interfaces, and provides an example of how to configure a BTE to copy an assignment field with a custom value when accounting documents are posted for a specific company code. The document outlines finding the relevant BTE, copying the sample function module, writing ABAP code to update the field, saving and activating the function module, and assigning it to the appropriate event, country and application.
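BTEs are essentially publish/subscribe hooks: SAP raises an event at a fixed point (such as document posting), and every function module registered for that event and conforming to its interface is called. As a language-neutral illustration of that pattern (a hypothetical sketch in Python, not actual ABAP or any SAP API; all names here are invented), it looks like this:

```python
# Hypothetical sketch of the publish/subscribe pattern behind BTEs.
# Event and handler names (DOCUMENT_POSTED, copy_assignment) are
# illustrative only, not SAP's.

subscribers = {}  # event name -> list of handler functions

def subscribe(event, handler):
    subscribers.setdefault(event, []).append(handler)

def publish(event, document):
    # Like a BTE: every registered handler sees (and may modify) the document.
    for handler in subscribers.get(event, []):
        handler(document)

def copy_assignment(document):
    # Analogue of the guide's example: for one company code, fill the
    # assignment field with a custom value at posting time.
    if document.get("company_code") == "1000":
        document["assignment"] = document["reference"]

subscribe("DOCUMENT_POSTED", copy_assignment)

doc = {"company_code": "1000", "reference": "INV-42", "assignment": ""}
publish("DOCUMENT_POSTED", doc)
print(doc["assignment"])  # INV-42
```

The appeal of this design, in SAP as here, is that the enhancement is registered externally: the posting logic never changes, and handlers can be added or removed per event, country, and application.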
Since its introduction in version 11.1.1.x, Oracle Hyperion Profitability and Cost Management (HPCM) has allowed companies to gain a deeper level of insight into their business performance than ever before. Join us to hear how clients currently tackle profitability questions and how HPCM helps them produce actionable insights into cost and profitability. We will discuss standard HPCM model functionality, how HPCM drives organizational performance by discovering the drivers of cost and profitability, and how you can empower your users with the visibility and flexibility to improve resource alignment.
During this webinar we will review the following:
What Profitability means
How clients today answer profitability questions
HPCM models (three models)
Use Cases - How Companies are using HPCM
How HPCM fits into the EPM product suite
Look into HPCM standard model build details
Quick Demo of HPCM model
Features that are available inside the HPCM model
Product Spotlight: Oracle Profitability and Cost Management – InnovusPartners
- Oracle Hyperion Profitability and Cost Management (HPCM) is an Oracle EPM system module that allows users to compute profitability for business segments, customers, and products in an integrated and consistent manner.
- It provides a pre-built framework for profitability modeling, graphical traceability maps, and genealogy reporting to show the flow of costs and revenues between stages.
- HPCM helps address challenges in measuring profitability and costs like disparate models, time lags, insufficient data, and high maintenance costs through its packaged functionality, user-driven rules, and tight integration with other Hyperion modules.
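The core mechanic such tools package — driver-based allocation of a cost pool across targets, with each resulting flow traceable — can be sketched generically. This is a hypothetical illustration of the technique, not HPCM's actual API; the pool, driver, and product names are invented:

```python
# Hypothetical sketch of driver-based cost allocation, the mechanic
# behind HPCM-style profitability models. Figures are illustrative only.

def allocate(pool_cost, drivers):
    """Split pool_cost across targets in proportion to driver values,
    returning the per-target amounts (the traceable cost flows)."""
    total = sum(drivers.values())
    return {target: pool_cost * value / total
            for target, value in drivers.items()}

# Allocate a 90,000 IT cost pool to products by support tickets raised.
flows = allocate(90_000, {"ProductA": 600, "ProductB": 300, "ProductC": 100})
print(flows)  # {'ProductA': 54000.0, 'ProductB': 27000.0, 'ProductC': 9000.0}
```

A real model chains many such stages (resources to activities to products), which is what the traceability maps and genealogy reports mentioned above visualize.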
This document provides an overview of profitability and cost management solutions. It begins with a disclaimer and agenda. It then discusses how profitability analysis can expose hidden costs and how traditional tools like spreadsheets are insufficient for effective profitability analysis. The solution overview shows how Oracle EPM connects management processes. It also outlines a world class profitability and cost management process. Key components for implementing best practices are identified. The document then discusses delivering world class profitability through activities like creating meaningful cost models, examining profit and cost details, identifying cost causality, and evaluating scenarios. Customer success stories and the HPCM value proposition are presented. Finally, a simple example of a bikes manufacturing company seeking profitability insights is provided.
Maersk Line sought to automate financial processes in SAP to increase efficiency and reduce errors. They were maintaining over 2000 Excel/Access scripts which created governance issues. Winshuttle provided a solution to standardize processes with centralized templates while leveraging existing Excel skills. Maersk realized a 15% productivity increase and moved 50 employees to higher tasks. Transaction times decreased by 90% and errors were reduced.
Kai Wähner – Real World Use Cases for Realtime In-Memory Computing - NoSQL ma... – NoSQLmatters
Kai Wähner – Real World Use Cases for Realtime In-Memory Computing
NoSQL is not just about different storage alternatives such as document stores, key-value stores, graphs, or column-based databases. The hardware is also becoming much more important. Besides conventional disks and SSDs, enterprises increasingly use in-memory storage, because a distributed in-memory data grid provides very fast data access and updates. While its performance varies with many factors, it is not uncommon to be 100 times faster than a corresponding database implementation. For this reason and others described in this session, in-memory computing is a great solution for lifting the burden of big data, reducing reliance on costly transactional systems, and building highly scalable, fault-tolerant applications. The session begins with a short introduction to in-memory computing. Afterwards, different frameworks and product alternatives for implementing in-memory solutions are discussed. Finally, the main part of the session shows several real-world use cases where in-memory computing delivers business value by supercharging the infrastructure.
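The speed argument rests on where reads land. A minimal sketch of the idea (illustrative only, not any particular data grid product): one store answers every read from a dict in memory, the other re-reads a file on each get.

```python
import os
import tempfile
import time

# Minimal illustration of in-memory vs. disk-backed key-value access.

class DiskStore:
    """Every get() goes back to the filesystem."""
    def __init__(self, path):
        self.path = path
    def put(self, key, value):
        with open(os.path.join(self.path, key), "w") as f:
            f.write(value)
    def get(self, key):
        with open(os.path.join(self.path, key)) as f:
            return f.read()

class MemoryStore:
    """Every get() is a dict lookup."""
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data[key]

tmp = tempfile.mkdtemp()
disk, mem = DiskStore(tmp), MemoryStore()
disk.put("k", "v")
mem.put("k", "v")

def timed(store, n=5000):
    start = time.perf_counter()
    for _ in range(n):
        store.get("k")
    return time.perf_counter() - start

# On typical hardware the in-memory path is orders of magnitude faster;
# real grids add partitioning and replication on top of this basic idea.
print(f"disk: {timed(disk):.4f}s  memory: {timed(mem):.4f}s")
```

The gap widens further once the disk path involves network round-trips to a database, which is where the "100 times faster" claims in the abstract come from.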
William B. Preston is an experienced Executive Project Manager with over 30 years of experience leading complex IT projects at IBM. He has a proven track record of managing global teams and large budgets to deliver projects on time and under budget. His areas of expertise include CRM systems, data management, and Agile development methodologies. He holds certifications in Project Management from IBM and PMI.
The document discusses IBM's Z strategy and digital transformation model. It highlights how IBM Z continues to drive the global economy by processing billions of daily transactions. It also outlines IBM's digital transformation model for clients, which includes exposing APIs to enable apps and data, evolving to automate delivery pipelines, optimizing with analytics, and predicting and responding to service interruptions. The model is meant to help clients address digital transformation needs, leverage existing IBM Z assets to accelerate transformation, and achieve business and technical goals.
1. The document discusses cloud, DevOps, IT4IT, and consumption-based pricing models for SAP, SAP HANA, S/4HANA, BW on HANA, and SaaS. It addresses customer challenges around flexibility, high availability, cost effectiveness, and agility.
2. The provider aims to maximize business value for customers by enhancing agility and flexibility while focusing on operational excellence and efficiency. This will help customers reduce business costs through a leaner IT environment, shift from Capex to Opex, and reduce security risks and costs.
3. Key principles of the consumption-based model include paying only for what is used, elastic scalability up and down with no
Build end-to-end solutions with BlueMix, Avi Vizel & Ziv Dai, IBM - Codemotion Tel Aviv
The document discusses IBM's cloud platform Bluemix. It provides an overview of Bluemix, describing it as an open platform for developing and hosting applications that simplifies tasks associated with managing infrastructure at internet scale. Bluemix is built on IBM's Cloud Operating Environment architecture using Cloud Foundry as an open source PaaS. It enables developers to rapidly build, deploy, and manage cloud applications while tapping into available services and runtimes provided by IBM and other ecosystem partners. The document outlines some key Bluemix concepts and components such as applications, services, organizations/spaces, and buildpacks.
The document discusses whether a telecom operator should deploy a Software-as-a-Service (SaaS) billing solution. It notes that telecom services and technologies have increased in complexity, creating challenges for outdated legacy billing systems. Some mid-size and small operators have already adopted SaaS billing solutions to reduce costs and quickly launch new services. The document argues that SaaS billing adoption will likely increase over time for large operators as well, as the solutions provide benefits around flexibility, scalability and total cost of ownership. It concludes that telecom operators should evaluate their existing infrastructure, finances and needs to determine if a transition to SaaS billing is appropriate for their situation.
Postal operators face pressure to increase revenue and reduce costs. IBM offers solutions to help with demand planning, sort centre management, transportation management, and cost reduction across the postal logistics supply chain. The solution forecasts demand, optimizes resource allocation, tracks mail and resources in real time, and supports improved decision making. Process changes may also be needed to fully realize the benefits of new technology solutions.
Accelerate Your Signature Banking Applications with IBM Storage Offerings - Paula Koziol
Signature users can cut application run and response times by as much as 50% by applying the latest IBM Storage offerings. Hear about an example Signature user's experience and benefits with IBM Flash. Also, hear about IBM's direction with the IBM i processor and get answers to questions you may have about upgrading your IT infrastructure. Current data growth, analytics, and real-time access needs have changed the storage landscape for our clients, particularly in banking. IBM's multi-billion-dollar investments in storage are making a significant impact on the speed, efficiency, and management of these needs. Offerings such as all-flash systems and software-defined storage have become especially attractive to our banking clients, who are both accelerating existing applications, such as core banking, and creating new applications demanding real-time access, such as cybersecurity and cognitive in payments. Learn how others in the financial services industry are addressing core banking, payments, and risk and compliance applications using IBM Storage offerings. In addition to Signature, other core banking examples applying flash storage within Fiserv include Premier, Precision, and XP2. Many of the same business benefits experienced within the banking industry could apply to you and your clients. Learn how you can easily implement these proven capabilities with your Signature application now.
Maximo and a roadmap for your IoT journey - Helen Fisher
For IBM customers, the Internet of Things (IoT) enables businesses to improve operations, rapidly connect devices, and lower costs. This is why IBM Maximo Asset Management now sits neatly in the Watson IoT portfolio. There are many business cases out there today for linking IoT and Maximo; IBM is not, however, diverting from its core value statements. Maximo is still about understanding asset availability, preventing failures, maximising resources, increasing reliability, understanding inventory needs and costs, and plant safety. Check out the key investment areas for 2016 and beyond.
Open Source and the New Economics of IT - Ingres CIO Doug Harr - Alfresco Software
http://blogs.alfresco.com/wp/webcasts
Open source ECM is proven to :
* Lower Total Cost of Ownership
* Eliminate licensing fees and vendor lock-in
* Deliver faster proofs-of-concept
* Provide a complete solution for managing all enterprise content
Many companies are already leveraging open source ECM to take control of their ever growing business content at a fraction of the cost of proprietary ECM market solutions and without the danger of vendor lock-in.
The Ingres ECM Bundle for Alfresco enables innovative document management, team collaboration, and knowledge management applications.
Basing the ECM solution on Ingres Database guarantees unique high-availability features that make compliance with auditing requirements easier and less costly.
Ingres CIO Doug Harr shares examples on how he uses content management solutions from Alfresco.
He also discusses the significant trends affecting the IT market today.
Embracing The New Economics of IT by adopting open source ECM will help companies to:
* better maintain their systems during the economic downturn,
* keep essential projects alive, and
* pursue innovation that can help guarantee a competitive advantage when conditions improve.
Guruprasad Srinivasamurthy has over 15 years of experience in testing services for the telecom, banking, and investment domains. He has worked on projects for clients like Infosys, Bank of America, Rogers, Motorola, and Del Tree. Currently he is a project manager at Infosys working on a data-less key implementation for Fidelity Investments.
IBM Z Cost Reduction Opportunities. Are you missing out? - Precisely
Large companies continue to use mainframes for their most business-critical IT workloads. For these companies, finding ways to get more bang for the mainframe buck, in terms of both costs and performance, is always a high priority. Several converging trends in recent years have made it more challenging than ever to achieve the needed organizational performance at the best possible price point. IT leaders in mainframe departments are seeking out ways to speed processing, especially mundane processing tasks such as sorting, copying, merging, compression, and report generation.
Whether you are looking to get more value from your mainframe investment with enhanced performance, improved efficiency, or modernization, Precisely has multiple solutions for customers running IBM Z systems that can have a dramatic impact on cost and efficiency.
Watch this on-demand webinar to learn about:
• Optimizing mainframe sort workloads
• Leveraging your zIIP processors
• Modernizing your database environment
• Improving visibility into mainframe processing
apidays LIVE LONDON - Old meets New - Managing transactions on the edge of th... - apidays
The document discusses challenges in integrating new technologies like APIs, microservices, and cloud services with existing core systems. It emphasizes the need for consistency across the organization to integrate old and new systems effectively. Some key challenges discussed include transaction management across systems, data consistency, operations, and integrated reporting. The document also provides examples of integration patterns and technical enablers that can help with impedance matching between new and old systems.
How to Revamp your Legacy Applications For More Agility and Better Service - ... - NRB
With a series of new tools available on the mainframe, such as operational and decision management tools, real-time scoring, … you can revamp existing legacy applications (without rewriting them) by bridging them to the wealth of new capabilities available in the IBM mainframe environment.
Sigma Infosolutions leveraged its expertise in the Jasper BI Suite and reporting technologies to develop an application, along with its web engine, for a healthcare solution provider company in North America. The web application is an automated reporting engine that allows users to monitor, analyze, manage, forecast, and report the performance of various high-level business objectives. The automated engine leverages Jasper Reporting, Dashboards, and Analyzer tools for additional analysis and visualization. Built on the Jasper BI Suite, the application extends numerous customization capabilities to users through an analytical front end.
Next Gen ADM: The future of application services - IBM
Rapid technology advances are driving higher expectations around speed, efficiency, and resilience, and expectations for how technology should help meet business goals are rising. To meet increasing expectations around agility, time to value, and cost optimization, businesses are seeking new ways to manage apps. Born-digital companies are setting new standards for speed, efficiency, and resilience. We will discuss how companies can optimize the core, unlock legacy, and unleash digital to thrive in the new normal.
This document discusses how automation can improve business processes by reducing inefficiencies. It describes how one company, SwiftAnt, helps clients migrate their legacy integration systems to new platforms using zero capital expenditure models. This allows clients to realize benefits like 60% lower total cost of ownership for electronic data interchange systems. SwiftAnt also discusses how it uses a structured agile process and focused service offerings to help clients successfully implement electronic data interchange and overcome integration challenges.
This CV summarizes Marc de Leijer's professional experience and qualifications. He has over 15 years of experience in technical and functional analysis, programming, testing, and application engineering primarily in the banking industry. His technical skills include Mainframe, Windows, Java, SQL, and various programming languages. He has held roles as a functional analyst, technical analyst, programmer, and application engineer at KBC and BNP Paribas Fortis.
Learn about Success stories and recommendations from IBM clients and find out how organizations are taking bold, new approaches to dramatically improve economics and innovation through IT efficiency.
For more information on IBM System z, visit http://ibm.co/PNo9Cb.
Visit the official Scribd Channel of IBM India Smarter Computing at http://bit.ly/VwO86R to get access to more documents.
This white paper summarizes the results of benchmark testing of Microsoft BizTalk Server 2006 and SQL Server 2005 running on Unisys ES7000/one Enterprise Servers. The testing achieved unprecedented throughput levels, with the Latency Application scenario reaching 1,156 orchestrations per second. This level of performance far surpassed any previously recorded by Microsoft for BizTalk Server. The results demonstrate the scalability of BizTalk Server on the ES7000 platform and that further optimizations could allow even higher performance.
Similar to SAP and IBM Demonstrate Capability of Handling High Billing Volume in a Telecommunications Scenario (20)
This IBM Redpaper provides a brief overview of OpenStack and a basic familiarity with its usage with the IBM XIV Storage System Gen3. The illustration scenario that is presented uses an IaaS implementation of the OpenStack Folsom release with Ubuntu Linux servers and the IBM Storage Driver for OpenStack. For more information on IBM Storage Systems, visit http://ibm.co/LIg7gk.
Visit http://bit.ly/KWh5Dx to 'Follow' the official Twitter handle of IBM India Smarter Computing.
Learn why all-flash storage needs end-to-end storage efficiency. For more information on IBM FlashSystem, visit http://ibm.co/10KodHl.
Learn about vSphere Storage API for Array Integration on the IBM Storwize family. IBM Storwize V7000 Unified combines the block storage capabilities of Storwize V7000 with file storage capabilities into a single system for greater ease of management and efficiency. For more information on IBM Storage Systems, visit http://ibm.co/LIg7gk.
Learn about IBM FlashSystem 840 and its complete product specification in this Redbook. FlashSystem 840 provides scalable performance for the most demanding enterprise class applications. IBM FlashSystem 840 accelerates response times with IBM MicroLatency to enable faster decision making. For more information on IBM FlashSystem, visit http://ibm.co/10KodHl.
Visit http://on.fb.me/LT4gdu to 'Like' the official Facebook page of IBM India Smarter Computing.
Learn about the IBM System x3250 M5. The x3250 M5 offers energy-efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a green environment; energy-efficient planar components help lower operational costs. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210746104/IBM-System-x3250-M5
This Redbook talks about the product specification of IBM NeXtScale nx360 M4. The NeXtScale nx360 M4 server provides a dense, flexible solution with a low total cost of ownership (TCO). The half-wide, dual-socket NeXtScale nx360 M4 server is designed for data centers that require high performance but are constrained by floor space. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210745680/IBM-NeXtScale-nx360-M4
The IBM System x3650 M4 HD is a 2-socket 2U rack-optimized server that supports up to 32 internal drives and features an innovative design for optimal performance, uptime, and dense storage. It offers excellent reliability, availability, and serviceability for improved business environments. The server is designed for easy deployment, integration, service, and management.
Here are the product specification for IBM System x3300 M4. This product can be managed remotely.The x3300 M4 server contains IBM IMM2, which provides advanced service-processor control, monitoring, and an alerting function. The IMM2 lights LEDs to help you diagnose the problem, records the error in the event log, and alerts you to the problem. For more information on System x, visit http://ibm.co/Q7m3iQ.
Learn about IBM System x iDataPlex dx360 M4. IBM System x iDataPlex is an innovative data center solution that maximizes performance and optimizes energy and space efficiency. The iDataPlex solution provides customers with outstanding energy and cooling efficiency, multi-rack level manageability, complete flexibility in configuration, and minimal deployment effort. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210744055/IBM-System-x-iDataPlex-dx360-M4
The IBM System x3500 M4 server provides powerful and scalable performance for business applications in an energy efficient tower or rack design. It features the latest Intel Xeon E5-2600 v2 or E5-2600 processors with up to 24 cores, 768GB RAM, 32 hard drives, and 8 PCIe slots. Comprehensive systems management tools and redundant components help ensure high availability, while its small footprint and 80 Plus Platinum power supplies reduce data center costs.
Learn about system specification for IBM System x3550 M4. The x3550 M4 offers numerous features to boost performance, improve scalability, and reduce costs. Improves productivity by offering superior system performance with up to 12-core processors, up to 30 MB of L3 cache, and up to two 8 GT/s QPI interconnect links. For more information on System x, visit http://ibm.co/Q7m3iQ.
Learn about IBM System x3650 M4. The x3650 M4 is an outstanding 2U two-socket business-critical server, offering improved performance and pay-as-you grow flexibility along with new features that improve server management capability. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210741926/IBM-System-x3650-M4
Learn about the product specification of IBM System x3500 M3. System x3500 M3 has an energy-efficient design which works in conjunction with the IMM to govern fan rotation based on the readings that it delivers. This saves money under normal conditions because the fans do not have to spin at high speed. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210741626/IBM-System-x3500-M3
Learn about the IBM System x3400 M3. The x3400 M3 offers numerous features to boost performance and reduce costs, and it can grow with your application requirements. Powerful systems management features simplify local and remote management of the x3400 M3. For more information on System x, visit http://ibm.co/Q7m3iQ.
Learn about IBM System 3250 M3 which is a single-socket server that offers new levels of performance and flexibility
to help you respond quickly to changing business demands. Cost-effective and compact, it is well suited to small to mid-sized businesses, as well as large enterprises. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210740347/IBM-System-x3250-M3
Learn about IBM System x3200 M3 and its specifications. The System x3200 M3 features easy installation and management with a rich set of options for hard disk drives and memory. The efficient design helps to save energy and provide a better work environment with less heat and noise. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210739508/IBM-System-x3200-M3
Learn about the configuration of IBM PowerVC. IBM PowerVC is built on OpenStack, which controls large pools of server, storage, and networking resources throughout a data center. IBM Power Virtualization Center provides security services that support a secure environment. Installation takes just 20 minutes to get a virtual machine up and running. For more information on Power Systems, visit http://ibm.co/Lx6hfc.
Learn about IBM POWER7 virtualization performance. PowerVM Lx86 is a cross-platform virtualization solution that enables a wide range of x86 Linux applications to run on Power Systems platforms within a Linux on Power partition, without modification or recompilation of the workloads. For more information on Power Systems, visit http://ibm.co/Lx6hfc.
http://www.scribd.com/doc/210734237/A-Comparison-of-PowerVM-and-Vmware-Virtualization-Performance
This reference architecture document describes deploying the VMware vCloud Enterprise Suite on the IBM PureFlex System hardware platform. Key points:
- The vCloud Suite software provides components for managing and delivering cloud services, while the IBM PureFlex System provides an integrated hardware platform in a single chassis.
- The reference architecture focuses on installing the vCloud Suite management components as virtual machines on an ESXi host to manage consumer resources.
- The IBM PureFlex System provides servers, networking, and storage in a single chassis that can then be easily scaled out. This standardized deployment accelerates provisioning of cloud infrastructure.
- Deployment considerations cover systems management using IBM Flex System Manager, as well as server, networking, and storage configurations
Learn how x6, the sixth generation of EXA technology, is fast, agile, and resilient for emerging workloads, from Alex Yost, Vice President, IBM PureSystems and System x, IBM Systems and Technology Group. x6 drives cloud and big data for enterprises by achieving insight faster, thereby outperforming competitors. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210715795/X6-The-sixth-generation-of-EXA-Technology
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Infrastructure Challenges in Scaling RAG with Custom AI models - Zilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Full-RAG: A modern architecture for hyper-personalization - Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
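As a toy illustration of the general idea (this is not DIAR's actual algorithm; `trim_seed` and the stand-in coverage function are hypothetical), a greedy reducer can drop every byte whose removal leaves the coverage signal unchanged:

```python
def trim_seed(seed: bytes, coverage) -> bytes:
    """Greedily remove bytes that do not affect the coverage signal.

    `coverage` stands in for running the target and summarizing the
    behavior it exercises; here it can be any function bytes -> hashable.
    """
    baseline = coverage(seed)
    trimmed = bytearray(seed)
    i = 0
    while i < len(trimmed):
        candidate = trimmed[:i] + trimmed[i + 1:]
        if coverage(bytes(candidate)) == baseline:
            trimmed = candidate   # byte was uninteresting: drop it
        else:
            i += 1                # byte matters: keep it, move on
    return bytes(trimmed)

# Toy coverage signal: only the set of distinct uppercase letters matters
cov = lambda data: frozenset(b for b in data if 65 <= b <= 90)
print(trim_seed(b"A__BB__C", cov))  # b'ABC': filler bytes and the duplicate 'B' are dropped
```

A real fuzzer would of course measure edge coverage from instrumented executions rather than a toy predicate, but the sketch shows why a lean seed saves mutation budget: every removed byte is one less position to mutate.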
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence - IndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
SAP and IBM Demonstrate Capability of Handling High Billing Volume in a Telecommunications Scenario

Technical White Paper: Testing Performance and Scalability of the SAP® Convergent Invoicing Package with IBM Workload-Optimized Solutions and IBM Easy Tier

Participating Groups
• SAP Value Prototyping, Center of Excellence, SAP Germany
• IBM SAP International Competence Center, IBM Germany
• IBM Research and Development, IBM R&D Germany
Table of Contents
• Huge Billing Volumes: A Challenge to the Telecommunications Industry
• Scope of the Proof of Concept
• System Maintenance Not in Project Scope
• Component Overview of SAP Landscape Used in Project
• SAP Convergent Invoicing, Version 6
• IBM DB2 LUW 9.7 Optimized for SAP Software
• IBM eX5 Enterprise Systems
• IBM Storwize V7000
• IBM Easy Tier
• Design of the SAP Landscape
• SAP System Setup
• DB2 Best Practices
• Scenario Description
• Scenario Execution
• Results and Achievements
• Best Practices and Conclusions
• Resources
Telecommunications Scenario
The company has 50 million active customers, each placing 30 calls a day, producing a total of 1.5 billion billable items (BITs) per day, where each BIT represents one phone call, one SMS, or one other unit of service (for example, a ringtone or music download).
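The scenario's headline number follows directly from the customer and call figures; as a quick sanity check (a minimal sketch using only the figures quoted above):

```python
# Figures quoted in the scenario above
customers = 50_000_000   # active customers
calls_per_day = 30       # billable events per customer per day

bits_per_day = customers * calls_per_day
print(f"{bits_per_day:,}")  # 1,500,000,000 -> the 1.5 billion BITs per day in the text
```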
Executive Summary
This technical white paper, jointly produced by SAP and IBM, describes a project to test the performance of the SAP® Convergent Invoicing package. This software is used in various service industries, such as telecommunications, electronic toll collection, transportation, postal services, and Internet-based retail business. The test scenario was built around the requirements of a large telecommunications company.

The requirements of the company in the scenario helped establish the following key performance indicators (KPIs) for the test. The following tasks had to be accomplished in less than 18 hours:
• Upload of 1.5 billion BITs, with a minimum of 100,000 BITs uploaded per second
• Billing of 2.5 million business partners, which includes the aggregation of 2.275 billion BITs
• Invoicing of 2.5 million business partners (customers)
The performance project demonstrated
that all KPIs could be met by using the
following hardware: IBM System x X5
server, IBM Storwize V7000, IBM DB2,
and SUSE Linux Enterprise Server.
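A quick arithmetic check of the upload KPI (a sketch in Python; only the figures stated above are used):

```python
# Time-budget check: at the minimum required upload rate, how much of
# the 18-hour batch window does the 1.5 billion BIT upload consume?
bits_per_day = 1_500_000_000   # BITs to upload per day
min_upload_rate = 100_000      # minimum BITs per second (KPI)
window_hours = 18              # batch window for upload, billing, invoicing

upload_seconds = bits_per_day / min_upload_rate
upload_hours = upload_seconds / 3600
print(f"Upload takes {upload_hours:.2f} h of the {window_hours} h window")
```

At 100,000 BITs per second, the upload alone fits in roughly 4.2 hours, leaving the bulk of the window for billing and invoicing.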
This technical white paper describes the:
• Data environment chosen for billing
and invoicing and the scenarios tested
• Underlying IT infrastructure, SAP sys-
tem setup, and design reasons
• Database approach, design, and tun-
ing recommendations as implemented
on the IBM DB2 database
Project Team
Dilip Radhakrishnan (SAP): Project manager
Peter Jäger (SAP): Project coach
Markus Fehling (IBM): Project coach, storage specialist
Gerrit Graefe (SAP): Development architect, order to cash
Michael Stafenk (SAP): Value prototyping, DB2 expert
Ingo Dahm (SAP): Value prototyping, Linux expert
Torsten Fellhauer (SAP): Value prototyping, network expert
Elke Hartmann-Bakan (IBM): DB2 specialist
Holger Hellmuth (IBM): DB2 specialist
Jörn Klauke (IBM): DB2 specialist
Thomas Rech (IBM): DB2 specialist
Maik Gasterstädt (IBM): Storage specialist
Summary of Results
IBM Easy Tier1
was able to shorten
the processing time for all three
tasks (upload, billing, and invoicing)
from 23 hours to 16.5 hours, a reduc-
tion of over 30%. In addition, by using
storage virtualization, IBM Storwize
V7000 eliminated any storage perfor-
mance bottlenecks.
1 IBM Easy Tier is a software function within the IBM storage systems, designed to increase the IOPS performance. For more information, refer to the section
about IBM Easy Tier on p. 10.
Huge Billing Volumes:
A Challenge to the
Telecommunications Industry
Telecommunications companies face
the challenge of keeping detailed
information about millions of daily calls
and SMS messages – for both report-
ing and billing purposes. For example,
companies might need to recalculate
offered and applied tariffs based on the
buying behavior and usage pattern of
consumers.2
Large telecommunications
companies therefore frequently have to
aggregate billions of BITs every month
when they bill their customers. This high
transaction volume applies not only to
the telecommunications industry but
also to others such as:
• Web shops with customers download-
ing millions of music titles per day
• Postal services managing millions of
letters and packages
• Toll collection agencies tracking thou-
sands of cars passing each day along
hundreds of roads
• Transportation companies shipping
tens of thousands of containers
across the world
To process billions of records or trans-
actions per day, enterprises need a
high-performance IT infrastructure
that allows batch jobs to run quickly.
Consequently, companies are looking
to exploit technologies that promise to
significantly reduce the runtime of batch
jobs.
IBM Easy Tier can play an important
role here. The goal of this performance
project was to prove that IBM Easy Tier
can help reduce the overall batch
runtime.
Scope of the Proof of Concept
The main objective of the project was to
prove the capability of SAP Convergent
Invoicing to handle large data volumes
using an IBM enterprise-class, Intel
processor–based architecture, and to
demonstrate how much the customer
might benefit from intelligent storage
system architectures, such as IBM Easy
Tier. Accordingly, the project chose the
kind of data volume a large telecommu-
nications company might be expected
to handle:
• 50 million business partners (custom-
ers) in the system
• 30 billable items per business partner
per day, where each BIT represents
one call or one SMS message
• 2.5 million bills per day
The project operated under the
assumption that all business partners
(customers) would receive a monthly
bill and that the telecommunications
company performs billing runs only on
weekdays in order to keep weekend
system load low to allow for mainte-
nance. This assumption led to the
project team using the following data
volumes:
• Upload of 1.5 billion BITs per day
• Billing of 2.5 billion BITs per day,
distributed evenly over 2.5 million busi-
ness partners (customers)
• Invoicing of 2.5 million bills per day
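These volumes follow directly from the scenario assumptions; a small arithmetic sketch (assuming roughly 20 weekday billing runs per month, per the weekdays-only policy above):

```python
# Deriving the daily data volumes from the scenario assumptions.
customers = 50_000_000          # active business partners in the system
bits_per_customer_day = 30      # calls/SMS per customer per day
billing_days_per_month = 20     # assumed: weekday-only billing runs

upload_per_day = customers * bits_per_customer_day
bills_per_day = customers // billing_days_per_month

print(f"{upload_per_day:,} BITs uploaded per day")   # 1,500,000,000
print(f"{bills_per_day:,} bills per run day")        # 2,500,000
```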
2 This happens, for example, with mobile phone customers when the first free phone calls (or SMS messages) are not managed directly by the telecommunications company’s own network. The company will receive the
billable items from roaming partners with some delay. The order of incoming BITs might not be in the same order as the calls were made. In this case a recalculation is necessary to determine which call is free and
which is not. Depending on the result, the price per call might need to be recalculated as well.
These three tasks had to be processed
in less than 18 hours, allowing time for
further steps in the order-to-cash sce-
nario, such as payment, dunning runs,
or data extractions from the business
warehouse.
The proof of concept focused on
performance tuning of the upload
and billing part, because these parts
consume more than 90% of the entire
batch runtime. To make this test as real-
istic as possible, the project team tuned
performance with an SAP production
system in mind. This meant that every
tuning setting was verified as if it were
applicable to a real SAP production
environment. In other words, there was
no artificial benchmarking.
System Maintenance Not in
Project Scope
The project scope did not include system
maintenance, such as SAP software
upgrades or integrated backups, and
the paper does not cover this aspect. In
addition, the concepts of high availability
(HA) and disaster recovery (DR) are not
treated.
Component Overview of SAP
Landscape Used in Project
This section provides a brief overview
of the components used in the perfor-
mance test.
SAP Convergent Invoicing, Version 6
SAP Convergent Invoicing enables an
enterprise to pull information from sev-
eral billing streams and individually rated
service events and – from various rating
or charging systems – to consolidate
the information into a single invoice.
SAP Convergent Invoicing provides
a single view of customer data, with
historical items stored in the contract
accounts receivable and payable func-
tionality of the SAP software. Examples
of historical data include overdue open
items, disputed charges, or payments
made. All of this information can be
included in the final invoice in an easy-
to-understand format.
Figure 1. SAP Convergent Invoicing: billions of events each day – billable items (BITs) from legacy billing and rating systems and from SAP® Convergent Charging – flow into billing (BIT storage and processing) and then into invoice creation, producing customer and partner invoices for multichannel bill presentment. Open invoices pass to the contract accounts receivable and payable functionality in SAP Customer Financial Management (receivables management, payment processing, credit management, and partner settlement).
IBM DB2 LUW 9.7 Optimized for
SAP Software
SAP applications generate a vast amount of data in day-to-day operations, so no infrastructure component is more important than the database. IBM has pioneered the development of data management technologies that reduce the total cost of ownership for SAP solutions, improve performance, and help ensure a cohesive combination of application and database work. IBM DB2 9, optimized for SAP software, is an example.

Figure 2 illustrates the evolving road map of DB2 optimized for SAP software, looking back through four past releases and forward into the future. The road map shows how both companies introduce new functionality in a planned way, with smooth migrations from one version to the next.

IBM DB2 optimized for SAP software is the only SAP-supported database available that operates on all SAP-supported hardware environments, from Linux, Microsoft Windows, and UNIX to IBM System i and IBM System z. DB2 provides the widest choice of support for server, storage, and virtualization technology for SAP deployments. Plus, it can be shipped and integrated with SAP applications as a single product.

IBM and SAP experts are dedicated to working closely together to help ensure that IBM DB2 sets the standard for all other databases in the SAP ecosystem. In addition, IBM and SAP teams are performing joint projects that focus on a range of areas, including performance, benchmarking, first-of-a-kind functionality (the project was breaking new ground with these tests), best practices, or combinations thereof.

Figure 2. DB2 Optimized for the SAP Road Map (2004–2013)
• SAP NetWeaver® 2004: streamlined admin; streamlined install
• SAP NetWeaver 7.0: embedded database; TCO: self-tuning; minimal admin
• SAP NetWeaver 7.0 EHP 1: database performance warehouse; integrated workload management
• SAP NetWeaver 7.0 SR3: turnkey compression and HA solution; integrated MDC advisor; deferred table creation
• SAP NetWeaver 7.0 and higher: integrated near-line storage; integration of DB2 pureScale; MDC advisor stage 2
• DB2 Version 8.2.2: automatic storage admin; deployment optimized for SAP
• Version 9.1: compression; storage limits removed; selected autonomic/TCO features
• Version 9.5: integrated FlashCopy; threaded architecture; DPF scaling improvements; integrated and automatic HA and DR
• Version 9.7: full 360-degree monitoring; near-0 storage admin; extending online operations; even deeper deep compression
• DB2 9.8 pureScale: OLTP scale out; continuous availability; seamless OS and hardware maintenance
MDC = multidimensional clustering; TCO = total cost of ownership; DPF = database partition feature
For SAP and IBM customers, this tight
collaboration means real tangible value
in terms of performance, attractive
license and maintenance fees, easy
usability, and innovative technology that
can result in real savings. Specifically,
IBM DB2 provides real value by:
• Improving the SAP system response
time by up to 40%, protecting existing
hardware investment3
• Reducing, via compression, SAP
data storage needs by up to 70%,
dramatically saving on energy
costs, administration, and hardware
investment4
• Enabling automated SAP features that
can reduce database administration
time by 30% and free up resources to
better manage the business5
The proof of concept was meant to
be based on a combination of perfor-
mance, throughput, and best practices
with data volumes that had never been
tested before for that particular appli-
cation setup. Therefore, the layout and
setup of the database engine took
those aspects into account. It was
essentially a compromise between best
practices and performance require-
ments. To effectively handle this large
amount of data, the team applied DB2
compression, which reduced the calcu-
lated storage needs by approximately
60% – from 50 TB to 20 TB.
IBM eX5 Enterprise Systems
The IBM eX5 product portfolio – repre-
senting the fifth generation of servers
built on Enterprise X-Architecture – was
used as the SAP application server
in the proof of concept. IBM servers
with IBM eX5 technology are a major
component in ever-changing IT infra-
structures; they offer significant new
capabilities and features that address
the key requirements for customers with
SAP solution landscapes.
The IBM System x server portfolio
provides an ideal platform for SAP
applications that run virtualized in a
private cloud environment. With mul-
tiple workloads running on the same
server, performance remains important,
but reliability and availability become
more critical than ever. Enterprise serv-
ers with IBM eX5 technology are a key
component in a dynamic infrastructure
and offer significant new capabilities
and features that address the following
key requirements for SAP virtualization
solutions:
• Maximum memory with unique expan-
sion capabilities
• Fast and integrated data storage
options
• Logical partitioning of the IBM System
x server (FlexNode)
The ability to modify the memory
capacity independently of the proces-
sors, and the new high-speed local
storage options, mean this system
can be highly utilized, yielding the best
return on application investment. These
systems enable enterprises to grow
their processing, I/O, and memory
dimensions, provision what they need
now, and expand the system to meet
future requirements.
Memory Access for eX5 (MAX5)
MAX5 is the name of the memory scal-
ability subsystems – memory expansion
that can be added to eX5 servers.
MAX5 for the rack-mounted systems
3 SAP IT case study, GK12-4329-00 (12/07).
4 Refer to www.ibm.com/solutions/sap/us/en/landing/J233701A22235G06.html.
5 IWB case study, SPC03025-CHEN-01 (04/08).
(System x3690 X5, System x3850 X5,
and System x3950 X5) is in the form
of a 1U device that attaches below the
server.
IBM System x3690 X5
The x3690 X5 is positioned for SAP
large application servers and SAP
distributed applications. Often, it’s not
the capacity of processors that limits
virtualized systems for SAP solutions.
Instead, SAP virtualization solutions
depend more on the memory capac-
ity of the host systems. With MAX5
memory expansion the overall systems
can scale up without adding additional
servers or licenses.
The IBM System x3690 X5 is a scalable
2U, two-socket rack-optimized server.
The x3690 X5 is a system with the
same benefits known from the flagship
system x3850 X5.
See the following Web page:
www-03.ibm.com/systems/x/hardware/enterprise.
The IBM System x3690 X5 has the fol-
lowing main features:
• Two Intel Xeon E7 2800/4800/8800 series processors (up to 10 cores) or two Intel Xeon 7500 or 6500 family processors (up to 8 cores)
• Up to 1 TB RAM with MAX5 technology
IBM System x3850 X5
IBM System x enterprise servers are
the ideal platform for business-critical
and complex SAP applications, such
as database processing, customer
relationship management, and enter-
prise resource planning, as well as
highly consolidated, virtualized server
environments.
With multiple workloads running on
the same server, performance remains
important but reliability and availabil-
ity become more critical than ever.
Servers with IBM eX5 technology
are a major component in a dynamic
infrastructure and offer significant new
capabilities and features that address
key requirements for customers with
SAP landscapes.
The IBM System x3850 X5 has the fol-
lowing main features:
• Four Xeon E7 2800/4800/8800
series (6 core/8 core/10 core) or Xeon
6500/7500 series
• Scalable to eight sockets by connect-
ing two x3850 X5 servers together
• Up to 3 TB RAM with MAX5 technology
IBM Storwize V7000
Storwize V7000 is a powerful midrange
disk system, designed to be easy to use
and to enable rapid deployment without
additional resources. The Storwize V7000 system provides virtualized storage that offers greater efficiency and flexibility through built-in solid-state drive (SSD) optimization and thin-provisioning technologies.
Storwize V7000 advanced functions
also enable the nondisruptive migration
of data from existing storage, simplify-
ing implementation and minimizing
disruption to users. Storwize V7000 also
enables the virtualization and reuse of
existing disk systems, supporting a
greater potential return on investment
(ROI).
See the following Web page:
www-03.ibm.com/systems/storage/disk/storwize_v7000/index.html.
IBM Easy Tier
IBM Easy Tier is software functionality
within the IBM storage systems, and is
available for IBM Storwize V7000 as well
as IBM System Storage SAN Volume
Controller and IBM System Storage
DS8000 series.
Easy Tier is designed to decrease the
I/O response time and thereby increase
the input/output operations per second
(IOPS) performance. Easy Tier deter-
mines the appropriate tier of storage,
based on data access requirements,
and then automatically and nondisrup-
tively moves data to the appropriate tier
at the subvolume or sub-LUN (logical
unit number) level; typically between
SSDs and hard disk drives (HDDs). This
feature is designed to reduce, if not
eliminate, the amount of manual effort
involved.
The most critical workload for stor-
age systems is online transaction
processing (OLTP), more precisely
the random-read part of this work-
load. Because the workload is mostly
random, the data to be read must be fetched from its physical location on disk; it is typically not found in any cache.
lengthy response time in the case of
HDDs. Here, SSDs have a much bet-
ter response time; because they do
not have any mechanical parts, the
response time is just a fraction of the
HDD response time, even under load.
Because of performance aspects, the
best solution might be to store the
entire SAP database on SSDs. Even
though SSDs are much more expensive than HDDs per unit of capacity, the price per performance is lower for SSDs.
technologies, achieving low price per
capacity with HDDs and low price per
performance with SSDs.
Basically, Easy Tier monitors the per-
formance requirements of a virtual disk
(VDisk, LUN); it measures IOPS per
large block.
If a high clipping level is reached, the
data blocks are marked as “hot” and
moved from the lower, slower tier (HDD)
to the higher, faster tier (SSD). In addi-
tion, after the data has been cooled
(the IOPS requirements per block have
decreased) and the low clipping level
has been reached, the data is migrated
back from SSD to HDD.
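The clipping-level logic described above can be sketched as follows. This is a simplified illustration, not IBM's implementation; the threshold values and data layout are invented for the example:

```python
# Simplified sketch of Easy Tier-style tiering: per-extent IOPS are
# monitored, and extents move between HDD and SSD when they cross the
# high ("hot") or low ("cold") clipping levels.
HOT_CLIP = 500    # IOPS above which an extent is promoted (assumed value)
COLD_CLIP = 50    # IOPS below which an extent is demoted (assumed value)

def rebalance(extents):
    """extents: dict of extent id -> {'iops': int, 'tier': 'HDD' or 'SSD'}"""
    for ext in extents.values():
        if ext['tier'] == 'HDD' and ext['iops'] >= HOT_CLIP:
            ext['tier'] = 'SSD'     # hot block migrates up
        elif ext['tier'] == 'SSD' and ext['iops'] <= COLD_CLIP:
            ext['tier'] = 'HDD'     # cooled block migrates back down
    return extents

pool = {1: {'iops': 900, 'tier': 'HDD'},
        2: {'iops': 10,  'tier': 'SSD'},
        3: {'iops': 200, 'tier': 'HDD'}}
rebalance(pool)   # extent 1 moves to SSD, extent 2 back to HDD, 3 stays
```

The real function works at the subvolume/sub-LUN level and migrates data nondisruptively in the background, but the promote/demote decision follows this shape.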
Value of IBM Easy Tier to SAP
The workload of systems supporting
the SAP ERP application is defined by
OLTP. Typically, not all data in the SAP
system’s database will be accessed
during a given time frame (for example,
24 hours). Statistically, there are con-
tiguous areas accessed, and some of
them will be hot. The change rate of
the hot areas is not within minutes, but
most likely will remain over a longer
period of time – for example, 24 hours.
Here, Easy Tier is able to move these
hot areas from HDD to SSD, and as a
result, the SAP transaction time will be
reduced.
Design of the SAP Landscape
This section describes the setup of the
SAP landscape used for the perfor-
mance test.
SAP System Setup
The system landscape consisted of two
different SAP systems, with the SAP IDs
ETG and ETL. The ETG system simulated the rating engine that usually generates the billable items (BITs); it was not part of the performance evaluation and is not described in this paper. The ETG system was installed on four blade servers: three blades served as application servers, while the SAP database (DB) and SAP central instance (CI) were installed on the remaining blade.
From ETG, the billable items were sent
to the ETL system, which was the main
test system (DB and CI). This setup
guaranteed that the creation of BITs
would not influence the throughput of
the upload phase in the ETL target sys-
tem. Before the test started, the team
ensured that the injector system (ETG)
was not the limiting factor of the upload.
Figure 3. IBM Easy Tier: in an Easy Tier managed storage pool on the Storwize V7000, “hot” blocks of a logical volume migrate up to the SSD arrays and “cold” blocks migrate down to the HDD arrays.
The test landscape consisted of one server – the IBM System x3850 X5 – which hosted the database and the SAP central instance. The IBM System x3850 X5 (7145-AC1) consisted of:
• Four 8-core Xeon 7560 series 2.26
GHz processors
• 148 GB memory
• Two HBA Emulex 4 Gbps
• Four internal 146 GB SAS HDDs
Three IBM System x3690 X5 (7148-AC1)
systems were used as application serv-
ers, each with:
• Two 8-core Xeon 7560 series 2.26
GHz processors
• 128 GB memory
• Two NICs (network interface cards)
• Four internal 146 GB SAS HDDs
The team chose this architecture
because it is standard in many cus-
tomer environments where the
database server and CI are in a high-
availability cluster while the application
servers are not.
DB2 Best Practices
During an installation of SAP software,
implementation teams install and
parameterize DB2 in an optimal way to
guarantee high performance and ease
of use. However, for the critical parame-
ters, customers can change the settings
to values that best fit their needs. There
is a set of features and functionalities
that can be activated or adapted during
installation, such as DB2 compression,
instance memory setting, or deferred
table creation.
DB2 provides an automatic storage
feature, allowing automatic growth in
the size of the database across disk
and file systems. DB2 also offers a high-
availability and disaster-recovery feature
(HADR) together with Tivoli System
Automation for Multiplatforms. Other
settings, such as the DB2 aggregated
registry variable DB2_WORKLOAD, self-
tuning memory management (STMM),
and automatic and real-time statistics
(RTS), are parameterized during SAP
installation to values proven to be optimal
for most of the SAP workload; these
values can be changed later to better
suit specific workload requirements.
Figure 4. Test System Architecture: the SAP ETG injector system ran on an IBM BladeCenter H (30,000 SAPS total; blade 1 hosted the CI and DB, blades 2–4 the application servers). The SAP ETL system consisted of three IBM x3690 X5 application servers (100,000 SAPS total) and one IBM x3850 X5 hosting the ETL DB and CI (50,000 SAPS). All systems were connected via LAN and SAN to an IBM Storwize V7000 with 168 HDDs and 32 SSDs.
SAPS = SAP® Application Performance Standard
This proof-of-concept project worked
mainly with the default values rec-
ommended by SAP, but specific
parameters were modified to improve
performance for the current SAP work-
load and are described in this section.
DB2 Version
The proof of concept used DB2 9.7
Fixpack 3 throughout the project.
Linux ext3 File System
The recommendation was to use the ext3 file system type under Linux.
IBM DB2 Storage Optimization Feature
(DB2 Compression)
With massive amounts of data from SAP
Convergent Invoicing, the IBM DB2 stor-
age optimization feature contributed
to massive storage savings. The SAP
team calculated database size savings
of approximately 60% with DB2 com-
pression – from about 50 TB down to
20 TB.
The largest tables (/1FE/0LTxxxIT,
/1FE/0LTxxxIT00, BALDAT) comprise
around 97% (19 TB) of the total data-
base size (19.6 TB). It was therefore
important to build an optimal com-
pression dictionary and compress the
tables based on those patterns to
achieve the best compression results.
The team also used row and index
compression, including temporary
table compression. For more informa-
tion about how this was done, see the
following article on the SAP Developer
Network site: Best Practice Using
DB2 Compression Feature in SAP
Environment, available online at
www.sdn.sap.com/irj/scn/index?rid=/library/uuid/a02d282d-9074-2d10-5496-ec2c65028a83.
SAPDATA Layout
The database was expected to grow
up to 50 TB. As a result, the data
placement needed to be considered
very carefully in order to provide opti-
mal read-write performance from the
database. Today’s storage systems
are built on multilayer abstraction
levels and the relationship between
file systems and disks – as seen by
the operating system – often does
not reflect the physical conditions.
Therefore, a one-to-one relationship
between SAPDATA file systems and
Linux logical volume was chosen,
without using the Linux logical volume
manager. The team configured a
total of 32 SAPDATA file systems – on
average, one for each available CPU in
the system.
DB2 Parallel I/O
In addition, the team wanted to increase
query performance and optimize the
I/O resource utilization by applying the
DB2_PARALLEL_IO registry variable.
By default, a DB2 database system
places only one prefetch request at a
time to a table space container. This
is done with the understanding that
multiple requests to a single device are
serialized anyway. If a container resides
on an array of disks, there is an oppor-
tunity to start multiple prefetch requests
simultaneously, without serialization.
The parameter DB2_PARALLEL_IO
enables the DB2 system to start prefetch
requests in parallel for a single con-
tainer, which might help to increase I/O
throughput. In this proof-of-concept
project, the parallelism parameter was
set to the value of 2, doubling the num-
ber of I/O servers from 32 to 64. The
DB2_PARALLEL_IO registry variable
also computes the prefetch size for
each table space if the PREFETCHSIZE
option is set to AUTOMATIC, the default
for SAP systems.
Based on the formula shown in Figure 5, a prefetch size of 128 pages was calculated from the settings used in the proof of concept. In that way, for
example, it was very likely that all 128
physical disks would be used during a
table scan, resulting in an optimized I/O
resource utilization.
Figure 5. DB2 Prefetch Configuration. The SAPDATA disk group (MDG) was built up with 128 HDDs; 64 prefetchers serve containers 1 through 32 of a table, so an optimal prefetch size improves query performance:

Extent size * (number of containers) * (physical disks per container) = prefetch size
2 * 32 * 2 = 128

With these settings, a full table scan keeps all 128 disks busy.

How does DB2 determine the number of physical disks per container?
• DB2_PARALLEL_IO not specified: the number of physical disks per container defaults to 1
• DB2_PARALLEL_IO=*: the number of physical disks per container defaults to 6
• DB2_PARALLEL_IO=*:2: the setting used in this PoC

db2set DB2_PARALLEL_IO='*':2
(* = for all table spaces; 2 = level of I/O parallelism)

Old values:
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC (32)
DB2_PARALLEL_IO was not set

New values:
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC (64)
DB2_PARALLEL_IO=*:2
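The prefetch arithmetic can be reproduced in a few lines (a sketch; the factor values are the ones shown in Figure 5, with the physical-disks-per-container factor following the DB2_PARALLEL_IO setting):

```python
# Prefetch-size arithmetic as shown in Figure 5.
def prefetch_size(extent_factor, containers, disks_per_container):
    """Prefetch size in pages = extent factor x containers x disks/container."""
    return extent_factor * containers * disks_per_container

# The PoC setting DB2_PARALLEL_IO=*:2 means 2 physical disks per
# container; with 32 containers this yields the 128 pages from the figure.
print(prefetch_size(2, 32, 2))   # 128
# Without DB2_PARALLEL_IO, the disk factor defaults to 1:
print(prefetch_size(2, 32, 1))   # 64
```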
Key SAP Tables and Indexes
The definitions and storage of the key
tables in SAP Convergent Invoicing were
modified to offer optimal insert, update,
and delete performance and better man-
ageability of large tables. The tables were
split and the “append on” mode set. For
more information about the “append on”
table definition, see p. 16.
Table Space Definitions
For maintenance reasons, the team
moved the billing tables and their
indexes to separate table spaces.
Tables: Tablespace name = ETL#XBD
Indexes: Tablespace name = ETL#XBI
Creating Table Spaces
The table spaces were created by the following commands:
CREATE LARGE TABLESPACE "ETL#XBD"
IN DATABASE PARTITION GROUP SAPNODEGRP_ETL
PAGESIZE 16384 MANAGED BY AUTOMATIC STORAGE
AUTORESIZE YES
INITIALSIZE 320 M
MAXSIZE NONE
EXTENTSIZE 64
PREFETCHSIZE AUTOMATIC
BUFFERPOOL IBMDEFAULTBP
OVERHEAD 7.500000
TRANSFERRATE 0.060000
NO FILE SYSTEM CACHING
DROPPED TABLE RECOVERY OFF;

CREATE LARGE TABLESPACE "ETL#XBI"
IN DATABASE PARTITION GROUP SAPNODEGRP_ETL
PAGESIZE 16384 MANAGED BY AUTOMATIC STORAGE
AUTORESIZE YES
INITIALSIZE 320 M
MAXSIZE NONE
EXTENTSIZE 64
INCREASESIZE 128 M
PREFETCHSIZE AUTOMATIC
BUFFERPOOL IBMDEFAULTBP
OVERHEAD 7.500000
TRANSFERRATE 0.060000
NO FILE SYSTEM CACHING
DROPPED TABLE RECOVERY OFF;
The settings for the following table space parameters were modified against the
SAP default table space definitions:
• INITIALSIZE 320 MB – the initial size of the table space. Since 32 SAPDATA
directories were used, each container had an initial size of 10 MB.
• EXTENTSIZE 64 – every extent contains 64 pages. This reduces how often pages must be allocated to tables, which was useful in this scenario.
• INCREASESIZE 128 MB – reduces how often space must be allocated from the file system to the table space, which improves performance and avoids file system fragmentation.
Moving Tables and Indexes to
New Table Spaces
After table space creation, the tables were moved to the new table spaces with the
help of the online table move tool: sysproc.admin_move_table available since
DB2 9.7:
call sysproc.admin_move_table('SAPSR3', '/1FE/0LT023IT',
'ETL#XBD', 'ETL#XBI', 'ETL#XBD', '', '', '', '', '',
'MOVE');
Optimizing Compression Ratio,
Performing Table Maintenance
During the table move above, DB2 triggered the automatic dictionary creation (ADC)
after a certain amount of data (approximately 20 MB) had been inserted into the new
table.
In this project, the compression rate of the dictionary created by ADC was low,
because of the low filling level of the used tables. To gain a much higher compres-
sion rate, the team executed a billing run to fill up the tables, and then reorganized
the tables, including the re-creation of the dictionary, resetdictionary.
Afterward a manual runstats was performed for each of the tables:
reorg table SAPSR3."/1FE/0LT000IT" resetdictionary;
runstats on table SAPSR3."/1FE/0LT000IT";
Defining Profile Statistics
To keep the performance impact low during the automatic data collection of the large tables with the runstats utility, a statistics profile was registered for each of these tables. As a result, only 1% of the data was read by the autorunstats command:

runstats on table SAPSR3."/1FE/0LT000IT" tablesample system(1)
set profile only;
Table Definition "append on"
To provide better insert performance, the team modified table definitions with the "append on" option. When this option is used, DB2 simply sticks the new row at the end of the table, makes no attempt to search for available space, and makes no effort to preserve any kind of clustering order. Note that reuse of space made available by delete or update activity, which changes row size, does not occur until the table is reorganized.

alter table SAPSR3."/1FE/0LT000IT" append on;
DB2 Log Files and Log Buffer (DB
Parameters)
The DB2 online log files were placed on a separate fast SSD disk to provide high
write performance. Based on the high insert, update, and delete rates, the number
and the size of the log files were increased to the following values:
Old values:
Log file size (4KB) (LOGFILSIZ) = 16380
Number of primary log files (LOGPRIMARY) = 20
Number of secondary log files (LOGSECOND) = 80
Log buffer size (4KB) (LOGBUFSZ) = 1024
New values:
Log file size (4KB) (LOGFILSIZ) = 128000
Number of primary log files (LOGPRIMARY) = 150
Number of secondary log files (LOGSECOND) = 50
Log buffer size (4KB) (LOGBUFSZ) = 16384
The team increased the log buffer size because they had seen log buffer overflows within a database uptime of less than 24 hours, resulting from the large number of data manipulation language (DML) statements (insert, update, and delete) per transaction. This implies that DB2 had to do multiple physical write I/Os to facilitate commit processing of a single transaction, which resulted in degraded response time and throughput.
Because performance degradation can occur when secondary logs are used,
the recommendation – based on the project results – was to set the number of
secondary log files to 0. There was overhead in allocating and formatting the sec-
ondary log files. For best performance, primary log space had to be allocated in
sufficient quantity, such that the allocation of secondary logs was unnecessary.
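The sizing reasoning above can be checked with a little arithmetic (a sketch; the parameter names follow the listing above, and total log space is simply file size × file count):

```python
# Total DB2 log space implied by the old and new settings above.
# LOGFILSIZ is expressed in 4 KB pages.
PAGE_BYTES = 4096

def log_space_gib(logfilsiz, primary, secondary):
    """Total log space in GiB for the given file size and file counts."""
    return logfilsiz * PAGE_BYTES * (primary + secondary) / 2**30

old_gib = log_space_gib(16_380, 20, 80)        # ~6.2 GiB in total
new_gib = log_space_gib(128_000, 150, 50)      # ~97.7 GiB in total
primary_only = log_space_gib(128_000, 150, 0)  # ~73.2 GiB without secondary logs

print(f"old: {old_gib:.1f} GiB, new: {new_gib:.1f} GiB, "
      f"primary only: {primary_only:.1f} GiB")
```

With the new values, the primary log files alone provide more than ten times the old total log space, which is what makes setting LOGSECOND to 0 a viable recommendation.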
Proactive Page Cleaning
This alternate method differs from the default behavior in that page cleaners
behave more proactively in choosing which dirty pages get written out at any
given point in time. This method does not respect the database parameter
chngpgs_thresh.
So this technique is more aggressive
and spreads out the page cleaning
work by writing more frequently, but to
fewer pages at a time. It also improves
the way DB2 agents find free pages in
the buffer pool. This allows the page
cleaners to use less disk I/O bandwidth
over a longer time.
Test runs with the billing workload
demonstrated that as well as a run-
time improvement of up to 6%, Disk
Write KB/Sec was reduced by over
60% (see Figure 6), whereas the Disk
Read KB/Sec and the IO/sec slightly
increased. To activate alternate page
cleaning, the DB2 registry variable
DB2_USE_ALTERNATE_PAGE_
CLEANING had to be set to on.
STMM and Instance Memory
In order to set selected memory areas
in DB2 to a specific size, it is possible
to switch off STMM. This can be an
option in environments where the
memory requirements are known
and the workload rarely changes. For
more information, refer to the following
article on the SAP Developer Network
(SDN) site: The Evolution of the Memory
Model in IBM DB2 for Linux, UNIX, and
Windows. You can find the SDN site and
the article online at:
www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b0aabcc9-afc1-2a10-5091-b5cda33036b0.
In this proof of concept, the team used
the standard settings for STMM, with
the instance memory set to a fixed
value (the default for SAP installations),
and let STMM adapt the remaining memory.
The memory allocated by DB2 is
controlled by a single parameter,
INSTANCE_MEMORY, while two other
important parameters, DATABASE_
MEMORY and APPL_MEMORY, control
the allocation of database-level memory
and application-level memory (within the
limits provided by INSTANCE_MEMORY).
Nearly all other memory configuration
parameters for the different memory
heaps used by DB2 now support an
AUTOMATIC setting.
Without STMM, extensive monitoring
and adjustments would have had to
be performed for the different memory
areas (at each stage during the scal-
ing) to achieve optimal performance for
each of the various workload profiles.
Figure 6. Summary Disk Throughput Without and With Alternate Page Cleaning (average disk throughput over the billing runs; series: disk read KB/s, disk write KB/s, and I/O per sec, without and with alternate page cleaning)
DB2 Instance Memory (DBM Parameter)
The database server had a total of 148 GB of main memory, with the SAP two-tier
architecture database and central instance residing on one server. The DB2 instance
was set to 32 GB of memory, which is approximately 22% of the available memory.
The team performed tests by tripling the memory but did not see any runtime
improvement, and therefore kept the value of 32 GB for all the payload runs.
Size of instance shared memory (4KB) (INSTANCE_MEMORY) -> 8192000
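Since INSTANCE_MEMORY is given in 4 KB pages, the conversion to the roughly 32 GB mentioned above can be sketched as a simple unit conversion (nothing DB2-specific):

```python
# Convert the INSTANCE_MEMORY setting (in 4 KB pages) to gigabytes and
# relate it to the server's 148 GB of main memory.
pages = 8_192_000
gib = pages * 4096 / 2**30  # 31.25 GiB, i.e. roughly 32 GB
share = gib / 148           # ~0.21, i.e. the "approximately 22%" quoted above
print(f"{gib:.2f} GiB = {share:.0%} of 148 GB")
```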
STMM
With STMM turned on, the monitoring and adjustment tasks were done by DB2,
which automatically configured most of the memory settings and adjusted them
at runtime to optimize performance. STMM did not require any DBA intervention to
tune the memory parameters based on workload change.
STMM tuned the following memory consumers within the database instance memory:
• Database locking LOCKLIST and MAXLOCKS
• Package cache size PCKCACHESZ
• Sort memory SHEAPTHRES_SHR and SORTHEAP
• Buffer pools
STMM settings during the billing payload run (ET4):6
Self tuning memory (SELF_TUNING_MEM) = ON
Size of database shared memory (4KB) (DATABASE_MEMORY) = AUTOMATIC(7044530)
Max storage for lock list (4KB) (LOCKLIST) = AUTOMATIC(464480)
Percent of lock lists per application (MAXLOCKS) = AUTOMATIC(97)
Package cache size (4KB) (PCKCACHESZ) = AUTOMATIC(131566)
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = AUTOMATIC(916456)
Sort list heap (4KB) (SORTHEAP) = AUTOMATIC(183291)
db2 "select BPNAME, NPAGES, PAGESIZE from SYSCAT.BUFFERPOOLS"
BPNAME NPAGES PAGESIZE
IBMDEFAULTBP -2 16384
(An NPAGES value of -2 means the buffer pool size is set to AUTOMATIC and managed by STMM.)
runstats and Reorganization
Following best practice, the default
values were kept: on for AUTO_RUNSTATS
and off for automatic reorganization. With
AUTO_RUNSTATS (a periodic back-
ground process) and real-time statistics
(AUTO_STMT_STATS) set to on, the
catalog statistics were kept current so
that the optimizer determined the best
access path to the data for optimal
performance. The runstats profile
definition for the large tables ensured
that the impact of AUTO_RUNSTATS
on system performance was negligible.
(For more information, see the section
about defining profile statistics on p. 16.)
In the following description of the test
project, the term evaluation refers to
the processing that took place when
automatic statistics collection checked
whether or not specific tables required
statistics to be updated, deleted,
or added, and then scheduled a
runstats activity for the out-of-date
tables.
The first evaluation occurred within
two hours of database activation.
Subsequent evaluations occurred
approximately every two hours after
that, as long as the database remained
active.
Figure 7. STMM Alignment of the Database Memory and Buffer Pool (DB memory and buffer pool, in 4 KB pages, over a period of 20 days)
6 For more information about all memory parameter settings for run ET4, see p. 41.
Figure 7 shows an example of how
STMM has aligned the database mem-
ory and the buffer pool over a period of
20 days. The regulation of the memory
within the present time frame was
caused by a test series of billing and
invoicing runs. The graph demonstrates
clearly how STMM can adapt the mem-
ory on the fly, based on the workload.
The graph also shows that from
November 18 to 20, the team executed
a series of billing runs and that,
for example, the buffer pool memory
grew and shrank by around 3 GB.
STMM = Self-tuning memory management
After the load phase, the team reorganized DB2 and ran the runstats utility for all
relevant SAP tables to reach a stable state of the database. This state was saved via
FlashCopy as the “golden backup” and was later used to restore the database to
the defined state for the next series of test runs.
db2 reorg table SAPSR3."/1FE/0LT000IT" resetdictionary
db2 runstats on table sapsr3."/1FE/0LT000IT" for detailed indexes all
Automatic maintenance (AUTO_MAINT) ON
Automatic runstats (AUTO_RUNSTATS) ON
Automatic statement statistics (AUTO_STMT_STATS) ON
Automatic reorganization (AUTO_REORG) OFF
New DBA Cockpit for DB2 from SAP
The DBA cockpit for DB2 from SAP is
an integral part of all SAP solutions and
covers the complete administration and
monitoring of local and remote data-
bases. The team used the DBA cockpit
during the proof of concept to carry out
monitoring and performance analysis.
(For more information, refer to the IBM
e-book SAP DBA Cockpit – Flight Plans
for DB2 LUW Administrator on the IBM
Web site.) The new DBA cockpit from
SAP is a Web Dynpro–based user inter-
face and has been available since the
release of SAP enhancement package
1 for the SAP NetWeaver® technology
platform.
This proof of concept was able to fully
exploit the new cockpit. The following
list provides an overview of the new
monitoring features of IBM DB2 9.7 that
have been added to the cockpit:
• History-based, back-end data collection
• Time-spent monitoring with drill-down
capabilities
• New event monitors (for example, for
locks)
• New object metrics (for example:
index access statistics, and database
container read and write times)
• Monitoring based on DB2 Workload
Management Service classes
• Easy navigation and guided procedures
• Uniform data collection with the DBA
cockpit and database performance
warehouse (DPW) in SAP enhance-
ment package 2 for SAP NetWeaver
• Monitoring of IBM DB2 pureScale
(SAP transport available)
Figure 8. DBA Cockpit from SAP
The time-spent and historical data were
extremely useful in identifying bottle-
necks and interpreting runtime behavior.
SUSE Linux Enterprise Server V11 Tuning
The team carried out the following:
• Chose the SUSE Linux Enterprise
Server (SLES) V11.1 for this proof of
concept, and applied all available
patches at project start (April 2011).
• Explicitly checked the patch level of
the dm-multipath driver.
• Configured /etc/multipath.conf
as follows:
defaults {
polling_interval 30
failback immediate
no_path_retry 5
rr_min_io 100
path_checker tur
user_friendly_names yes
}
devices {
device {
vendor "IBM"
product "2145"
prio alua
path_grouping_policy group_by_prio
}
}
• Used two dual-port EMULEX HBA
cards, with the latest device driver
from EMULEX. The EMULEX admin-
istration tool OneCommand Manager
was installed as well; through this tool,
the default LUN queue length was
changed from 30 to 64 per LUN.
• Installed a dedicated SAN switch
(IBM 48-port model, 4 Gbps) –
noncritical.
• Formatted the file systems as ext3
type; at project start, ext4 was not
certified by SAP and was in
“experimentation” mode.
• Installed the base OS as well as the
swap partition on internal disks –
noncritical.
• Created a physical volume on every
LUN through pvcreate.
• Stored and installed all SAP and DB2
data on the Storwize V7000 system.
The team used a one-to-one relation-
ship between SAPDATA file systems
and LUNs, did not use the Linux
Logical Volume Manager, and chose
the file system layout shown in Figure 9.
During the installation of Linux and DB2,
the team ran several workload tests;
when more files were used, the setup
seemed to perform better. As a result,
the team decided to use 32 file systems,
as many as the number of installed
processors.
Figure 9. File System Layout of Test System (mount points /, /sap/usr, /db2, /db2etl, /sapmnt, /ETL, /sapdata1 through /sapdata32, /db2dump, and /log_dir, distributed across the internal HDD and the V7000 managed disk groups MDG1–MDG3)
Storwize V7000 Setup
Figure 10 shows the basic Storwize
V7000 configuration.
The managed disk group (MDG) SAPDATA
was built from 128 HDDs (2.5-inch,
450 GB, 10,000 RPM). The 16 arrays
were configured as redundant array of
independent disks (RAID) 5, 7+1.
Within Storwize V7000, an internal
RAID array is called a managed disk
(MDisk). In addition, two SSD-managed
disks were put into this MDG, with a
RAID 5, 5+1 configuration; each single
SSD has a capacity of 300 GB.
The DB2 log files were put into a
dedicated MDG, a RAID 10, 2+2 con-
figuration, providing maximum I/O
performance with minimum response
time. An MDG with just one array was
configured to store the EXE data, and
a RAID 10, 1+1 with two HDDs was
chosen, providing 450 GB of usable
storage.
The MDG was configured with the
default extent size of 256 MB (non-
critical parameter); all SAPDATA VDisks
were configured as space-efficient;
each SAPDATA VDisk had a size of 800
GB. The EXE and LOG VDisks were
configured with thick provisioning. In the
test environment, the LOG VDisk had a
size of 200 GB.
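As a rough capacity sketch (my arithmetic, not a figure from the paper): the 32 space-efficient SAPDATA VDisks of 800 GB each provision about 25.6 TB of virtual capacity, while thin provisioning means only extents actually written consume physical storage.

```python
# Virtual capacity provisioned by the space-efficient SAPDATA VDisks.
vdisks = 32
size_gb = 800
provisioned_tb = vdisks * size_gb / 1000  # 25.6 TB of virtual capacity
print(provisioned_tb, "TB provisioned")
```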
MDG SAPDATA, Easy Tier: 16 HDD arrays (RAID 5, 7+1) and 3 SSD arrays (RAID 5, 5+1), providing 32 VDisks
MDG LOG: 1 SSD array (RAID 10, 2+2), 1 VDisk
MDG EXE: 1 HDD array (RAID 10, 1+1), 3 VDisks
Total HDDs: 142, spare: 14; total SSDs: 24, spare: 2
Figure 10. Storwize V7000 Configuration
Switching Between HDD, SSD, and
Easy Tier
The Storwize V7000 allowed changing
the physical storage setup or physical
storage layout while keeping the VDisk
online. This flexibility was utilized to
physically move the data (extents) of
VDisks between managed disks of type
SSD and HDD.
During the upload test, the team used
the following storage configuration to
measure the performance differences:
• HDD-only
• SSD only – this setup was used during
the first load phases, until the maxi-
mum usable SSD capacity of 4.3 TB
was reached
• HDD and SSD combined with Easy Tier
During the billing and invoice scenarios,
the team tested HDD (only) and Easy
Tier, due to capacity limitation on SSD.
The lsvdiskextent command was
used to identify the number of extents
to be moved, and then the
migrateexts command to move a
specific number of extents from one
MDisk to another. These commands
provided the capability to move data from
SSD to HDD and to switch between the
configurations: HDD, SSD, and Easy Tier.
Using Storwize V7000 FlashCopy
as Backup
The team used the Storwize V7000
space-efficient FlashCopy functionality
for data protection. For all VDisks, the
team created thin-provisioned VDisks
with the corresponding sizes, put them
into a single consistency group, and
issued the startfcmap command.
Command sequence for backup:
# mkfcconsistgrp -name BACKUP
# mkfcmap -source SAPDATAxx -target SAPDATAxx_BACKUP -name SAPDATAxx_BACKUP -consistgrp BACKUP ...
# startfcconsistgrp -prep -name BACKUP
Command sequence for restore:
# mkfcconsistgrp -name RESTORE
# mkfcmap -source SAPDATAxx_BACKUP -target SAPDATAxx -name SAPDATAxx_RESTORE -consistgrp RESTORE ...
# startfcconsistgrp -prep -restore -name RESTORE
Only the command sequences for the
SAPDATA VDisks were listed (step 2
in the list); the VDisks LOG and EXE
needed to be put into the consistency
groups as well.
Scenario Description
To review, the proof-of-concept project
took into account the following require-
ments of a large telecommunications
company: 50 million active customers
producing a total of 1.5 billion BITs per
day. Each BIT represents one phone
call or SMS. Since every customer gets
a monthly invoice, the phone company
has to send out bills to 2.5 million cus-
tomers every workday.
These typical requirements resulted in
the main KPIs of this test. In less than
18 hours, the performance test had to
accomplish the following tasks:
• Upload of 1.5 billion BITs, with a minimum
of 100,000 BITs uploaded per second
• Billing of 2.5 million business partners,
which included the aggregation of
2.275 billion BITs
• Invoicing of 2.5 million business partners
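These KPI figures are mutually consistent, as a quick back-of-the-envelope check shows (a sketch that assumes a uniform call volume and a roughly 30-day billing cycle):

```python
# Relating the daily BIT volume to the per-invoice aggregation volume.
customers = 50_000_000
bits_per_day = 1_500_000_000
billed_per_workday = 2_500_000
aggregated_bits = 2_275_000_000

bits_per_customer_per_day = bits_per_day / customers       # 30 BITs per customer per day
bits_per_invoice = aggregated_bits / billed_per_workday    # 910 BITs per monthly invoice
implied_cycle_days = bits_per_invoice / bits_per_customer_per_day  # ~30.3 days

print(bits_per_customer_per_day, bits_per_invoice, round(implied_cycle_days, 1))
```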
The main objective of the test was to
prove 1) that the solution was capable of
handling this volume within the 18-hour
period and 2) that it scaled. Even though
the team was able to prove capability and
scalability during this proof-of-concept
project, further performance improve-
ment in other scenarios would depend
on the hardware components used.
For the test, the team used 20 different
BIT classes and tested two different
scenarios. In scenario 1, all the activities
were done sequentially, mainly to cap-
ture data individually for each activity.
In the more realistic scenario 2, billing
and invoicing still ran sequentially, but
there was, at the same time, a constant
stream of upload, running in parallel.
The steps in scenario 1 were executed
in sequence.
Figure 11. Test Scenario 1 (concurrent processes over time: upload, billing, and invoicing executed in sequence)
In scenario 2, billing and invoicing were
run sequentially but in parallel with
upload.
Scenario Execution
The following section describes how
the different runs were started.
Starting the SAP Batch Jobs
The tests were mainly started by
launching the appropriate SAP trans-
actions, FKKBIX_MA for billing and
FKKINV_MA for invoicing. Only for the
creation and transfer of the BITs did the
team need to provide a special report,
which ran in the ETG system.
All three activities were based on the
mass activity framework that permits
an easy way to launch multiple jobs in
parallel and to distribute them over the
available servers. Though possible, no
batch job ran on the central instance;
this had the benefit of clearly separating
database load and SAP application
server load. The load that the enqueue
server puts onto the central instance is,
in this case, negligible.
Figure 12. Test Scenario 2 (concurrent processes over time: upload running in parallel with billing and invoicing)
The upload activity was started on
the ETG system. The launched batch
jobs created BITs using a report, which
was generated together with the cor-
responding BIT class to permit easy
testing for customers. The BITs thus
created were then sent via remote func-
tion call (RFC) to the corresponding
interface of the BIT class in the ETL
system, where they were written to the
database. Because synchronous BAPI®
programming interface calls were used,
the number of open RFC connections
could not exceed the number of batch
jobs in ETG, which was 150 for all tests.
Billing as well as invoicing was started
directly in the ETL system. The job dis-
tribution of scenario 1 was as follows:
Server      Jobs during billing   Jobs during invoicing
ETL App 1   25                    42
ETL App 2   25                    41
ETL App 3   25                    42
The upload used 150 parallel threads
on ETG; the called ETL system used a
load balancing mechanism to distribute
the RFC calls equally over its applica-
tion servers.
For scenario 2, the number of jobs was reduced because here the upload of BITs
continued during billing and invoicing:
Server      Jobs during billing   Jobs during invoicing
ETL App 1   17                    14
ETL App 2   17                    13
ETL App 3   17                    14
ETL_CI      0                     10
In this scenario, the upload on ETG
used only 34 concurrent threads. The
distribution of the RFC calls on the
receiving ETL system was done in the
same way as in scenario 1.
The mass activity framework permits,
through events, the performance of
special actions when jobs are launched
or finished, and this was used to automati-
cally start and stop some monitors:
• STAD records written at the end of the
run (SAP)
• SM37 information written at the end of
the run (SAP)
• /SDF/MON capturing information dur-
ing the run (SAP)
• nmon (OS, all servers)
• db2stat (DB2 LUW, DB server only)
• vmstat -xyz during the run (OS, all
servers)
• iostat -xyz during the run (OS, all
servers)
Clearly, some of these monitors provide
redundant information, but as each tool
provides the information in a certain
context, it makes the evaluation easier.
DB2 Monitoring
For DB2 monitoring, the team used
the db2top utility in batch mode, the
database snapshot monitor, and the
DBA cockpit. Furthermore, the team
collected the database and database
manager configuration.
db2top
db2top -d <sid> -i <interval> -m <duration> -b <suboption> -o <output file>
b: batch mode
suboption: t = tablespaces, d = database, m = memory, b = bufferpool
DB2 Snapshot (Before and After Each Run)
db2 get snapshot for database on <SID>
Configuration Information
db2 get dbm cfg
db2 get db cfg for <SID>
db2set -all
During the proof of concept, SAP
released a new ad hoc data collection
tool for DB2 to collect a history of DB
KPIs over a certain period. For perfor-
mance monitoring, the team used this
tool, which is provided as an attach-
ment to SAP Note 1603507 in the SAP
Notes tool. The data collection tool has
a smaller footprint than the db2top
utility and therefore generates less per-
formance overhead.
OS Monitoring
For OS monitoring (processor, disk,
network, and so on), the team used the
IBM tool nmon, which was started with
the SAP batch jobs. See the following
section for detailed results. (For more
information about nmon, see
www.ibm.com/developerworks/aix
/library/au-analyze_aix/.)
Storwize V7000 Monitoring
IBM provides the Perl-based moni-
toring tool svcmon, which delivers
very detailed performance analyses of
the Storwize V7000 system. This tool
runs continuously and creates reports
for a specified duration. Results for the
performance test are detailed in the
section that follows. (For more informa-
tion about svcmon, see
www-03.ibm.com/support/techdocs
/atsmastr.nsf/WebIndex/PRS3177.)
Results and Achievements
This section provides details of the run-
times of the three different SAP batch
jobs that were needed to process all
billable items. The first part describes
the results of scenario 1, where upload,
billing, and invoicing run in sequence.
The second part presents the results of
scenario 2.
Upload
The upload was tested in three different
configurations of the storage system:
• HDD
All data files were placed on HDD,
and log files on SSD.
• SSD
All data and log files were placed
on SSD.
• Easy Tier
Easy Tier was switched on for the data
files (HDD and SSD within one MDG);
log files were placed on SSD in a
separate MDG.
Each of the tests was run twice, and
the following table gives the best results
achieved for each test.
Test        Total runtime   Throughput (BITs per second)
HDD         4:37:00         90,253
SSD         4:02:59         102,888
Easy Tier   4:18:50         96,587
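The runtimes and throughputs can be cross-checked against the fixed volume of 1.5 billion BITs (a consistency sketch; the helper below is illustrative, not part of the test tooling):

```python
# Runtime implied by each measured upload throughput for 1.5 billion BITs.
BITS = 1_500_000_000

def implied_runtime(bits_per_second):
    s = round(BITS / bits_per_second)
    return f"{s // 3600}:{s % 3600 // 60:02d}:{s % 60:02d}"

print(implied_runtime(90_253))   # 4:37:00  (HDD)
print(implied_runtime(102_888))  # 4:02:59  (SSD)
print(implied_runtime(96_587))   # 4:18:50  (Easy Tier)
```

Note that 96,587 BITs per second implies a runtime of 4:18:50, which is also the Easy Tier upload runtime listed in the total-runtime table later in this section.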
Figure 13 illustrates that the number of
BITs loaded per time interval did not
deteriorate over time.
Figure 13. Scalability of Upload Run (cumulative BITs uploaded over runtime [hh:mm:ss], SSD run)
Billing
After the upload was run, the database
size was 20 TB – too large to be stored
on SSD only. Therefore the data file
(SAPDATA) was stored either on HDD-
only (and test runs are called HDD) or
with Easy Tier enabled (hot data stored
on SSD, and these runs are called ET).
During these runs, the log data was
stored on a separate SSD-managed
disk group. The runtimes were:
Test   Total runtime   Throughput (bills per second)   Throughput (BITs per second)
HDD    17:49:25        39                              38,572
ET     11:27:20        61                              60,015
Figure 14. Scalability of Billing Run (contract accounts billed over runtime [hh:mm:ss], ET run)
It is worth noting that there was a much
higher throughput when the ET configu-
ration was used, compared to HDD. The
number of objects processed scaled in
a linear way with the runtime of the job,
and there was no measurable degrada-
tion in throughput.
Another important aspect was the lin-
ear scalability according to the number
of concurrent jobs. This was required
to increase the throughput whenever
needed. Figure 15 shows that this
requirement was fulfilled in this case.
Figure 15. Scalability with Respect to Concurrent Jobs (bills per second over the number of concurrent jobs, with a linear trend line)
Invoicing
Invoicing was tested with HDD and Easy
Tier for the same reasons as stated in
the billing section. The resulting run-
times for each scheme were as follows:
Test   Total runtime   Throughput (invoices per second)
HDD    0:35:49         1,163
ET     0:35:27         1,175
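As with the upload, the per-second rates in the billing and invoicing tables follow from dividing the 2.5 million business partners by the measured runtimes (a consistency sketch):

```python
# Bills/invoices per second = 2.5 million business partners / runtime.
PARTNERS = 2_500_000

def per_second(h, m, s):
    return PARTNERS / (h * 3600 + m * 60 + s)

print(round(per_second(17, 49, 25)))  # 39   bills/s, HDD billing
print(round(per_second(11, 27, 20)))  # 61   bills/s, ET billing
print(round(per_second(0, 35, 49)))   # 1163 invoices/s, HDD invoicing
print(round(per_second(0, 35, 27)))   # 1175 invoices/s, ET invoicing
```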
As expected, the runtimes with Easy
Tier were slightly better than those with
HDD only. The small difference here –
compared to the two other steps – can
be explained by the fact that the billing
documents and the resulting invoic-
ing documents – compared to the size
of all the BITs together – were rather
small. Therefore, a lot of the informa-
tion required for invoicing was very
likely still in the cache of the database
system; as a result, the different access
times between HDDs and SSDs are not
important here.
As Figure 16 shows, the application scaled with the number of objects processed.
Figure 16. Scalability of Invoicing Run (contract accounts invoiced over runtime [hh:mm:ss], ET run)
To reach still higher throughput figures, it was important that the number of
invoices created per second scaled with the number of concurrent jobs. Figure 17
shows how this was the case.
Figure 17. Scalability of Invoicing with Respect to Concurrent Jobs (invoices per second over the number of concurrent jobs, with a linear trend line)
Total Runtime
The three runtime results with Easy Tier
totaled 16:21:37 hours. IBM Easy Tier
reduced the total runtime by 6:40:37
hours, or 29%.
Test        Runtime (ET)   Runtime factors   Runtime (HDD)
Upload      04:18:50       7                 04:37:00
Billing     11:27:20       19                17:49:25
Invoicing   00:35:27       1                 00:35:49
∑           16:21:37       27                23:02:14
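The headline 29% figure follows directly from the totals in the table (a quick check):

```python
# Easy Tier vs. HDD total runtime from the table above.
def seconds(h, m, s):
    return h * 3600 + m * 60 + s

hdd = seconds(23, 2, 14)
et = seconds(16, 21, 37)
saved = hdd - et
print(f"saved: {saved // 3600}:{saved % 3600 // 60:02d}:{saved % 60:02d}")  # 6:40:37
print(f"reduction: {saved / hdd:.0%}")                                      # 29%
```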
Scenario 2
Upload was run during billing and
invoicing to prove this was a possible
option. This scenario was measured
only with the IBM Easy Tier option, and
the results were as follows:
Test Runtime (ET)
Billing 13:23:47
Invoicing 01:53:09
∑ 15:16:58
Upload is not included in this table as
it ran concurrently with the two other
activities. It required less time to fin-
ish and didn’t influence the total time
required for the processing.
Compared with scenario 1, scenario 2
needed one hour less – due to reduced
concurrency of parallel jobs of the same
activity. Usually one activity writes to
a certain set of tables, while the other
activity reads from them. For example,
assuming activity A wants to write to a
table, it has to set locks on certain enti-
ties like data pages. If another job of the
same activity wants to access the same
data page, it has to wait. But if there
are fewer jobs, the likelihood of a wait
goes down and wait time is reduced.
As activity B only wants to read from
the data blocks, it is not affected by the
write lock. So the overall throughput is
higher.
Best Practices and Conclusions
System X5 and SLES 11
During the proof of concept, the IBM
System X5 servers that were used
turned out not to be the limiting fac-
tor. In fact, the processor and memory
utilization was typically between
60% and 70% for the ETG
database server. Most likely, an increase
in processor capacity or memory would
not have led to an additional, significant
reduction in runtime.
According to the nmon data, the 1 Gb
Ethernet link of the database server
was almost at the limit of its bandwidth.
Bonding two Ethernet cards did not
reduce the IP response time – this is a
limitation of the 1 Gb architecture.
Instead, a 10 Gb Ethernet network
is recommended.
Storwize V7000
The proof of concept demonstrated
that the IBM Storwize V7000 is capable
of both handling the given workload
and achieving all KPIs. The detailed
illustrations of performance data in this
section show that when the Storwize
V7000 system was under load, not
all hot data could be placed on SSDs
with the configuration used (24 SSDs).
For the given KPIs, the team recom-
mended a total of three SSD MDGs for
SAPDATA, in a RAID 5, 7+1 configura-
tion, leading to 26 SSDs, including two
spares.
If the team had required further runtime
reduction, they would have had to install
more SSDs – for example: four MDisk
arrays with a RAID 5, 7+1 configura-
tion, plus two spare drives, leading to
34 SSDs in total. It is not necessary to
place LOG files on SSDs; HDDs would
do the work just as well, in which case
the team would recommend an HDD
RAID 10, 8+8 configuration on a
separate MDG.
However, during the performance test,
the team was not able to use this con-
figuration, because all available HDDs
were needed to build the required
capacity of 50 TB (online DB and
backup).
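The SSD counts quoted here follow from the array geometry (RAID 5, 7+1 means eight drives per array; spares come on top) – a small sketch:

```python
# Drive counts behind the recommended SSD configurations.
def total_ssds(arrays, drives_per_array=8, spares=2):
    return arrays * drives_per_array + spares

print(total_ssds(3))  # 26 SSDs - recommendation for the given KPIs
print(total_ssds(4))  # 34 SSDs - for further runtime reduction
```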
The team used the I/O simulation tool
ndisk before the proof of concept was
started – to verify this performance
assumption about the Storwize V7000
system. (For more information about the
nstress tool kit and ndisk, see
www.ibm.com/developerworks/wikis
/display/WikiPtype/nstress.)
Value of Storwize V7000 Virtualization
and Easy Tier
The combination of virtualization and
Easy Tier functionality of Storwize
V7000 eased storage administration
significantly. After setting up the storage
pools, the team needed to tune nothing
else. Also, the combination of these two
functionalities eliminated, by design,
the possibility of storage performance
bottlenecks.
In addition, the Storwize V7000 storage
virtualization functionality allowed
changing the physical data placement
of a VDisk (LUN) while keeping the
VDisk (LUN) online. This was done with
just a single command.
Detailed Performance Results
The following section shows the detailed
performance data for the billing run. In
order to display more detailed perfor-
mance data, the information is drawn
from only a 20-minute time span –
selected from a total of more than
10 hours.
HDD-Only Run “Billing”
The following figures show the per-
formance data for the HDD-only run
(SAPDATA placed on HDD).
Figure 18. DB Server File Systems I/O Performance (file system I/O in MB/s over a 20-minute span; series: SAPDATA read, SAPDATA write, SAPDATA total, LOG write)
Figure 19. DB Server CPU Utilization (CPU workload as % utilization; series: Idle%, Wait%, Sys%, User%)
Figure 20. DB Server Ethernet Performance (Ethernet I/O in MB/s; series: IP write, IP read)
Figure 24. V7000 VDisk I/O Response Times (msec; series: LOG write, plus read and write for SAPDATA10–SAPDATA12)
Figure 25. V7000 MDisk I/O Response Times (msec; series: SSD MD LOG write, plus read and write for HDD MD1–MD3)
The write and read processes compete
for the HDD resource. If the non-volatile
RAM (NV RAM) fills up, writes gain a
higher priority, resulting in lower read
performance. It is no surprise that all
performance figures show the same
utilization characteristics, whether
measured through nmon on the OS or
through svcmon on the storage system.
To keep the number of graphics
manageable, the figures above include
just three printouts for SAPDATA
VDisks; likewise, only 3 of the 11
SAPDATA MDisks are included.
Figure 26. DB Server File Systems I/O Performance (file systems I/O in MB/s over the 20-minute window: SAPDATA read, SAPDATA write, SAPDATA total, LOG write)
Figure 27. DB Server CPU Workload (% utilization: User, Sys, Wait, Idle)
Figure 28. DB Server Ethernet I/O Performance (Ethernet I/O in MB/s: IP read, IP write)
Easy Tier Run “Billing”
The following figures show the performance
data for the Easy Tier-only run
(SAPDATA placed on HDD and SSD).
Figure 33. V7000 MDisk I/O Response Times (msec: SSD LOG write; SSD MD1–3 read and write; HDD MD4–6 read and write)
In contrast to the HDD billing run, the
resource utilization of Easy Tier was
almost constant and did not waver.
The read and write processes were
not competing against the storage
resource. Compared to HDD, SSD
technology had no need to spend time
positioning the head over the requested
track. Regardless of what data block
needed to be accessed, the data
access time was always constant with
SSD – less than a millisecond for both
reads and writes.
If data was stored on SSD, then no stor-
age cache was involved, including NV
RAM. Data was read and written to SSD
directly, leading to more available cache
capacity for the remaining data on HDD.
As a result, the resource utilization over
time was constant, and I/O throughput
increased from approximately 140 MB/s
for HDD-only to 250 MB/s for the Easy
Tier configuration. There was also a
significant reduction in I/O response
time for VDisks and MDisks.
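As a quick sanity check on these figures (an illustrative calculation added here, not part of the original paper), the relative throughput gain of the Easy Tier configuration over the HDD-only run works out to roughly 79%:

```python
# Approximate sustained file-system I/O throughput reported for the
# two billing runs (MB/s), taken from the text above.
hdd_only_mbps = 140.0
easy_tier_mbps = 250.0

# Relative improvement of Easy Tier over the HDD-only configuration
improvement = (easy_tier_mbps - hdd_only_mbps) / hdd_only_mbps
print(f"Easy Tier throughput gain: {improvement:.0%}")
```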
From a DB2 perspective, the buffer
pool's average total physical read time
improved impressively, by over 60%,
due to the better I/O throughput and
response time of the Easy Tier
implementation compared to HDD.
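The "over 60%" figure can be reproduced from the average total physical read times reported in Figure 34 (a small illustrative check, not part of the original paper):

```python
# DB2 buffer pool average total physical read times from Figure 34 (msec)
hdd_avg_total_read_ms = 24.96   # HDD-only billing run
et_avg_total_read_ms = 9.97     # Easy Tier billing run

# Relative improvement, confirming the "over 60%" claim in the text
improvement = 1 - et_avg_total_read_ms / hdd_avg_total_read_ms
print(f"Average total physical read time improved by {improvement:.1%}")
```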
Figure 32. V7000 VDisk I/O Response Times (msec: LOG write; SAPDATA10–12 read and write)
This report focuses on the buffer pool
read times because they resulted in
read requests going directly to disk,
whereas the write requests were handled
by the cache of the storage subsystem,
which stayed the same for the HDD and
Easy Tier runs. Thus, the write times did
not vary between the tests, whereas the
read times varied heavily, as shown in Figure 34.
MDisk Heat Distribution
The following table shows the heat
distribution of the SAPDATA MDisks and
the amount of data that was “hot” and
“cool.” This data was collected during
the ET billing run with the IBM Storage
Tier Advisor Tool. (For more information,
see www-304.ibm.com/support
/docview.wss?uid=ssg1S4000935.)
Figure 34. DB2 Buffer Pool Physical Reads (in Milliseconds) During Billing Runs on HDD and Easy Tier

DB2 buffer pool physical reads (msec) – billing runs
                avg_total_reads  avg_async_reads  avg_sync_reads
Billing – HDD   24.96            21.41            25.04
Billing – ET    9.97             1.19             12.53
Volume ID  Configured size  Capacity on SSD  Heat distribution (cool / hot)
0x0027     665.8 G          168.3 G          481.8 G / 184.0 G
0x0026     649.5 G          165.8 G          469.0 G / 180.5 G
0x0025     647.5 G          161.5 G          470.0 G / 177.5 G
0x0024     648.3 G          171.0 G          461.8 G / 186.5 G
0x0023     649.8 G          163.3 G          471.0 G / 178.8 G
0x0022     651.3 G          170.0 G          465.5 G / 185.8 G
0x0021     647.5 G          156.5 G          475.8 G / 171.8 G
0x0020     652.0 G          181.8 G          455.3 G / 196.8 G
0x001f     653.5 G          162.5 G          475.5 G / 178.0 G
0x001e     647.5 G          168.8 G          463.5 G / 184.0 G
0x001d     647.5 G          163.3 G          469.3 G / 178.3 G
0x001c     650.8 G          171.3 G          463.5 G / 187.3 G
0x001b     647.5 G          161.8 G          469.0 G / 178.5 G
0x001a     650.3 G          163.8 G          472.0 G / 178.3 G
0x0019     649.5 G          148.8 G          485.0 G / 164.5 G
0x0018     647.5 G          171.8 G          459.3 G / 188.3 G
0x0017     647.5 G          162.0 G          469.5 G / 178.0 G
0x0016     650.3 G          166.8 G          467.8 G / 182.5 G
0x0015     648.3 G          162.0 G          470.3 G / 178.0 G
0x0014     649.3 G          169.8 G          465.3 G / 184.0 G
0x0013     649.0 G          153.3 G          480.8 G / 168.3 G
0x0012     649.0 G          174.5 G          459.3 G / 189.8 G
0x0011     649.0 G          173.5 G          460.5 G / 188.5 G
0x0010     647.5 G          180.3 G          450.3 G / 197.3 G
0x000f     647.5 G          174.0 G          459.3 G / 188.3 G
0x000e     647.5 G          175.3 G          457.5 G / 190.0 G
0x000d     649.8 G          171.3 G          463.0 G / 186.8 G
0x000c     647.5 G          110.5 G          522.0 G / 125.5 G
0x000b     647.5 G          165.0 G          466.3 G / 181.3 G
0x000a     647.5 G          165.8 G          465.8 G / 181.8 G
0x0009     647.5 G          164.8 G          467.5 G / 180.0 G
0x0008     651.0 G          63.8 G           570.5 G / 80.5 G
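The two heat-distribution values in each row partition the configured capacity, and the hot amount tracks the capacity placed on SSD, which is what Easy Tier is expected to do. A small spot check over a few rows (illustrative only; the cool/hot reading of the two columns follows the text's description of the table):

```python
# A few rows from the heat-distribution table above:
# volume ID -> (configured GB, GB on SSD, cool GB, hot GB)
volumes = {
    "0x0027": (665.8, 168.3, 481.8, 184.0),
    "0x0010": (647.5, 180.3, 450.3, 197.3),
    "0x0008": (651.0, 63.8, 570.5, 80.5),
}

for vol_id, (configured, on_ssd, cool, hot) in volumes.items():
    # Cool + hot should account for the whole configured capacity
    # (allow 0.5 GB slack for the table's one-decimal rounding)
    assert abs((cool + hot) - configured) < 0.5
    print(f"{vol_id}: {hot / configured:.0%} hot, {on_ssd / configured:.0%} on SSD")
```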
DB2 Configuration Parameters for Run ET4
DB2 Registry (db2set -all)
[e] DB2DBDFT=ETL
[i] DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=YES [DB2_WORKLOAD]
[i] DB2_BLOCKING_WITHHOLD_LOBLOCATOR=NO [DB2_WORKLOAD]
[i] DB2_AGENT_CACHING_FMP=OFF [DB2_WORKLOAD]
[i] DB2_TRUST_MDC_BLOCK_FULL_HINT=YES [DB2_WORKLOAD]
[i] DB2_CREATE_INDEX_COLLECT_STATS=YES [DB2_WORKLOAD]
[i] DB2_ATS_ENABLE=YES [DB2_WORKLOAD]
[i] DB2_RESTRICT_DDF=YES [DB2_WORKLOAD]
[i] DB2_DUMP_SECTION_ENV=YES [DB2_WORKLOAD]
[i] DB2_OPT_MAX_TEMP_SIZE=10240 [DB2_WORKLOAD]
[i] DB2_USE_FAST_PREALLOCATION=OFF
[i] DB2_WORKLOAD=SAP
[i] DB2_TRUNCATE_REUSESTORAGE=IMPORT [DB2_WORKLOAD]
[i] DB2_MDC_ROLLOUT=DEFER [DB2_WORKLOAD]
[i] DB2_ATM_CMD_LINE_ARGS=-include-manual-tables [DB2_WORKLOAD]
[i] DB2_SKIPINSERTED=YES [DB2_WORKLOAD]
[i] DB2_VIEW_REOPT_VALUES=YES [DB2_WORKLOAD]
[i] DB2_OBJECT_TABLE_ENTRIES=65532 [DB2_WORKLOAD]
[i] DB2_OPTPROFILE=YES [DB2_WORKLOAD]
[i] DB2_USE_ALTERNATE_PAGE_CLEANING=ON
[i] DB2_IMPLICIT_UNICODE=YES [DB2_WORKLOAD]
[i] DB2STMM=APPLY_HEURISTICS:YES [DB2_WORKLOAD]
[i] DB2_INLIST_TO_NLJN=YES [DB2_WORKLOAD]
41. SAP and IBM Demonstrate Capability of Handling High Billing Volume in a Telecommunications Scenario
41
[i] DB2_MINIMIZE_LISTPREFETCH=YES [DB2_WORKLOAD]
[i] DB2_REDUCED_OPTIMIZATION=4,INDEX,JOIN,NO_TQ_FACT,NO_HSJN_BUILD_FACT,STARJN_CARD_SKEW,NO_SORT_MGJOIN,CART OFF,CAP OFF [DB2_WORKLOAD]
[i] DB2NOTIFYVERBOSE=YES [DB2_WORKLOAD]
[i] DB2TERRITORY=1
[i] DB2_INTERESTING_KEYS=YES [DB2_WORKLOAD]
[i] DB2_EVALUNCOMMITTED=YES [DB2_WORKLOAD]
[i] DB2_LOGGER_NON_BUFFERED_IO=ON
[i] DB2_EXTENDED_OPTIMIZATION=NLJOIN_KEYCARD,IXOR [DB2_WORKLOAD]
[i] DB2_ANTIJOIN=EXTEND [DB2_WORKLOAD]
[i] DB2COMPOPT=327685,131776 [DB2_WORKLOAD]
[i] DB2ATLD_PORTS=60000:65000
[i] DB2ENVLIST=INSTHOME SAPSYSTEMNAME dbs_db6_schema DIR_LIBRARY LD_LIBRARY_PATH
[i] DB2COMM=TCPIP [DB2_WORKLOAD]
[i] DB2_PARALLEL_IO=*:2
[g] DB2FCMCOMM=TCPIP4
[g] DB2SYSTEM=coe-he-16
[g] DB2INSTDEF=db2etl
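Registry variables like those listed above are maintained with the db2set command (a generic sketch; the value shown is taken from the listing, and many variables only take effect at the next db2start):

```shell
# Set an instance-level ([i]) DB2 registry variable
db2set DB2_WORKLOAD=SAP

# Dump all registry settings, producing a listing like the one above
db2set -all
```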
Database Manager Configuration (db2 get dbm cfg)
Node type Enterprise Server Edition
Database manager configuration release level 0x0d00
CPU speed (millisec/instruction) (CPUSPEED) 2.68E-01
Communications bandwidth (MB/sec) (COMM_BANDWIDTH) 1.00E+08
Max number of concurrently active databases (NUMDB) 8
Federated Database System Support (FEDERATED) NO
Transaction processor monitor name (TP_MON_NAME)
Default charge-back account (DFT_ACCOUNT_STR)
Java Development Kit installation path (JDK_PATH) /db2/db2etl/sqllib/
Diagnostic error capture level (DIAGLEVEL) 3
Notify Level (NOTIFYLEVEL) 3
Diagnostic data directory path (DIAGPATH) /db2/ETL/db2dump
Size of rotating db2diag & notify logs (MB) (DIAGSIZE) 1000
Default database monitor switches
Buffer pool (DFT_MON_BUFPOOL) ON
Lock (DFT_MON_LOCK) ON
Sort (DFT_MON_SORT) ON
Statement (DFT_MON_STMT) ON
Table (DFT_MON_TABLE) ON
Timestamp (DFT_MON_TIMESTAMP) ON
Unit of work (DFT_MON_UOW) ON
Monitor health of instance and databases (HEALTH_MON) OFF
SYSADM group name (SYSADM_GROUP) DBETLADM
SYSCTRL group name (SYSCTRL_GROUP) DBETLCTL
SYSMAINT group name (SYSMAINT_GROUP) DBETLMNT
SYSMON group name (SYSMON_GROUP) DBETLMON
Client Userid-Password Plugin (CLNT_PW_PLUGIN)
Client Kerberos Plugin (CLNT_KRB_PLUGIN)
Group Plugin (GROUP_PLUGIN)
GSS Plugin for Local Authorization (LOCAL_GSSPLUGIN)
Server Plugin Mode (SRV_PLUGIN_MODE) UNFENCED
Server List of GSS Plugins (SRVCON_GSSPLUGIN_LIST)
Server Userid-Password Plugin (SRVCON_PW_PLUGIN)
Server Connection Authentication (SRVCON_AUTH) NOT_SPECIFIED
Cluster manager (CLUSTER_MGR)
Database manager authentication (AUTHENTICATION) SERVER_ENCRYPT
Alternate authentication (ALTERNATE_AUTH_ENC) NOT_SPECIFIED
Cataloging allowed without authority (CATALOG_NOAUTH) NO
Trust all clients (TRUST_ALLCLNTS) YES
Trusted client authentication (TRUST_CLNTAUTH) CLIENT
Bypass federated authentication (FED_NOAUTH) NO
Default database path (DFTDBPATH) /db2/ETL
Database monitor heap size (4KB) (MON_HEAP_SZ) AUTOMATIC(90)
Java Virtual Machine heap size (4KB) (JAVA_HEAP_SZ) 2048
Audit buffer size (4KB) (AUDIT_BUF_SZ) 0
Size of instance shared memory (4KB) (INSTANCE_MEMORY) 8192000
Backup buffer default size (4KB) (BACKBUFSZ) 1024
Restore buffer default size (4KB) (RESTBUFSZ) 1024
Agent stack size (AGENT_STACK_SZ) 1024
Sort heap threshold (4KB) (SHEAPTHRES) 0
Directory cache support (DIR_CACHE) NO
Application support layer heap size (4KB) (ASLHEAPSZ) 16
Max requester I/O block size (bytes) (RQRIOBLK) 65000
Query heap size (4KB) (QUERY_HEAP_SZ) 1000
Workload impact by throttled utilities(UTIL_IMPACT_LIM) 10
Priority of agents (AGENTPRI) SYSTEM
Agent pool size (NUM_POOLAGENTS) AUTOMATIC(100)
Initial number of agents in pool (NUM_INITAGENTS) 5
Max number of coordinating agents (MAX_COORDAGENTS) AUTOMATIC(200)
Max number of client connections (MAX_CONNECTIONS) AUTOMATIC(MAX_COORDAGENTS)
Keep fenced process (KEEPFENCED) NO
Number of pooled fenced processes (FENCED_POOL) AUTOMATIC(MAX_COORDAGENTS)
Initial number of fenced processes (NUM_INITFENCED) 0
Index re-creation time and redo index build (INDEXREC) RESTART
Transaction manager database name (TM_DATABASE) 1ST_CONN
Transaction resync interval (sec) (RESYNC_INTERVAL) 180
SPM name (SPM_NAME)
SPM log size (SPM_LOG_FILE_SZ) 256
SPM resync agent limit (SPM_MAX_RESYNC) 20
SPM log path (SPM_LOG_PATH)
TCP/IP Service name (SVCENAME) sapdb2ETL
Discovery mode (DISCOVER) SEARCH
Discover server instance (DISCOVER_INST) ENABLE
SSL server keydb file (SSL_SVR_KEYDB)
SSL server stash file (SSL_SVR_STASH)
SSL server certificate label (SSL_SVR_LABEL)
SSL service name (SSL_SVCENAME)
SSL cipher specs (SSL_CIPHERSPECS)
SSL versions (SSL_VERSIONS)
SSL client keydb file (SSL_CLNT_KEYDB)
SSL client stash file (SSL_CLNT_STASH)
Maximum query degree of parallelism (MAX_QUERYDEGREE) ANY
Enable intra-partition parallelism (INTRA_PARALLEL) NO
Maximum Asynchronous TQs per query (FEDERATED_ASYNC) 0
No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS) AUTOMATIC(4096)
No. of int. communication channels (FCM_NUM_CHANNELS) AUTOMATIC(2048)
Node connection elapse time (sec) (CONN_ELAPSE) 10
Max number of node connection retries (MAX_CONNRETRIES) 5
Max time difference between nodes (min) (MAX_TIME_DIFF) 60
db2start/db2stop timeout (min) (START_STOP_TIME) 10
Database Configuration (db2 get db cfg for etl)
Database Configuration for Database ETL
Database configuration release level 0x0d00
Database release level 0x0d00
Database territory en_US
Database code page 1208
Database code set UTF-8
Database country/region code 1
Database collating sequence IDENTITY_16BIT
Number compatibility OFF
Varchar2 compatibility OFF
Date compatibility OFF
Database page size 16384
Dynamic SQL Query management (DYN_QUERY_MGMT) DISABLE
Statement concentrator (STMT_CONC) OFF
Discovery support for this database (DISCOVER_DB) ENABLE
Restrict access NO
Default query optimization class (DFT_QUERYOPT) 5
Degree of parallelism (DFT_DEGREE) ANY
Continue upon arithmetic exceptions (DFT_SQLMATHWARN) NO
Default refresh age (DFT_REFRESH_AGE) 0
Default maintained table types for opt (DFT_MTTB_TYPES) SYSTEM
Number of frequent values retained (NUM_FREQVALUES) 10
Number of quantiles retained (NUM_QUANTILES) 20
Decimal floating point rounding mode (DECFLT_ROUNDING) ROUND_HALF_EVEN
Backup pending NO
All committed transactions have been written to disk NO
Rollforward pending NO
Restore pending NO
Multi-page file allocation enabled YES
Log retain for recovery status NO
User exit for logging status NO
Self tuning memory (SELF_TUNING_MEM) ON
Size of database shared memory (4KB) (DATABASE_MEMORY) AUTOMATIC(6993738)
Database memory threshold (DB_MEM_THRESH) 10
Max storage for lock list (4KB) (LOCKLIST) AUTOMATIC(422112)
Percent. of lock lists per application (MAXLOCKS) AUTOMATIC(97)
Package cache size (4KB) (PCKCACHESZ) AUTOMATIC(128005)
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) AUTOMATIC(113636)
Sort list heap (4KB) (SORTHEAP) AUTOMATIC(22727)
Database heap (4KB) (DBHEAP) AUTOMATIC(3745)
Catalog cache size (4KB) (CATALOGCACHE_SZ) 2560
Log buffer size (4KB) (LOGBUFSZ) 16384
Utilities heap size (4KB) (UTIL_HEAP_SZ) 10000
Buffer pool size (pages) (BUFFPAGE) 10000
SQL statement heap (4KB) (STMTHEAP) AUTOMATIC(8192)
Default application heap (4KB) (APPLHEAPSZ) AUTOMATIC(256)
Application Memory Size (4KB) (APPL_MEMORY) AUTOMATIC(40000)
Statistics heap size (4KB) (STAT_HEAP_SZ) AUTOMATIC(4384)
Interval for checking deadlock (ms) (DLCHKTIME) 10000
Lock timeout (sec) (LOCKTIMEOUT) 3600
Changed pages threshold (CHNGPGS_THRESH) 40
Number of asynchronous page cleaners (NUM_IOCLEANERS) AUTOMATIC(31)
Number of I/O servers (NUM_IOSERVERS) 64
Index sort flag (INDEXSORT) YES
Sequential detect flag (SEQDETECT) YES
Default prefetch size (pages) (DFT_PREFETCH_SZ) AUTOMATIC
Track modified pages (TRACKMOD) ON
Default number of containers 1
Default tablespace extentsize (pages) (DFT_EXTENT_SZ) 2
Max number of active applications (MAXAPPLS) AUTOMATIC(408)
Average number of active applications (AVG_APPLS) AUTOMATIC(3)
Max DB files open per application (MAXFILOP) 61440
Log file size (4KB) (LOGFILSIZ) 128000
Number of primary log files (LOGPRIMARY) 150
Number of secondary log files (LOGSECOND) 50
Path to log files /db2/ETL/log_dir/NODE0000/
Overflow log path (OVERFLOWLOGPATH)
Mirror log path (MIRRORLOGPATH)
First active log file
Block log on disk full (BLK_LOG_DSK_FUL) YES
Block non logged operations (BLOCKNONLOGGED) NO
Percent max primary log space by transaction (MAX_LOG) 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) 50
Group commit count (MINCOMMIT) 1
Percent log file reclaimed before soft chckpt (SOFTMAX) 300
Log retain for recovery enabled (LOGRETAIN) OFF
User exit for logging enabled (USEREXIT) OFF
HADR database role STANDARD
HADR local host name (HADR_LOCAL_HOST)
HADR local service name (HADR_LOCAL_SVC)
HADR remote host name (HADR_REMOTE_HOST)
HADR remote service name (HADR_REMOTE_SVC)
HADR instance name of remote server (HADR_REMOTE_INST)
HADR timeout value (HADR_TIMEOUT) 120
HADR log write synchronization mode (HADR_SYNCMODE) NEARSYNC
HADR peer window duration (seconds) (HADR_PEER_WINDOW) 0
First log archive method (LOGARCHMETH1) OFF
Options for logarchmeth1 (LOGARCHOPT1)
Second log archive method (LOGARCHMETH2) OFF
Options for logarchmeth2 (LOGARCHOPT2)
Failover log archive path (FAILARCHPATH)
Number of log archive retries on error (NUMARCHRETRY) 5
Log archive retry Delay (secs) (ARCHRETRYDELAY) 20
Vendor options (VENDOROPT)
Auto restart enabled (AUTORESTART) ON
Index re-creation time and redo index build (INDEXREC) SYSTEM (RESTART)
Log pages during index build (LOGINDEXBUILD) OFF
Default number of loadrec sessions (DFT_LOADREC_SES) 1
Number of database backups to retain (NUM_DB_BACKUPS) 12
Recovery history retention (days) (REC_HIS_RETENTN) 60
Auto deletion of recovery objects (AUTO_DEL_REC_OBJ) OFF
TSM management class (TSM_MGMTCLASS)
TSM node name (TSM_NODENAME)
TSM owner (TSM_OWNER)
TSM password (TSM_PASSWORD)
Automatic maintenance (AUTO_MAINT) ON
Automatic database backup (AUTO_DB_BACKUP) OFF
Automatic table maintenance (AUTO_TBL_MAINT) ON
Automatic runstats (AUTO_RUNSTATS) ON
Automatic statement statistics (AUTO_STMT_STATS) ON
Automatic statistics profiling (AUTO_STATS_PROF) OFF
Automatic profile updates (AUTO_PROF_UPD) OFF
Automatic reorganization (AUTO_REORG) OFF
Auto-Revalidation (AUTO_REVAL) DEFERRED
Currently Committed (CUR_COMMIT) ON
CHAR output with DECIMAL input (DEC_TO_CHAR_FMT) NEW
Enable XML Character operations (ENABLE_XMLCHAR) YES
WLM Collection Interval (minutes) (WLM_COLLECT_INT) 0
Monitor Collect Settings
Request metrics (MON_REQ_METRICS) BASE
Activity metrics (MON_ACT_METRICS) NONE
Object metrics (MON_OBJ_METRICS) BASE
Unit of work events (MON_UOW_DATA) NONE
Lock timeout events (MON_LOCKTIMEOUT) WITHOUT_HIST
Deadlock events (MON_DEADLOCK) WITHOUT_HIST
Lock wait events (MON_LOCKWAIT) NONE
Lock wait event threshold (MON_LW_THRESH) 5000000
Number of package list entries (MON_PKGLIST_SZ) 32
Lock event notification level (MON_LCK_MSG_LVL) 1
SMTP Server (SMTP_SERVER)
SQL conditional compilation flags (SQL_CCFLAGS)
Section actuals setting (SECTION_ACTUALS) NONE
Connect procedure (CONNECT_PROC)
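From the logging parameters in the configuration above (LOGFILSIZ = 128,000 4 KB pages, LOGPRIMARY = 150, LOGSECOND = 50), the maximum active log space the database can consume can be derived as follows (an illustrative calculation, not stated in the paper):

```python
# DB2 active log sizing from the database configuration above
logfilsiz_pages = 128_000   # 4 KB pages per log file (LOGFILSIZ)
page_size_kb = 4            # log page size in KB
logprimary = 150            # primary log files (LOGPRIMARY)
logsecond = 50              # secondary log files (LOGSECOND)

per_file_mb = logfilsiz_pages * page_size_kb / 1024        # 500 MB per log file
total_gb = per_file_mb * (logprimary + logsecond) / 1024   # ~97.7 GB maximum
print(f"{per_file_mb:.0f} MB per log file, {total_gb:.1f} GB total log space")
```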
SAP Convergent Invoicing Key Tables
Load tables: /1FE/0LT0xxIT
Billing tables: /1FE/0LT0xxIT00
Other large tables: DFKKCOH, DFKKCOHI,
DFKKINVDOC_H, DFKKINVDOC_I, DFKKINVDOC_P, DFKKINVDOC_S,
DFKKKO, DFKKOP, DFKKOPK,
DFKKSUMC,
DFKKINVBILL_H, DFKKINVBILL_I, DFKKINVBILL_S
Resources
SAP for Telecommunications and SAP Convergent Invoicing:
www.sap.com/industries/telecom/businessprocesses/invoicing/index.epx
nmon Linux OS performance monitoring tool:
www.ibm.com/developerworks/aix/library/au-analyze_aix
svcmon performance monitoring tool for Storwize V7000 / SVC:
www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177
ndisk (nstress) I/O simulation tool:
www.ibm.com/developerworks/wikis/display/WikiPtype/nstress
svcmon blog:
www.ibm.com/developerworks/mydeveloperworks/blogs/svcmon/?lang=ja
IBM Storage Advisor Tool:
www-304.ibm.com/support/docview.wss?uid=ssg1S4000935
DB2 9.7 Information Center:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
DB2 9.7 Information Center (db2top):
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.cmd.doc%2Fdoc%2Fr0025222.html
DB2 for Linux, UNIX, and Microsoft Windows on SAP Developer Network (SDN):
www.sdn.sap.com/irj/sdn/db6
IBM APAR IC76792:
ibm.com/support/docview.wss?uid=swg21502430
IBM Developer Works - Distributed DBA: Storage, I/O, and DB2
(DB2_PARALLEL_IO):
ibm.com/developerworks/data/library/dmmag/DBMag_2009_Issue1/DBMag_Issue109_DistributedDBA/index.html
IBM Developer Works – DB2 tuning tips for OLTP applications:
ibm.com/developerworks/data/library/techarticle/anshum/0107anshum.html#logbuffersize
IBM E-Book SAP DBA Cockpit – Flight Plans for DB2 LUW Administrator:
http://public.dhe.ibm.com/common/ssi/ecm/en/imm14052usen/IMM14052USEN.PDF
Article on SAP SDN: The Evolution of the Memory Model in IBM DB2 LUW by
Johannes Heinrich:
sdn.sap.com/irj/scn/index?rid=/library/uuid/b0aabcc9-afc1-2a10-5091-b5cda33036b0
SAP Notes:
Note 1351160 – DB6: Using DB2 9.7 with SAP Software
Note 1329179 – DB6: DB2 V9.7 Standard Parameter Settings
Note 1603507 – DB6: AdHoc Analysis Tool for DB6