Mastering the Odyssey of Scale from Nano to Peta: The Smart Use of High Performance Computing (HPC) Inside IBM®

Sponsored by IBM
Srini Chari, Ph.D., MBA
June, 2009
chari@cabotpartners.com
Cabot Partners Group, Inc., 100 Woodcrest Lane, Danbury CT 06810, www.cabotpartners.com

Introduction – HPC Enhances IBM’s Bottom Line

In today’s climate, companies must innovate with flexibility and speed in response to customer demand, market opportunity, regulatory changes, and competition. High performance computing helps companies achieve the speed, agility, insights, and sustained competitive advantage to deliver innovative products, increase revenues, and improve operational performance. Scientists, engineers, and analysts in many smart enterprises rely on HPC to solve challenging problems in engineering, manufacturing, finance, risk analysis, revenue management, and in the life and earth sciences. IBM is no exception. IBM not only delivers a rich portfolio of HPC solutions spanning systems, software, and services to the marketplace but also uses these HPC solutions to work smarter across its diverse technology, systems, software, and services businesses. HPC is increasingly used at IBM to globally integrate, share, and analyze petabytes of information to develop next generation nanometer processors, build energy-efficient systems and data centers, maximize revenues, and streamline supply and demand chains.

Through in-depth interviews across several IBM businesses, this article highlights the substantial business value and ROI obtained by IBM’s leading-edge internal use of HPC across its diverse businesses. Globally, HPC is helping IBM master the odyssey of scale, become smarter and more innovative, improve operational efficiencies, reduce costs and complexity, and increase revenues and profits. HPC already directly benefits IBM’s bottom line today and will become even more crucial in the future with the increasing miniaturization of processor technology, the emphasis on energy-efficient systems designs, the globalization of collaborative product development, and the increasing pressure to derive actionable insight and efficiencies from the data deluge. In doing so, it will further improve IBM’s internal business processes, ranging from development and manufacturing to supply chain, marketing, and sales.
The Key Elements of the IBM HPC Solutions Portfolio

IBM offers a wide array of HPC solutions through its multi-core processor systems, large storage systems, support for a broad range of operating systems, visualization, innovative applications, middleware, and partner ISVs with proven expertise and deep industry presence. IBM has the leading portfolio1 of HPC architectures, systems, and software, ranging from the System x® Cluster 1350™ and Blades to iDataPlex®, System p®, and Blue Gene®, with support for a range of operating systems including Linux®, AIX®, and Windows®, together with cluster management software, a high-performance shared-disk clustered file system (the General Parallel File System, GPFS™), and optimized scientific and engineering libraries. In addition, IBM Research has developed leading-edge algorithms, advanced mathematical assets, methods, and capabilities for predictive analytics and business optimization to create new solutions for unique challenges2.

1 The IBM Deep Computing Portfolio, http://www-03.ibm.com/systems/deepcomputing/index.html
2 IBM Business Analytics and Optimization, http://www-935.ibm.com/services/us/gbs/bus/html/bcs_centeroptimization.html
Significant HPC Initiatives inside IBM
IBM is a powerful example of how HPC can benefit organizations. The following internal initiatives
indicate the wide range of activities through which IBM uses HPC to innovate, enhance productivity, and
stay ahead of the competition:

•  Collaboratively design, test, validate, and manufacture rapidly shrinking nanometer integrated chips to differentiate its systems from the competition.
•  Deliver significant innovations in 'green' and next generation data centers to reduce the total cost of ownership and package systems with increasing computing density to deliver energy-efficient petascale enterprise systems for the data center.
•  Collaborate between globally dispersed product and solution development teams involving secure and rapid exchange of several petabytes of sensitive data and applications.
•  Leverage business analytics and optimization to gain insight from vast data available on customers and suppliers to refine internal manufacturing, sales, and marketing processes.

Scalable HPC Solutions Vital to the Design and Manufacture of Next Generation IBM
Nanoscale Integrated Chips
HPC solutions are crucial to differentiate IBM in the semiconductor and systems space. A wide range of
Electronic Design Automation (EDA) solutions enable IBM to collaboratively design, test, validate, and
manufacture rapidly shrinking nanometer integrated chips leveraging advanced research, process
technologies, and global research and development teams. These integrated chips are used in the IBM
System z, System p, and other OEM Application Specific Integrated Circuits (ASICs) such as the game
chips for the Sony PlayStation 3, the Nintendo Wii, and others.
Technology challenges in nanoscale chip design and manufacturing: Power, cooling, and other
fundamental physics constraints have prevented the high-volume microprocessor industry from scaling up
clock frequencies to achieve higher levels of performance on single threads. The good news is that, in recent
years, multi-core processors and System-on-a-Chip (SoC) designs have become the norm to continue to
scale processor performance. However, this drives up transistor count significantly (nearly a billion per die today), forcing the trend toward designs with feature sizes of 32nm or below and density scaling that reduces the pitch, wire widths, and spacing.
 For over three decades, semiconductor density scaling has been achieved through optical scaling with
optical lithography exposure equipment using shorter wavelengths or larger numerical lens apertures to
accurately print designs on the die. Today’s ultraviolet (UV) exposure tools can be made to work well down
to 32nm. Today and in the future, computational scaling techniques such as Optical Proximity Correction
(OPC) and Optical Rules Checking (ORC) developed by IBM’s Computational Lithography function within
its Semiconductor Research and Development Center (SRDC) in collaboration with partners such as Mentor
Graphics will be required for semiconductor process flows. Together with these computational lithography
techniques, more advanced circuit timing and noise analyses will also be required to minimize defects,
improve yields, verify designs, and maximize performance under a range of typical operating conditions.
These advanced computational techniques for circuit design and semiconductor process modeling require
HPC systems and software to develop and evaluate effective designs in a timely and cost-effective manner.

Smart HPC techniques for semiconductor circuit design: Static Timing Analysis (STA) is an
approximation method of computing the expected timing of a digital circuit to verify that all paths operate
correctly at a target clock frequency without requiring circuit simulation from first principles. With fixed
delays, STA works well for designs at 45nm or larger, and it has become a mainstay of processor design over
the last few decades. However, with increased miniaturization, process and environmental variability effects
increase and adversely affect yields. To better qualify designs, engineers use multi-corner (multi-parameter)
STA to analyze the impact of these variations. Typically, STA is repeated for 100s-1000s of corners to
cover the range of operation. But this can be overly optimistic or pessimistic, resulting in lower yields or performance. Further, with increasing miniaturization, crosstalk noise affects delay significantly. Noise-aware Statistical STA (SSTA) replaces fixed delays with distributions that accurately model crosstalk noise effects. It is a more sophisticated approach to maximizing expected performance and yields.
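The corner-versus-statistical distinction above can be illustrated with a toy Monte Carlo model. All delay numbers here are invented for illustration; production SSTA uses far richer delay and correlation models:

```python
import random
import statistics

random.seed(42)

# Three gates on a timing path: (nominal delay in ps, std dev from process variation)
path = [(100, 10), (150, 15), (80, 8)]

# Corner-based STA: assume every gate is simultaneously at its 3-sigma worst case.
corner_delay = sum(mean + 3 * sigma for mean, sigma in path)

# Statistical STA (Monte Carlo flavor): sample each gate's delay distribution
# and look at a high percentile of the resulting path-delay distribution.
samples = [sum(random.gauss(mean, sigma) for mean, sigma in path)
           for _ in range(100_000)]
ssta_delay = statistics.quantiles(samples, n=1000)[996]  # ~99.7th percentile

print(f"corner STA bound: {corner_delay:.0f} ps")
print(f"SSTA 99.7th pct : {ssta_delay:.0f} ps")
assert ssta_delay < corner_delay  # gates rarely hit worst case together
```

Because the gates rarely hit their worst corners simultaneously, the statistical bound is tighter than the corner bound; that gap is the performance and yield headroom SSTA recovers.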

Multi-corner STA and SSTA considerably drive up computing requirements. Fortunately, through collaborative partnerships, IBM has developed smart, optimized, scalable parallel HPC applications for multi-core and clustered systems that minimize cache misses and leverage fast interconnects and IBM’s General Parallel File System (GPFS). These applications have been deployed on System x and p clusters and Blue Gene. A 64X speedup was obtained on Blue Gene for an early STA industry benchmark, leveraging unique aspects of Blue Gene such as the fast communication ring, the co-processor mode that pipelines communication with calculation, and dedicated partition access/scheduling. With such dramatic improvements in turnaround time for timing analysis, IBM global teams can now iterate on semiconductor designs several times per day as opposed to once every several days. Furthermore, using the GPFS-based Global Storage Architecture (GSA), these global design teams can collaborate better on design models and share large data sets more effectively. This significantly improves designer productivity and reduces time to market.
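The structure that makes multi-corner STA parallelize so well is that each corner is an independent analysis. The sketch below illustrates this; the corner grid and the one-line delay model are made-up stand-ins, not IBM's EDA code, and on a real cluster each corner would be its own scheduled job rather than a thread:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Hypothetical process/voltage/temperature corner grid (illustrative values).
VOLTAGES = [0.9, 1.0, 1.1]      # supply voltage (V)
TEMPS = [-40, 25, 125]          # junction temperature (C)
PROCESSES = ["slow", "typical", "fast"]

def sta_at_corner(corner):
    """Toy stand-in for one full STA run: returns (corner, worst slack in ps)."""
    v, t, p = corner
    base = {"slow": 120.0, "typical": 100.0, "fast": 85.0}[p]
    # Crude first-order model: delay rises as voltage drops and temperature rises.
    delay = base * (1.0 / v) * (1 + 0.001 * (t - 25))
    return corner, 150.0 - delay  # slack against a 150 ps cycle budget

corners = list(product(VOLTAGES, TEMPS, PROCESSES))   # 27 independent analyses
with ThreadPoolExecutor() as pool:                    # corners run concurrently
    results = list(pool.map(sta_at_corner, corners))

worst_corner, worst_slack = min(results, key=lambda r: r[1])
print(f"worst corner {worst_corner}: slack {worst_slack:.1f} ps")
```

Because no corner depends on another, scaling from 27 toy corners to the hundreds or thousands mentioned above is a matter of adding workers, which is exactly the shape of workload clusters and Blue Gene handle well.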

Smart high performance computational lithography for semiconductor manufacturing: A collection of math-based approaches came to the forefront of photolithography as the industry grappled with the challenges associated with the transition to 22 nanometer CMOS process technology and beyond. Optical proximity correction (OPC), used widely within IBM, is a photolithography enhancement technique that compensates for image errors due to diffraction or process effects. These errors arise from the limitations of UV light (193nm wavelength) in resolving ever-finer details of patterns on the photomasks that are used to etch semiconductor layers and create the building blocks of the transistors and other elements that make up integrated circuits. OPC anticipates the irregularities of shape and size of the projected image and applies corrective compensation to the photomask images, which then produce a light beam that more closely approximates the intended shapes.

[Figure: 193 nm Optical Lithography Evolution. Design (M1 SRAM), mask, and wafer/contour rows are shown for successive nodes: Mask = Design (90 nm); Mask + Correction + Assists = Design (65 & 45 nm); 2 Masks + Correction = Design (32 & 28 nm); Optimized Mask(s) + Optimized Source = Design (22 nm & Beyond). Designs get more regular while resolution enhancement techniques get more complex.]

The computational effort behind these methods is immense. The calculations required for adjusting OPC geometries (computationally manipulating 100s of billions of nanometer edges to fit design shapes) in order to account for variations in focus and exposure for a state-of-the-art integrated circuit could take over 100 CPU-years of computer time. What is also needed is a smarter approach to computational scaling, embodied in Source-Mask Optimization (SMO), a unique approach developed by IBM and its partners to extend current optical lithography to 22nm. As illustrated in the figure, SMO jointly optimizes both the illumination source and the mask, resulting in highly non-intuitive mask designs and detailed source designs produced using mathematical optimization techniques and HPC developed by IBM Research.
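The core loop of OPC can be illustrated with a heavily simplified one-dimensional sketch: the optics blur whatever is on the mask, so the mask is iteratively pre-distorted until its blurred image matches the target pattern. This is a toy of the mathematical idea only; production OPC models the full optical system and manipulates billions of polygon edges:

```python
# 1-D toy of the OPC idea: pre-distort the mask so that, after the optical
# blur, the printed image matches the target pattern.

def blur(mask, kernel=(0.25, 0.5, 0.25)):
    """Crude stand-in for the optical transfer function: a small convolution."""
    n, half = len(mask), len(kernel) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:
                acc += w * mask[idx]
        out.append(acc)
    return out

def opc(target, iterations=500, step=0.5):
    """Iterative correction: nudge each mask value by the local image error."""
    mask = list(target)
    for _ in range(iterations):
        image = blur(mask)
        mask = [m + step * (t - i) for m, t, i in zip(mask, target, image)]
    return mask

target = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]   # desired printed pattern
corrected = opc(target)                    # non-intuitive pre-distorted mask
printed = blur(corrected)
error = max(abs(t - p) for t, p in zip(target, printed))
print(f"max image error after OPC: {error:.4f}")
assert error < 0.05  # corrected mask prints close to the target
```

Even in one dimension the corrected mask stops resembling the target, which hints at why full-chip OPC and SMO masks are so non-intuitive and why the 2-D, multi-billion-edge version consumes CPU-years.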

The IBM computational lithography function is the largest user of HPC inside IBM, and this HPC use is currently growing at about 300% annually. Prior to 2004 there was very little use of HPC. Today, System x clusters with about 20,000 nodes are used on an almost daily basis. The largest System p SMP systems are also used routinely. Blade systems with IBM’s Cell processors have also been used to accelerate several very compute-intensive processes. In addition, Blue Gene access is through the joint (IBM, RPI, and New York State) research endeavor, the Computational Center for Nanotechnology Innovations (CCNI). Early results on the Blue Gene are very encouraging. A virtual patterning flow problem3 that takes about eight months on a serial commercial simulator runs in under 12 hours on the Blue Gene using the advanced parallel algorithms developed by the IBM Computational Lithography group.
3
 D. Chidambarrao, “Computational Materials Engineering in the Semiconductor Industry”, Presented at CICME, NMAB, NRC,
May, 2007.
The smart and growing use of HPC for nanoscale chip design and manufacturing: With HPC, the
ultimate goal is to virtualize the full semiconductor development process. Doing so will reduce cost by
requiring fewer silicon experiments and improving time to market for next generation semiconductor
technologies. This will need near first principle predictive models for circuit design and process modeling
building upon the current work in timing analysis and computational lithography. IBM’s HPC solutions
portfolio is the backbone required to support such growing numerically intensive computing tasks.

Innovative HPC Solutions to Differentiate Energy Efficient IBM Petascale Systems
In recent years, IBM has made significant innovations in "green" and next generation data centers4 to reduce
the Total Cost of Ownership (TCO), reflecting not just capital costs but operational and maintenance costs
as well. Advanced Computational Fluid Dynamics (CFD), Thermal, and Electromagnetic HPC modeling
help IBM package systems with increasing computing density to deliver energy-efficient petascale
enterprise systems for the data center.

Challenges in the design and development of energy-efficient computationally dense systems: With the
increasing need for improving computing density for System x, the problem of energy efficiency becomes
very critical for IBM to differentiate its offerings in the marketplace. Since all IBM competitors use the same basic industry-standard building blocks for these x86 systems, the optimal placement of components (processors, memory systems, and interconnect hardware) to yield the best performing system has become a more complex problem. Not only must the system deliver maximum performance, but as much as possible of the electric power consumed should be “useful” power for the provisioning and operation of the entire data center. This implies that the power units and cooling systems must operate at very high efficiencies and that the power-consuming components in the server (processors, memory, disks, etc.) must be cooled effectively with energy-efficient fans blowing forced air. Currently, forced air cooling through fans is the industry norm.

The smart use of HPC: The effectiveness of forced air is very sensitive to the packaging volumetric
geometry. Two simultaneous objectives must be carefully met. First, the pressure drop between the inlet and
exit must be minimized to operate the fan at the lowest possible RPM. Secondly, the airflow must be guided
so that the heat generating components are cooled effectively and any leakages minimized. This becomes a
three-dimensional computational fluid dynamics (CFD) problem that must be modeled accurately. Lastly, the component placement should meet all the electrical needs of ensuring system performance at every level, from the processors and memory to the server nodes and the entire data center. This requires the solution of a CFD problem at the board, enclosure,
and data center levels. This simulation has to be done for various configurations at all
levels to yield the best possible thermo-mechanical-electric design. This comprehensive
approach often yields configurations and placements that help IBM significantly
differentiate System x offerings compared to the competition. The adjoining figure depicts
the fluid speed and temperature profiles for a 1U configuration.
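The two objectives (low pressure drop, well-guided airflow) interact through the fan's operating point: the chassis resistance curve meets the fan curve, and lowering the resistance yields more airflow at the same fan speed. The toy model below sketches this; all coefficients and curve shapes are invented for illustration, not IBM fan or chassis data:

```python
# Toy operating-point calculation: the system resistance curve (dp = k * q^2)
# intersects a linear fan curve (dp = dp_max * (1 - q / q_max)). CFD-driven
# packaging changes lower k, shifting the operating point to higher airflow.

def operating_flow(k, dp_max=80.0, q_max=60.0):
    """Solve k*q^2 = dp_max*(1 - q/q_max) for the flow q (cfm) by bisection."""
    lo, hi = 0.0, q_max
    for _ in range(60):
        mid = (lo + hi) / 2
        # Resistance above the fan curve means the flow guess is too high.
        if k * mid**2 > dp_max * (1 - mid / q_max):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

tight_chassis = operating_flow(k=0.05)   # high-resistance packaging
tuned_chassis = operating_flow(k=0.02)   # lower-resistance, CFD-tuned ducting
print(f"{tight_chassis:.1f} cfm -> {tuned_chassis:.1f} cfm at the same fan")
```

Equivalently, the tuned chassis can deliver the original airflow at a lower fan RPM, which is where the energy savings described next come from.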
With the use of CFD for a 1U configuration, energy consumption losses for the iDataPlex have been reduced from about 19W to 6W. Each watt saved translates to about $7 of customer savings per server per year. Airflow requirements have also been reduced from about 40cfm to about 32cfm. IBM customers can now increase computing density per rack and floor-space utilization in their data centers, translating to millions of dollars in savings in data center operation.
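The quoted per-watt figure compounds quickly at fleet scale. A quick check of the arithmetic (the 10,000-server fleet is a hypothetical deployment, not a figure from this article):

```python
# Back-of-envelope check of the iDataPlex savings quoted above.
watts_before, watts_after = 19, 6      # per-server energy loss (W)
dollars_per_watt_year = 7              # customer savings per watt per year
servers = 10_000                       # hypothetical fleet size

saved_watts = watts_before - watts_after            # 13 W per server
per_server_per_year = saved_watts * dollars_per_watt_year
fleet_per_year = per_server_per_year * servers

print(f"per server: ${per_server_per_year}/year")         # $91/year
print(f"fleet of {servers:,}: ${fleet_per_year:,}/year")  # $910,000/year
```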
Increasing future use of HPC: Currently, the fan geometry is not explicitly modeled. In the future, the fan
geometry will be considered explicitly. This will allow more accurate modeling and also help fan suppliers
come up with more effective fan designs. The 3-D fan geometry representation will further ramp up HPC
requirements. Also, while the frequencies of the processors have remained almost flat in recent years, as
multi-core processors have become the norm in systems, the memory and other interconnects have increased in frequencies.

4
 The New Enterprise Data Center, http://www-03.ibm.com/systems/nedc/index.html

This will, in the future, require the solution of the electromagnetic field equations in addition
to thermo-fluid modeling for enclosures. The shielding required for mitigating electro-magnetic leakage
would result in thicker plates and grills but this would alter the flow characteristics and potentially increase
pressure differentials. What would be required in the future is the solution of a coupled electromagnetic field
and thermo-fluid problem to optimize the packaging as frequency effects become more pronounced. Lastly,
energy-efficiency can be improved using virtualization, power management software to quiesce idle
processors, liquid cooling, and other data center optimization strategies. All these will increase HPC
requirements as CFD analysis would have to be done for more design scenarios.
Global Collaboration with the Global Storage Architecture (GSA) Based on GPFS
Many IBM product and solution development teams are global and geographically dispersed and need to
collaborate around the clock and work in a seamless and secure fashion. With recent acquisitions such as Cognos, Rational, and Daksh, deep global collaboration within the IBM technical team has become even more crucial to IBM’s success. Developers and engineers work on a wide range of semiconductor, system,
software, and services development projects. These include the design of next generation systems and
processors, operating systems, and middleware (e.g. Tivoli and WebSphere).
Challenges in global petabyte data collaboration: Often IBM development teams have to share very large
data files (e.g. CAD files in semiconductor design and manufacturing) that contain very sensitive
information. So security, access control, and reliable backup procedures are crucial in addition to
performance. Moreover, end-users want to be able to access a collaborative data repository 24x7 through an intuitive, easy-to-use interface. They would also like to be spared arduous administration tasks such as provisioning, regular (nightly or hourly) backup and recovery, and other system and IT management chores.




The smart use of the high performance Global Storage Architecture (GSA): GSA, a highly scalable, cloud-based NAS solution, combines proprietary IBM HPC technology (storage and server hardware and IBM's high-performance shared-disk clustered file system, GPFS) with open source components like Linux, Samba, and CTDB to deliver distributed storage solutions. GSA exports the clustered file system
through industry standard protocols like CIFS, NFS, FTP and HTTP. All of the GSA nodes in the grid
export all files of all file systems simultaneously. This is a different approach from some other clustered
NAS solutions which pin individual files to a single node or pair of nodes thus limiting the single file
performance dramatically. Each file system can be several petabytes in size.

GSA is extensively used within IBM for global collaboration in product development and marketing. It is
one of the key file and data sharing development tools and has become a standard within IBM. At present
there are over 130,000 users globally. Multiple GSA cells have been deployed worldwide. As new
organizations are acquired into IBM, they are provided the GSA solution for development and collaboration
especially in environments where large data sets have to be shared with security and access control. Most
internal IBM deployments are with AIX on System p. GSA has self-managed user tools that empower users either to work interactively through an intuitive GUI or to submit batch workloads through a command line interface. The tools are similar to commonly used tools for accessing shared files over the local area network (e.g., NFS). GSA scales very well, is robust, and supports very large files with parallel reads, writes, and updates. Branded externally as SOFS5 (scale out file services), GSA is also being used outside of IBM at several customers to collaborate in geographically dispersed organizations for product development and is one of the key IBM private cloud solutions.
The unique and ongoing value of GSA: AFS and DFS are alternative approaches, but they are not suitable for deployment beyond a single geographical cell: these solutions do not scale, are expensive, and lack a global registry with a unique identifier. The single, global sign-on capability provided by GSA greatly improves developers’ productivity while managing their personas in a secure manner across all GSA cells.
Development teams have been able to reduce their time to market and costs driven by the increased
productivity of global product development teams. With familiar interfaces, the learning curve is not steep
and user productivity improvements are seen almost immediately upon deployment.
IBM has deployed GSA for over three years with over 99.9% service availability and no unplanned
outages. IBM plans to grow the footprint to about 200,000 users globally. The current focus is to continue
enhancing the GSA offering to make IBM users even more productive. This is expected to reduce the time
to market for IBM products overall and also reduce the development costs. As the cost per GB continues to
go down, new tools will be added to GSA to expand usage to other groups at IBM including marketing and
finance.
Leveraging Information and High Performance Analytics to Optimize IBM Internal Business
Processes

Despite today's information explosion and access to petabytes of
data, business leaders are actually operating with bigger blind
spots6 than ever before. Experience and intuition are just not
sufficient when confronting today’s business process challenges
and data deluge. Executives recognize that new high performance analytics capabilities, coupled with
advanced business process management, signal a major opportunity to create business advantage. Those
who have the vision to apply new approaches are building smart enterprises positioned to thrive. IBM has
not only developed many of these advanced analytics capabilities but also has used these techniques
effectively to improve manufacturing processes, streamline internal supply and demand chains, and
maximize sales and marketing effectiveness globally.

Yield improvement for complex manufacturing: IBM’s East Fishkill plant in New York is one of the
most complex manufacturing environments with hundreds of sophisticated processing and measuring tools,
hundreds of thousands of microprocessors and controllers, and hundreds of product routes generating vast
quantities of complex data. High performance data mining and statistical modeling tools helped identify
poorly performing tools and processes, aberrant tool components, and significantly out-of-control processes.
Dozens of previously unknown influences on product yield and quality were discovered translating to
dramatic reductions of false alarms and first-year savings in excess of $10M.
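A minimal version of the statistical idea behind flagging out-of-control tools, a basic Shewhart control-chart rule, can be sketched as follows. The line-width readings below are invented for illustration; the production systems at East Fishkill use far richer models:

```python
import statistics

def out_of_control(readings, baseline):
    """Flag readings beyond 3 sigma of the baseline (simplest Shewhart rule)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [i for i, x in enumerate(readings) if abs(x - mu) > 3 * sigma]

# Hypothetical line-width measurements (nm) from one tool; not real fab data.
baseline = [32.0, 32.1, 31.9, 32.2, 31.8, 32.0, 32.1, 31.9, 32.0, 32.2]
readings = [32.0, 32.1, 33.5, 31.9, 32.1, 30.2, 32.0]

print(out_of_control(readings, baseline))  # → [2, 5]
```

Tightening limits catches more excursions but raises false alarms; the data-mining work described above is essentially a much more sophisticated way of managing that trade-off across hundreds of tools at once.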

Improved customer relationship management and sales effectiveness: OnTarget, an opportunity identification and lead generation analytics solution, has been deployed to increase IBM software group sales effectiveness. Over the last four years, OnTarget has consistently produced higher-quality sales leads with
larger win conversion ratios. The Market Alignment Program (MAP) helps drive the sales allocation
process based on field-validated, analytical estimates of future revenue opportunity in each market segment
in which IBM operates. As a direct result of MAP processes, sales resources were shifted to accounts with

5
    IBM Scale out file services, http://www.ibm.com/services/us/its/html/sofs-landing.html
6
    Business Analytics and Optimization for the Intelligent Enterprise, IBM Institute of Business Value, April, 2009
                                                                                                                       6
greater revenue and profit potential. Together, OnTarget and MAP have had a revenue impact of
approximately $1B over the last four years.

Better demand and supply planning: The Virtual Command Center Demand Hub (iBAT) is an analytical platform built by IBM Research to enable better channel management for the IBM Systems and Technology Group. It provides a web-based performance dashboard that combines up-to-date visibility into inventories and channel sales with innovative forecasting and inventory analytics. The use of iBAT has resulted in over a 75% reduction in aged channel inventory and a 50% reduction in total channel inventory, translating to tens of millions of dollars in savings each quarter. GAP is an analytics system that models the relationship between sales capacity and revenue in order to understand and improve sales productivity and revenue performance. GAP analysis helped the IBM Software Group improve productivity, increasing revenues by 15%. The IBM Software Group and Systems and Technology Group plan to adopt GAP as a fundamental planning tool for understanding and setting revenue targets, evaluating brand and region performance, and allocating resources.
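iBAT's forecasting methods are not published. As a generic, hypothetical sketch of the simplest kind of demand forecast such a dashboard might combine with inventory visibility, simple exponential smoothing updates a demand level from each new observation; the data and smoothing factor here are invented for illustration:

```python
# Illustrative sketch only: simple exponential smoothing, a basic
# demand-forecasting method. The weekly data and alpha are hypothetical,
# not drawn from iBAT.

def exp_smooth_forecast(history, alpha=0.3):
    """One-step-ahead forecast: level = alpha*obs + (1-alpha)*prior level."""
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

# Usage: forecast next week's channel demand and test a reorder trigger.
weekly_demand = [120, 135, 128, 150, 142, 160]  # units sold per week
forecast = exp_smooth_forecast(weekly_demand)
reorder_needed = forecast > 100  # hypothetical reorder threshold
```

A larger alpha weights recent demand more heavily; production forecasting systems layer seasonality, promotions, and channel-specific effects on top of this basic idea.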
Conclusions – HPC Makes IBM Smarter
High performance computing today enables individuals and enterprises to solve previously intractable problems faster and with greater accuracy than ever before. This enhanced productivity and efficiency benefits many industries and functions, from design and manufacturing to marketing and sales. IBM is at the leading edge of the HPC revolution in several ways: as a supplier, through its range of multi-core processors, ultrascalable systems, middleware, and applications; as an innovator, through its leading-edge research and development; as a collaborator, through its partnerships with leading manufacturers and ISVs; and as a solutions provider, with its deep computing and other data center solutions.

Using smart HPC solutions, IBM has achieved substantial bottom line benefits by improving its ability to
innovate, enhance productivity, optimize internal business processes, and outflank competitors by:

•	Solving previously intractable problems in optical lithography and extending the design and manufacture of rapidly shrinking nanometer-scale integrated chips to 22nm and below;
•	Iterating on semiconductor designs several times per day, as opposed to once every several days, reducing time to market by 15-30 percent;
•	Optimizing integrated-chip yield and quality by detecting poorly performing tools and processes, aberrant tool components, and significantly out-of-control processes, translating to savings in excess of $10M in just the first year;
•	Producing significant innovations in 'green' and next-generation systems and data centers that reduce total cost of ownership and deliver energy-efficient petascale enterprise systems, translating to millions of dollars of savings for IBM customers;
•	Collaborating across globally dispersed product and solution development teams to reduce time to market for new products and solutions;
•	Gaining insight from the vast data available on customer behavior to optimize and refine sales and marketing campaigns, contributing to over $1B of additional revenues in four years.




Copyright © 2009 Cabot Partners Group, Inc. All rights reserved. The following terms are trademarks or registered trademarks of International Business Machines Corporation in the United States and/or other countries: IBM, the IBM logo, AIX, System x, System p, Blue Gene, iDataPlex, General Parallel File System, GPFS, and 1350. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Other companies' product names, trademarks, or service marks are used herein for identification only and belong to their respective owners. All images were obtained from IBM.

The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 

HPC Inside IBM Article Final

Mastering the Odyssey of Scale from Nano to Peta: The Smart Use of High Performance Computing (HPC) Inside IBM®

Sponsored by IBM
Srini Chari, Ph.D., MBA
June 2009
chari@cabotpartners.com
Cabot Partners Group, Inc., 100 Woodcrest Lane, Danbury CT 06810, www.cabotpartners.com

Introduction – HPC Enhances IBM's Bottom Line

In today's climate, companies must innovate with flexibility and speed in response to customer demand, market opportunity, regulatory changes, and competition. High performance computing helps companies achieve the speed, agility, insight, and sustained competitive advantage to deliver innovative products, increase revenues, and improve operational performance. Scientists, engineers, and analysts in many smart enterprises rely on HPC to solve challenging problems in engineering, manufacturing, finance, risk analysis, revenue management, and the life and earth sciences.

IBM is no exception. IBM not only delivers a rich portfolio of HPC solutions spanning systems, software, and services to the marketplace but also uses these HPC solutions to work smarter across its diverse technology, systems, software, and services businesses. HPC is increasingly used at IBM to globally integrate, share, and analyze petabytes of information to develop next generation nanometer processors, build energy-efficient systems and data centers, maximize revenues, and streamline supply and demand chains.

Through in-depth interviews across several IBM businesses, this article highlights the substantial business value and ROI obtained from IBM's leading-edge internal use of HPC across its diverse businesses. Globally, HPC is helping IBM master the odyssey of scale, become smarter and more innovative, improve operational efficiencies, reduce costs and complexity, and increase revenues and profits.
HPC already directly benefits IBM's bottom line today and will become even more crucial in the future with the increasing miniaturization of processor technology, the emphasis on energy-efficient system designs, the globalization of collaborative product development, and the growing pressure to derive actionable insight and efficiencies from the data deluge. In doing so, it will further improve IBM's internal business processes ranging from development and manufacturing to supply chain, marketing, and sales.

The Key Elements of the IBM HPC Solutions Portfolio

IBM offers a wide array of HPC solutions through its multi-core processor systems, large storage systems, support for a broad range of operating systems, visualization, innovative applications, middleware, and partner ISVs with proven expertise and deep industry presence. IBM has the leading portfolio1 of HPC architectures, systems, and software, ranging from the System x® Cluster 1350TM, Blades, iDataPlex®, System p®, and Blue Gene® with support for a range of operating systems including Linux®, AIX®, and Windows®, together with cluster management software, a high-performance shared-disk clustered file system – the General Parallel File System (GPFSTM) – and optimized scientific and engineering libraries. In addition, IBM Research has developed leading-edge algorithms, advanced mathematical assets, methods, and capabilities in predictive analytics and business optimization to create new solutions for unique challenges2.

1 The IBM Deep Computing Portfolio, http://www-03.ibm.com/systems/deepcomputing/index.html
2 IBM Business Analytics and Optimization, http://www-935.ibm.com/services/us/gbs/bus/html/bcs_centeroptimization.html
Significant HPC Initiatives inside IBM

IBM is a powerful example of how HPC can benefit organizations. The following internal initiatives indicate the wide range of activities through which IBM uses HPC to innovate, enhance productivity, and stay ahead of the competition:

• Collaboratively design, test, validate, and manufacture rapidly shrinking nanometer integrated chips to differentiate its systems from the competition.
• Deliver significant innovations in 'green' and next generation data centers to reduce the total cost of ownership, and package systems with increasing computing density to deliver energy-efficient petascale enterprise systems for the data center.
• Collaborate between globally dispersed product and solution development teams, involving the secure and rapid exchange of several petabytes of sensitive data and applications.
• Leverage business analytics and optimization to gain insight from the vast data available on customers and suppliers to refine internal manufacturing, sales, and marketing processes.

Scalable HPC Solutions Vital to the Design and Manufacture of Next Generation IBM Nanoscale Integrated Chips

HPC solutions are crucial to differentiating IBM in the semiconductor and systems space. A wide range of Electronic Design Automation (EDA) solutions enables IBM to collaboratively design, test, validate, and manufacture rapidly shrinking nanometer integrated chips, leveraging advanced research, process technologies, and global research and development teams. These integrated chips are used in IBM System z, System p, and other OEM Application Specific Integrated Circuits (ASICs) such as the game chips for the Sony PlayStation 3, the Nintendo Wii, and others.

Technology challenges in nanoscale chip design and manufacturing: Power, cooling, and other fundamental physics constraints have prevented the high-volume microprocessor industry from scaling up clock frequencies to achieve higher levels of performance on single threads.
The good news is that, in recent years, multi-core processors and System-on-a-Chip (SoC) designs have become the norm as a way to continue scaling processor performance. However, this drives up transistor counts significantly – nearly a billion per die today – forcing the trend toward designs with feature sizes of 32nm or below and density scaling that reduces the pitch, wire widths, and spacing. For over three decades, semiconductor density scaling has been achieved through optical scaling, with optical lithography exposure equipment using shorter wavelengths or larger numerical lens apertures to accurately print designs on the die. Today's ultraviolet (UV) exposure tools can be made to work well down to 32nm.

Today and in the future, computational scaling techniques such as Optical Proximity Correction (OPC) and Optical Rules Checking (ORC), developed by IBM's Computational Lithography function within its Semiconductor Research and Development Center (SRDC) in collaboration with partners such as Mentor Graphics, will be required for semiconductor process flows. Together with these computational lithography techniques, more advanced circuit timing and noise analyses will also be required to minimize defects, improve yields, verify designs, and maximize performance under a range of typical operating conditions. These advanced computational techniques for circuit design and semiconductor process modeling require HPC systems and software to develop and evaluate effective designs in a timely and cost-effective manner.

Smart HPC techniques for semiconductor circuit design: Static Timing Analysis (STA) is an approximation method for computing the expected timing of a digital circuit, verifying that all paths operate correctly at a target clock frequency without requiring circuit simulation from first principles. With fixed delays, STA works well for designs of 45nm or more, and it has become a mainstay of processor design over the last few decades.
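At its core, fixed-delay STA is a longest-path computation over the circuit's gate graph: each gate's arrival time is the latest arrival among its inputs plus its own delay, and slack is the margin against the clock period. The toy sketch below illustrates that recurrence; the four-gate circuit, its delays, and the 1.0 ns clock target are invented for illustration and bear no relation to IBM's production timing tools.

```python
# Toy static timing analysis with fixed gate delays: compute arrival
# times as a longest path over the circuit DAG, then slack vs. the clock.
# Illustrative only -- real STA also models wires, slews, and clocking.
from collections import defaultdict

def sta(gates, edges, clock_period):
    """gates: {name: delay in ns}; edges: (driver, sink) pairs.
    Returns per-gate arrival times and slacks."""
    fanin = defaultdict(list)
    indeg = {g: 0 for g in gates}
    for u, v in edges:
        fanin[v].append(u)
        indeg[v] += 1
    ready = [g for g in gates if indeg[g] == 0]  # primary inputs
    arrival = {}
    while ready:  # Kahn's algorithm: process gates in topological order
        g = ready.pop()
        # arrival = latest input arrival + this gate's own delay
        arrival[g] = gates[g] + max((arrival[u] for u in fanin[g]), default=0.0)
        for u, v in edges:
            if u == g:
                indeg[v] -= 1
                if indeg[v] == 0:
                    ready.append(v)
    slack = {g: clock_period - t for g, t in arrival.items()}
    return arrival, slack

# A 4-gate example: two parallel paths converging on g4.
gates = {"g1": 0.2, "g2": 0.3, "g3": 0.5, "g4": 0.1}
edges = [("g1", "g3"), ("g2", "g3"), ("g3", "g4")]
arr, slack = sta(gates, edges, clock_period=1.0)
print(arr["g4"])    # critical path g2 -> g3 -> g4: 0.3 + 0.5 + 0.1 ~= 0.9 ns
print(slack["g4"])  # ~0.1 ns of positive slack at a 1.0 ns clock
```

Because no path is actually simulated, this scales to full chips: the cost is linear in gates and wires, which is why STA could become a design mainstay.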
However, with increased miniaturization, process and environmental variability effects increase and adversely affect yields. To better qualify designs, engineers use multi-corner (multi-parameter) STA to analyze the impact of these variations. Typically, STA is repeated for hundreds to thousands of corners to cover the range of operation, but this can be overly optimistic or pessimistic, resulting in lower yields or performance. Further, with increasing miniaturization, crosstalk noise affects delay significantly. Noise-aware Statistical STA (SSTA) replaces fixed delays with distributions that accurately model crosstalk noise effects; it is a more sophisticated approach to maximizing expected performance and yields.

Multi-corner STA and SSTA considerably drive up computing requirements. Fortunately, through collaborative partnerships, IBM has developed smart, optimized, scalable parallel HPC applications for multi-core and clustered systems that minimize cache misses and leverage fast interconnects and IBM's General Parallel File System (GPFS). These applications have been deployed on System x and System p clusters and on Blue Gene. A 64X speedup was obtained on Blue Gene for an early STA industry benchmark by leveraging unique aspects of Blue Gene such as the fast communication ring, the co-processor mode that enables pipelining of communication with calculation, and dedicated partition access and scheduling. With such dramatic improvements in turnaround time for timing analysis, IBM's global teams can now iterate on semiconductor designs several times per day as opposed to once every several days. Furthermore, using the GPFS-based Global Storage Architecture (GSA), these global design teams can collaborate better on design models and share large data sets more effectively. This significantly improves designer productivity and reduces time to market.

Smart high performance computational lithography for semiconductor manufacturing: A collection of math-based approaches came to the forefront of photolithography as the industry grappled with the challenges associated with the transition to 22 nanometer CMOS process technology and beyond.
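The shift from corner-based STA to statistical timing can be illustrated with a Monte Carlo thought experiment: treat each stage delay as a distribution rather than a fixed number, and ask what fraction of manufactured parts meet the clock. The three-stage path, its means and sigmas, and the 1.0 ns target below are invented; production SSTA propagates distributions analytically rather than by sampling, precisely because sampling at this scale is expensive.

```python
# Toy statistical timing: replace fixed delays with normal distributions
# and estimate timing yield at a target clock period by Monte Carlo.
# All numbers are invented for illustration.
import random

random.seed(7)
# (mean delay ns, sigma ns) for three gates on one path
stages = [(0.25, 0.02), (0.45, 0.05), (0.15, 0.01)]
target = 1.0  # ns clock period

samples = 100_000
passing = 0
for _ in range(samples):
    # one random draw of process/environment variation for the whole path
    delay = sum(random.gauss(mu, sigma) for mu, sigma in stages)
    if delay <= target:
        passing += 1

print(f"estimated timing yield: {passing / samples:.1%}")
```

The mean path delay here is 0.85 ns with a combined sigma of about 0.055 ns, so the 1.0 ns clock sits roughly 2.7 sigma out and most samples pass; tightening the clock or widening the sigmas shows how quickly yield erodes, which is the trade-off SSTA quantifies.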
Used widely within IBM, Optical Proximity Correction (OPC) is a photolithography enhancement technique commonly used to compensate for image errors due to diffraction or process effects arising from the limitations of UV light (193nm wavelength) in resolving the ever-finer details of patterns on the photomasks that are used to etch semiconductor layers and create the building blocks of the transistors and other elements that make up integrated circuits. OPC anticipates the irregularities of shape and size of the projected image and applies corrective compensation to the photomask images, which then produce a light beam that more closely approximates the intended shapes.

The computational effort behind these methods is immense. The calculations required for adjusting OPC geometries (computationally manipulating hundreds of billions of nanometer edges to fit design shapes) in order to account for variations in focus and exposure for a state-of-the-art integrated circuit could take over 100 CPU-years of computer time. What is also needed is a smarter approach to computational scaling, embodied in Source-Mask Optimization (SMO) – a unique approach developed by IBM and its partners to extend current optical lithography to 22nm. Illustrated in the adjoining figure, SMO optimizes both the illumination source and the mask, resulting in highly non-intuitive mask designs and detailed source designs produced with mathematical optimization techniques and HPC developed by IBM Research.

[Figure: 193 nm Optical Lithography Evolution – from Mask = Design (90 nm), through Mask + Assists = Design (65 & 45 nm) and 2 Masks + Correction = Design (32 & 28 nm), to Optimized Mask(s) + Optimized Source = Design (22 nm & beyond); as designs get more regular, resolution enhancement techniques get more complex. Panels show design (M1 SRAM), mask, and wafer/contour.]

The IBM Computational Lithography function is the largest user of HPC inside IBM, and this HPC use is currently growing at about 300% annually. Prior to 2004 there was very little use of HPC.
Today, System x clusters with about 20,000 nodes are used almost daily. The largest System p SMP systems are also used routinely, and blade systems with IBM's Cell processors have been used to accelerate several very compute-intensive processes. In addition, Blue Gene access is provided through a joint research endeavor of IBM, RPI, and New York State – the Computational Center for Nanotechnology Innovations (CCNI). Early results on Blue Gene are very encouraging: a virtual patterning flow problem3 that takes about eight months on a serial commercial simulator runs in under 12 hours on Blue Gene using the advanced parallel algorithms developed by the IBM Computational Lithography group.

3 D. Chidambarrao, "Computational Materials Engineering in the Semiconductor Industry", presented at CICME, NMAB, NRC, May 2007.
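The essence of model-based OPC – simulate how the mask prints, measure the error against the design intent, and move the mask edges to compensate – can be sketched in a few lines. The "process model" below is a made-up closed-form shrink standing in for the diffraction simulations that consume the CPU-years cited above; the 45 nm target and the model constants are invented.

```python
# Toy model-based OPC loop: iteratively bias a mask dimension until the
# simulated printed feature matches the design target. The process model
# is invented; real OPC simulates optics for billions of edges in parallel.

def printed_width(mask_width):
    # Stand-in process model: features print narrower than drawn, and
    # narrow features lose proportionally more (corner rounding / bias).
    return mask_width - 8.0 - 120.0 / mask_width  # nm

def opc_bias(target, iterations=20):
    mask = target  # start from the drawn design shape
    for _ in range(iterations):
        error = target - printed_width(mask)  # positive -> prints too narrow
        mask += error                          # move the mask edge to compensate
    return mask

target = 45.0  # nm design linewidth
mask = opc_bias(target)
print(f"mask drawn at {mask:.1f} nm prints at {printed_width(mask):.1f} nm")
```

The converged mask is noticeably wider than the drawn 45 nm, mirroring the real phenomenon that OPC-corrected masks look nothing like the design they produce; in production, this feedback loop runs per edge fragment across hundreds of billions of edges, which is what makes it an HPC workload.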
The smart and growing use of HPC for nanoscale chip design and manufacturing: With HPC, the ultimate goal is to virtualize the full semiconductor development process. Doing so will reduce cost by requiring fewer silicon experiments and will improve time to market for next generation semiconductor technologies. This will require near first-principles predictive models for circuit design and process modeling, building on the current work in timing analysis and computational lithography. IBM's HPC solutions portfolio is the backbone required to support such growing numerically intensive computing tasks.

Innovative HPC Solutions to Differentiate Energy Efficient IBM Petascale Systems

In recent years, IBM has made significant innovations in "green" and next generation data centers4 to reduce the Total Cost of Ownership (TCO), reflecting not just capital costs but operational and maintenance costs as well. Advanced Computational Fluid Dynamics (CFD), thermal, and electromagnetic HPC modeling help IBM package systems with increasing computing density to deliver energy-efficient petascale enterprise systems for the data center.

Challenges in the design and development of energy-efficient, computationally dense systems: With the increasing need to improve computing density for System x, energy efficiency has become critical for IBM to differentiate its offerings in the marketplace. Since all IBM competitors use the same basic industry-standard building blocks for these x86 systems, the optimal placement of components (processors, memory systems, and interconnect hardware) to yield the best-performing system has become a more complex problem. Not only must the system deliver maximum performance, but as much as possible of the electric power consumed should be converted into "useful" power for the provisioning and operation of the entire data center.
This implies that the power units and cooling systems must operate at very high efficiencies, and the power-consuming components in the server (processors, memory, disks, etc.) must be cooled effectively with energy-efficient fans blowing forced air. Currently, forced-air cooling through fans is the industry norm.

The smart use of HPC: The effectiveness of forced air is very sensitive to the volumetric geometry of the packaging. Two objectives must be met simultaneously. First, the pressure drop between the inlet and the exit must be minimized so the fan can operate at the lowest possible RPM. Second, the airflow must be guided so that the heat-generating components are cooled effectively and any leakage is minimized. This becomes a three-dimensional computational fluid dynamics (CFD) problem that must be modeled accurately. Lastly, the component placement should meet all the electrical requirements for ensuring system performance at various levels, from the processors and memory to the server nodes and the entire data center. This requires the solution of a CFD problem at the board, enclosure, and data center levels, simulated for various configurations at each level to yield the best possible thermo-mechanical-electrical design.

This comprehensive approach often yields configurations and placements that help IBM significantly differentiate System x offerings from the competition. The adjoining figure depicts the fluid speed and temperature profiles for a 1U configuration. With the use of CFD for a 1U configuration, the energy consumption loss has been reduced from about 19W to 6W for the iDataPlex. Each Watt of savings translates to about $7 of customer savings per server per year. The airflow requirement has also been reduced from about 40cfm to about 32cfm. IBM customers can now increase computing density per rack and floor space utilization in their data centers, translating to millions of dollars of savings in data center operation.
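The first-order physics behind these airflow figures is the steady-flow energy balance ΔT = P / (ṁ·cp): for a fixed heat load, less airflow means a hotter exhaust, so ducting that eliminates leakage is what lets the fans slow down without overheating components. A quick sanity check in Python, where the 300 W node power is an invented figure and only the 32 and 40 cfm operating points come from the text:

```python
# Back-of-the-envelope forced-air cooling check: exhaust temperature rise
# dT = P / (m_dot * cp). The 300 W server power is a hypothetical figure;
# only the 32 and 40 CFM airflow values come from the article.

CFM_TO_M3S = 0.000471947  # 1 cubic foot per minute in m^3/s
RHO_AIR = 1.2             # air density, kg/m^3, at roughly 20 C
CP_AIR = 1005.0           # specific heat of air, J/(kg*K)

def exhaust_rise(power_w, airflow_cfm):
    m_dot = airflow_cfm * CFM_TO_M3S * RHO_AIR  # mass flow, kg/s
    return power_w / (m_dot * CP_AIR)           # temperature rise, K

# A hypothetical 300 W 1U node at the two operating points from the text:
print(f"{exhaust_rise(300, 32):.1f} K rise at 32 CFM")  # ~16.5 K
print(f"{exhaust_rise(300, 40):.1f} K rise at 40 CFM")  # ~13.2 K
```

The roughly 3 K difference between the two operating points shows why dropping airflow from 40 to 32 cfm is only safe when the CFD-optimized ducting delivers that air precisely where the heat is generated.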
Increasing future use of HPC: Currently, the fan geometry is not explicitly modeled. In the future it will be, allowing more accurate modeling and also helping fan suppliers devise more effective fan designs; the 3-D fan geometry representation will further ramp up HPC requirements. Also, while processor frequencies have remained almost flat in recent years as multi-core processors have become the norm, the memory and other interconnects have increased

4 The New Enterprise Data Center, http://www-03.ibm.com/systems/nedc/index.html
in frequencies. This will, in the future, require the solution of the electromagnetic field equations in addition to thermo-fluid modeling for enclosures. The shielding required to mitigate electromagnetic leakage would result in thicker plates and grills, but this would alter the flow characteristics and potentially increase pressure differentials. What will be required in the future is the solution of a coupled electromagnetic-field and thermo-fluid problem to optimize the packaging as frequency effects become more pronounced. Lastly, energy efficiency can be improved further using virtualization, power management software to quiesce idle processors, liquid cooling, and other data center optimization strategies. All of these will increase HPC requirements, as CFD analysis will have to be done for more design scenarios.

Global Collaboration with the Global Storage Architecture (GSA) Based on GPFS

Many IBM product and solution development teams are global and geographically dispersed and need to collaborate around the clock in a seamless and secure fashion. With recent acquisitions such as Cognos, Rational, and Daksh, deep global collaboration within the IBM technical team has become even more crucial to IBM's success. Developers and engineers work on a wide range of semiconductor, system, software, and services development projects, including the design of next generation systems and processors, operating systems, and middleware (e.g., Tivoli and WebSphere).

Challenges in global petabyte data collaboration: IBM development teams often have to share very large data files (e.g., CAD files in semiconductor design and manufacturing) that contain very sensitive information, so security, access control, and reliable backup procedures are crucial in addition to performance. Moreover, end users want to be able to access a collaborative data repository with an intuitive, easy-to-use interface 24x7.
They would also like to be spared the arduous system management and administration tasks of provisioning, regular (nightly or hourly) backup and recovery, and other system and IT management chores.

The smart use of the high performance Global Storage Architecture (GSA): GSA – a highly scalable, cloud-based NAS solution – combines proprietary IBM HPC technology (storage and server hardware and IBM's high-performance shared-disk clustered file system, GPFS) with open source components such as Linux, Samba, and CTDB to deliver distributed storage solutions. GSA exports the clustered file system through industry-standard protocols such as CIFS, NFS, FTP, and HTTP. All of the GSA nodes in the grid export all files of all file systems simultaneously. This differs from some other clustered NAS solutions, which pin individual files to a single node or pair of nodes and thereby dramatically limit single-file performance. Each file system can be several petabytes in size.

GSA is used extensively within IBM for global collaboration in product development and marketing. It is one of the key file and data sharing development tools and has become a standard within IBM. At present there are over 130,000 users globally, and multiple GSA cells have been deployed worldwide. As new organizations are acquired into IBM, they are provided the GSA solution for development and collaboration
especially in environments where large data sets have to be shared with security and access control. Most internal IBM deployments run AIX on System p. GSA has self-managed user tools that empower users either to work interactively through an intuitive GUI or to submit batch workloads through a command line interface. The tools are similar to commonly used tools for accessing local shared files over the local area network (i.e., NFS). GSA scales very well, is robust, and supports very large files with parallel reads, writes, and updates. Branded externally as SOFS (Scale Out File Services)5, GSA is also used outside of IBM at several customers to support collaboration in geographically dispersed organizations for product development, and it is one of the key IBM private cloud solutions.

The unique and ongoing value of GSA: AFS and DFS are alternative approaches, but they are not suitable for deployment beyond a single geographical cell: these solutions do not scale, are expensive, and have no global registry with a unique identifier. The single, global sign-on capability provided by GSA greatly improves developer productivity while managing each developer's identity securely across all GSA cells. Development teams have been able to reduce their time to market and costs, driven by the increased productivity of global product development teams. With familiar interfaces, the learning curve is not steep, and user productivity improvements are seen almost immediately upon deployment. IBM has run GSA for over three years with over 99.9% availability and no unplanned outages, and plans to grow the footprint to about 200,000 users globally. The current focus is to continue enhancing the GSA offering to make IBM users even more productive, which is expected to reduce both time to market for IBM products and development costs.
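Because GPFS presents standard POSIX file semantics across the cluster, applications can coordinate concurrent updates to a shared file with ordinary byte-range locks instead of whole-file ownership, which is what makes "parallel reads, writes, and updates" on one large file practical. A minimal sketch of that API against a local file (the path and contents are invented; this shows POSIX byte-range locking generally, not GSA internals, and requires a Unix-like OS):

```python
# Sketch of POSIX byte-range locking, the coordination primitive that a
# POSIX-semantics clustered file system like GPFS supports across nodes.
# Runs against an ordinary local file; path and data are invented.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "design_model.dat")
with open(path, "wb") as f:
    f.truncate(1 << 20)  # 1 MiB placeholder for a large shared design file

with open(path, "r+b") as f:
    # Lock only bytes 0..4095, so cooperating writers can concurrently
    # update other regions of the same file.
    fcntl.lockf(f, fcntl.LOCK_EX, 4096, 0, os.SEEK_SET)
    f.write(b"region-0 update")
    f.flush()
    fcntl.lockf(f, fcntl.LOCK_UN, 4096, 0, os.SEEK_SET)

print("wrote a byte-range-locked region of", path)
```

On a clustered file system, the lock manager arbitrates these ranges between nodes, so two designers on different continents can safely update disjoint regions of the same multi-gigabyte model file.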
As the cost per GB continues to fall, new tools will be added to GSA to expand usage to other groups at IBM, including marketing and finance.

Leveraging Information and High Performance Analytics to Optimize IBM Internal Business Processes

Despite today's information explosion and access to petabytes of data, business leaders are actually operating with bigger blind spots6 than ever before. Experience and intuition are simply not sufficient when confronting today's business process challenges and data deluge. Executives recognize that new high performance analytics capabilities, coupled with advanced business process management, signal a major opportunity to create business advantage, and those with the vision to apply new approaches are building smart enterprises positioned to thrive. IBM has not only developed many of these advanced analytics capabilities but has also used these techniques effectively to improve manufacturing processes, streamline internal supply and demand chains, and maximize sales and marketing effectiveness globally.

Yield improvement for complex manufacturing: IBM's East Fishkill plant in New York is one of the most complex manufacturing environments, with hundreds of sophisticated processing and measuring tools, hundreds of thousands of microprocessors and controllers, and hundreds of product routes generating vast quantities of complex data. High performance data mining and statistical modeling tools helped identify poorly performing tools and processes, aberrant tool components, and significantly out-of-control processes. Dozens of previously unknown influences on product yield and quality were discovered, translating to dramatic reductions in false alarms and first-year savings in excess of $10M.

Improved customer relationship management and sales effectiveness: OnTarget, an opportunity identification and lead generation analytics solution, has been deployed to increase IBM Software Group sales effectiveness.
Over the last four years, OnTarget has consistently produced higher-quality sales leads with larger win conversion ratios. The Market Alignment Program (MAP) helps drive the sales allocation process based on field-validated, analytical estimates of future revenue opportunity in each market segment in which IBM operates. As a direct result of MAP processes, sales resources were shifted to accounts with

5 IBM Scale Out File Services, http://www.ibm.com/services/us/its/html/sofs-landing.html
6 Business Analytics and Optimization for the Intelligent Enterprise, IBM Institute for Business Value, April 2009
greater revenue and profit potential. Together, OnTarget and MAP have had a revenue impact of approximately $1B over the last four years.

Better demand and supply planning: The Virtual Command Center Demand Hub (iBAT) is an analytical platform built by IBM Research to enable better channel management for the IBM Systems and Technology Group. It provides a web-based performance dashboard that combines up-to-date visibility of inventories and channel sales information with innovative forecasting and inventory analytics. The use of iBAT has resulted in an over 75% reduction of aged channel inventory and a 50% reduction of total channel inventory, translating to tens of millions of dollars in savings quarterly. GAP is an analytics system for understanding and improving sales productivity and revenue performance by modeling the relationship between sales capacity and revenue. GAP analysis helped IBM's Software Group improve productivity, increasing revenues by 15%. The IBM software and systems and technology groups plan to implement GAP as a fundamental planning tool to help in understanding and setting revenue targets, evaluating brand and region performance, and allocating resources.

Conclusions – HPC Makes IBM Smarter

High performance computing today enables individuals and enterprises to solve previously intractable problems faster and with greater accuracy than before. This enhanced productivity and efficiency benefits many industries and functions, ranging from design and manufacturing to marketing and sales. IBM is at the leading edge of the HPC revolution in several ways: as a supplier, through its range of multi-core processors, ultrascalable systems, middleware, and applications; as an innovator, through its leading-edge research and development; as a collaborator, through its partnerships with leading manufacturers and ISVs; and as a solutions provider, with its deep computing and other data center solutions.
Using smart HPC solutions, IBM has achieved substantial bottom-line benefits by improving its ability to innovate, enhancing productivity, optimizing internal business processes, and outflanking competitors by:

• Solving previously intractable problems in optical lithography and extending the design and manufacturing of rapidly shrinking nanometer integrated chips to 22nm and below,
• Iterating on semiconductor designs several times per day as opposed to once every several days, significantly reducing time to market by 15-30 percent,
• Optimizing integrated chip yield and quality by detecting poorly performing tools and processes, aberrant tool components, and significantly out-of-control processes, translating to savings in excess of $10M in just the first year,
• Producing significant innovations in 'green' and next generation systems and data centers to reduce the total cost of ownership and deliver energy-efficient petascale enterprise systems, translating to millions of dollars of savings for IBM customers,
• Collaborating between globally dispersed product and solution development teams to reduce time to market for new products and solutions,
• Gaining insight from the vast data available on customer behavior to optimize and refine sales and marketing campaigns, contributing over $1B of additional revenue in four years.

Copyright © 2009 Cabot Partners Group, Inc. All rights reserved. The following terms are trademarks or registered trademarks of International Business Machines Corporation in the United States and/or other countries: IBM, the IBM logo, AIX, System x, System p, Blue Gene, iDataPlex, General Parallel File System, GPFS, and 1350. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other companies' product names, trademarks, or service marks are used herein for identification only and belong to their respective owners. All images were obtained from IBM.