The Potsdam Institute for Climate Impact Research installed a new IBM Cluster 1350 supercomputer to perform comprehensive climate modeling calculations. The new system provides 30 times more processing power than the previous system while using 25% less energy. This allows researchers to study extreme short-term weather events. The highly efficient system helps push the boundaries of climate impact research.
This document discusses energy efficiency in data centers. It begins by outlining the large and growing energy consumption of data centers, noting they account for 1.3% of worldwide energy production and will exceed 400 GWh/year by 2015. It then discusses how data centers are used for traditional applications like web services as well as emerging applications in areas like smart cities that generate huge amounts of data. The document outlines various strategies for optimizing energy efficiency at different levels, from workload scheduling at the chip/server level to thermal-aware resource management and floorplanning. It stresses the need for holistic optimization across all levels from chips to data centers to minimize total energy consumption.
The document discusses the utility and limitations of PUE (Power Usage Effectiveness) as a metric for datacenter efficiency. While PUE is a widely used high-level metric, it does not provide enough information on its own to optimize efficiency. To enable effective efficiency actions, more detailed energy monitoring data is needed, including power consumption at the individual IT device level trended over time. Gathering additional operational data beyond just PUE can provide insights to reduce energy waste throughout the entire datacenter system.
Datacenter Transformation - Energy And Availability - Dio Van Der Arend (HPDutchWorld)
(1) Datacenters are facing increasing demands that many current facilities cannot meet, requiring transformation through consolidation, virtualization, and improved energy efficiency and availability.
(2) Datacenter designs are evolving from small, isolated IT islands to larger, standardized facilities with improved reliability through redundant critical systems and failover capabilities.
(3) Next generation datacenter designs focus on high power density, energy efficiency through technologies like containerization, and rapid deployment in multiple locations for business flexibility.
This presentation describes the DCM design principles, with examples of several projects and methodologies for addressing very high density racks (12-24 kW) at exceptional efficiency levels (PUE 1.2-1.4).
Power Usage Effectiveness (PUE) is a metric used to measure data center infrastructure efficiency. While PUE provides a simple and useful ratio, it does not capture many important factors like resilience, load diversity, and server utilization. Additional metrics are needed to fully understand efficiency opportunities and benchmark the performance of the IT equipment itself. PUE should be considered as just one aspect of data center efficiency measurement and management.
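To make the ratio concrete, here is a minimal sketch of the PUE calculation; the example figures are invented for illustration, not taken from the document:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt reaches the IT equipment; typical
    values range from roughly 1.1 (highly optimized) to 2.0+ (legacy sites).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Invented example: 1,400 kW total facility draw, 1,000 kW reaching IT gear
print(round(pue(1400, 1000), 2))  # 1.4
```

A single snapshot like this hides exactly the factors the text mentions: the same facility can report different PUE values depending on load, season, and where the measurement boundary is drawn.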
This presentation discusses the long-established field of Green Computing, along with current and future approaches to achieving more efficient solutions in it.
It was presented in the elective course "Selected Topics in Advanced Embedded Systems" at university.
The document discusses green cloud computing and describes a technical seminar presented by S.Sai Madhuri. It defines cloud computing and discusses types including SaaS, PaaS, and IaaS. It then explains green computing and green cloud computing, describing the core components and architecture of data centers. The document outlines the objective of calculating energy consumption using a green cloud simulator in VMWare Player to analyze existing systems and develop more efficient solutions.
This project consolidated Minnesota State Colleges and Universities' (MNSCU) 32 individual data centers into a standardized green IT data center to reduce costs and energy consumption. The project team virtualized servers, implemented alternative energy like solar power, and obtained Green IT certification for staff. These changes reduced energy usage by 5% in the first phase and were projected to yield $9 million in total savings and a 30% reduction upon completion.
Also known as stepwise-refinement or decomposition, this approach takes the whole software system as one entity and decomposes it to achieve more than one subsystem based on some characteristics.
PAC 2.5 Efficiency is Attainable, What are you Waiting for? (SchneiderITB)
This presentation covers ways to increase data center efficiency. From what we consider the basics to more advanced techniques and then through services that are available. Many of these are covered through individual white papers and presentations but we wanted to bring these topics together under one presentation.
Green cloud computing aims to make cloud infrastructure more energy efficient and environmentally friendly. Adopting measures like using more renewable energy sources, virtualizing servers, and improving data center cooling can help reduce carbon emissions and operational costs. Virtualizing servers allows multiple virtual machines to run on a single physical server, increasing efficiency and hardware utilization. Data centers also aim to lower their power usage effectiveness rating by implementing designs with hot-aisle/cold-aisle configurations and adopting newer technologies. Transitioning to renewable energy sources for power can further reduce the carbon footprint of cloud infrastructure and lead to more stable energy prices over time.
Benchmark the Relative Performance of Your Data Center (AFCOM)
This document promotes an upcoming facilities management session at the Fall 2012 Data Center World Conference. It provides the website, www.datacenterworld.com, for more information on sessions. The document notes that the presentation contents are owned by AFCOM and Data Center World and require express permission for reuse. It provides a contact, Jay Taylor at jater@afcom.com, for any questions or permission requests.
Green cloud computing aims to minimize environmental impact by optimizing computing resource usage. It focuses on reducing materials, energy, water and e-waste through techniques like virtualization, consolidation, automation and multitenancy. These improvements lead to greater efficiency and resource utilization in cloud data centers and networks. Metrics like PUE, CUE and DCP are used to measure a cloud's environmental footprint and productivity.
This document discusses green cloud computing. It begins by defining cloud computing and green computing, noting that cloud computing requires large data centers that consume significant energy. It then discusses how green cloud computing aims to reduce this energy usage through techniques like server virtualization and energy-aware resource allocation. Specific strategies that cloud providers and data centers are taking to improve energy efficiency are also summarized, such as geographic placement of data centers and measures to optimize cooling.
In today’s world, the growing demand for knowledge has made cloud computing a center of attraction. Cloud computing provides utility-based services to users worldwide and enables the delivery of applications from consumer, scientific, and business domains. However, the data centers built for cloud computing applications consume huge amounts of energy, contributing to high operational costs and large carbon dioxide emissions. As data centers grow, power consumption is increasing at a rate that has become a key concern, ultimately contributing to energy shortages and global climate change. Therefore, we need green cloud computing solutions that not only save energy but also reduce operational costs.
Green computing refers to using computing resources efficiently and minimizing environmental impact. It involves implementing energy-efficient policies and practices when setting up and operating IT systems. The goals of green computing include minimizing energy consumption, purchasing green energy, and reducing employee/customer travel requirements. Green cloud computing aims to achieve efficient infrastructure utilization and processing while minimizing energy usage. It uses techniques like dynamic resource allocation and powering down underutilized servers.
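As a hedged sketch of the "powering down underutilized servers" technique mentioned above (the threshold, server names, and utilization figures are illustrative assumptions, not from the document):

```python
# Illustrative consolidation policy: power down servers whose utilization
# stays below a threshold, assuming their load can be migrated elsewhere.
# The threshold and server data are invented for this example.

IDLE_THRESHOLD = 0.15  # power down servers below 15% CPU utilization

def consolidation_plan(utilization):
    """Split servers into those kept running and those to power down."""
    keep, power_down = [], []
    for server, util in sorted(utilization.items(), key=lambda kv: kv[1]):
        (power_down if util < IDLE_THRESHOLD else keep).append(server)
    return keep, power_down

keep, down = consolidation_plan({"s1": 0.62, "s2": 0.08, "s3": 0.04, "s4": 0.45})
print(down)  # ['s3', 's2'] -- the two nearly idle servers
```

A production policy would of course also account for migration cost, SLA headroom, and wake-up latency before switching anything off; this sketch only shows the selection step.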
IRJET - Recent Trends in Green Cloud Computing (IRJET Journal)
- Green computing aims to reduce the environmental impact of computing through more efficient use of computing resources and lower energy consumption. As data storage needs increase, more servers are required, leading to higher power usage and carbon emissions.
- Virtualization and Docker are recent trends in green computing that help optimize resource utilization. Virtualization allows multiple operating systems to run on a single machine, reducing the number of physical servers needed. Docker provides a more efficient way to distribute applications using containers.
- Adopting green computing practices like virtualization can help companies reduce computing costs, lower power consumption by 30-70%, and decrease their environmental footprint by using resources more efficiently.
GMC: Greening MapReduce Clusters Considering both Computation Energy and Cool... (Tarik Reza Toha)
Increased processing power of MapReduce clusters generally enhances performance and availability at the cost of substantial energy consumption, which often incurs higher operational costs (e.g., electricity bills) and negative environmental impacts (e.g., carbon dioxide emissions). The few greening methods for computing clusters in the literature focus mainly on computational energy, leaving aside cooling energy, which accounts for a significant portion of the total energy consumed by clusters. To this end, this paper proposes a machine learning based approach named Green MapReduce Cluster (GMC) that reduces the total energy consumption of a MapReduce cluster by considering both computational and cooling energy. GMC predicts the number of machines that results in minimum total energy consumption; the prediction is made by applying different machine learning techniques to year-long data collected from a real setup. Evaluation of GMC on a real testbed reveals that it reduces total energy consumption by up to 47% compared to other alternatives, while experiencing marginal throughput degradation in a few cases.
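As a toy illustration of the trade-off GMC targets, the sketch below brute-forces the machine count that minimizes a made-up compute-plus-cooling energy model; the analytic functions and coefficients are invented stand-ins, not the paper's learned model:

```python
# Toy stand-in for the compute-vs-cooling trade-off: more machines shorten
# the job (less compute energy per machine-hour) but each powered machine
# adds static and cooling overhead. All coefficients are invented.

def compute_energy(n: int, work_kwh: float = 120.0) -> float:
    return work_kwh / n + 0.5 * n  # parallel speedup plus per-machine overhead

def cooling_energy(n: int) -> float:
    return 0.8 * n  # cooling assumed proportional to powered machines

def best_machine_count(max_n: int = 40) -> int:
    return min(range(1, max_n + 1),
               key=lambda n: compute_energy(n) + cooling_energy(n))

print(best_machine_count())  # the n minimizing total (compute + cooling) energy
```

GMC replaces these hand-written functions with models learned from measured cluster and cooling data, but the underlying optimization, pick the cluster size with minimum predicted total energy, has this shape.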
IMPROVING REAL TIME TASK AND HARNESSING ENERGY USING CSBTS IN VIRTUALIZED CLOUD (ijcax)
Cloud computing lets business customers scale their resource usage up and down based on need, thanks to virtualization technology. The scheduling objectives are to improve the system's schedulability for real-time tasks and to save energy. To achieve these objectives, we employ virtualization and rolling-horizon optimization with a vertical scheduling operation.
The project considers the Cluster Scoring Based Task Scheduling (CSBTS) algorithm, which aims to decrease task completion time; policies for VM creation, migration, and cancellation dynamically adjust the scale of the cloud while meeting real-time requirements and saving energy.
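A hedged sketch of what a scoring-based real-time scheduler in the spirit of CSBTS might look like; the scoring function, data fields, and example values are illustrative assumptions, since the abstract does not specify them:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    name: str
    free_cpu: float       # fraction of CPU currently free (illustrative)
    finish_time_s: float  # estimated task completion time on this host

def score(host: Host) -> float:
    # Prefer hosts that meet the deadline with the least spare capacity,
    # so lightly loaded hosts stay idle and can be powered down.
    return -host.free_cpu

def schedule(hosts: List[Host], deadline_s: float) -> Optional[str]:
    feasible = [h for h in hosts if h.finish_time_s <= deadline_s]
    if not feasible:
        return None  # no host can meet the real-time deadline
    return max(feasible, key=score).name

hosts = [Host("h1", 0.9, 12.0), Host("h2", 0.3, 18.0), Host("h3", 0.2, 40.0)]
print(schedule(hosts, deadline_s=30.0))  # h2: meets the deadline with least free CPU
```

The energy-saving idea is in the scoring choice: by packing deadline-feasible tasks onto already-busy hosts, the remaining hosts can be cancelled or powered down, which is the role the abstract assigns to the VM creation/migration/cancellation policies.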
The Nexlink® VFX Professional Workstations (Jan Robin)
The document describes the Nexlink VFX series of professional workstations from Seneca Data Distributors. The Nexlink VFX series includes the 9100, 9200, and 9300 models, which are designed for content creation applications like video editing. The workstations feature Intel Xeon or Core i7 processors and NVIDIA Quadro or GeForce graphics cards. They offer customizable configurations, cooling options, high memory and storage capacity, and certification for demanding applications. The Nexlink VFX series provides optimized performance for 3D animation, modeling, video editing, and other graphics-intensive workflows.
Eric Baldeschwieler Keynote from Storage Developers Conference (Hortonworks)
- Apache Hadoop is an open-source software framework for distributed storage and processing of large datasets across clusters of computers. It allows for the reliable storage of petabytes of data and large-scale computations across commodity hardware.
- Apache Hadoop is used widely by internet companies to analyze web server logs, power search engines, and gain insights from large amounts of social and user data. It is also used for machine learning, data mining, and processing audio, video, and text data.
- The future of Apache Hadoop includes making it more accessible and easy to use for enterprises, addressing gaps like high availability and management, and enabling partners and the community to build on it through open APIs and a modular architecture.
A basic intellectual property presentation relating to patent, trademark, copyright, and trade secret issues at the Alexandria Small Business Development Center.
Mik Godley is a British painter and art lecturer known for conceptual work that draws from digital images found online. His recent project "Considering Silesia" examines issues of mixed heritage, cultural memory, identity, and displacement through "virtual expeditions" to his mother's homeland and painting the images. He is fascinated by how pixels from low-resolution images can form abstract shapes and questions how the internet influences how we see and understand the world. His work will be featured in the Austin Museum of Art's upcoming "The Modern Masters" series.
Cisco and Greenplum Partner to Deliver High-Performance Hadoop Reference ... (EMC)
The document describes a partnership between Cisco and Greenplum to deliver optimized high-performance Hadoop reference configurations. Key elements include:
- Greenplum MR provides a high-performance distribution of Hadoop with features like direct data access, high availability, and advanced management.
- Cisco UCS is the exclusive hardware platform and provides a flexible, scalable computing platform optimized for Hadoop workloads.
- The Cisco Greenplum MR Reference Configuration combines these software and hardware components into an integrated solution for running Hadoop and big data analytics workloads.
The document discusses IBM's Intelligent Cluster solutions for high performance computing. It highlights that the solutions offer:
1) Leading-edge technology with flexibility of choice using IBM System x rack servers, BladeCenter servers, and iDataPlex servers.
2) High performance computing capabilities that can provide up to twice the performance of other solutions.
3) Energy and space efficiency by reducing power and cooling costs by up to 50% while maximizing performance density.
This document promotes oceanfront vacation rentals that create lasting memories. It suggests contacting the clubhouse manager, Stephanie L. Caddy, to book an oceanfront property and make memories that will last a lifetime.
The Tick App is a new online resource developed by Texas A&M University and other southern universities to provide information on 11 tick species found in Texas and the southern region. It was created by a design team led by Pete D. Teel, Otto F. Strey, and Robin L. Williams and was reviewed by entomology experts from Texas, Oklahoma, Ohio, Florida, and Auburn University as well as the Southern Region IPM Center.
Gangs are groups that engage in criminal activity and often identify with signs like colors and symbols. They make money through illegal acts like drug trafficking and threaten violence against those who cooperate with law enforcement. While some join for a sense of family, belonging to a gang often leads to criminal charges or even death.
This document summarizes the history of the heavy metal band Metallica. It begins by describing the band's formation in 1981 and its first albums, Kill 'Em All and Ride the Lightning. It then describes key events such as the death of bassist Cliff Burton in 1986 and the successful albums Master of Puppets and ...And Justice for All. Finally, it summarizes the enormous commercial success of the band's self-titled 1991 album, also known as The Black Album.
The document summarizes IBM's Intelligent Cluster integrated high performance computing solutions. It highlights that the solutions are built on innovative IBM System x, BladeCenter, and iDataPlex servers, offering high performance, energy and space efficiency, and easy deployment. Key features include leading-edge technology, high performance for applications like industrial design and manufacturing, reduced power and cooling costs, and a single point of contact for support.
This document outlines Seneca Data Distributors' proven time to market process for product development which includes 1) needs assessment, 2) recommendation, 3) prototype, 4) customer evaluation, and 5) production. It then describes Seneca's offerings in digital signage, digital broadcast, digital security, and digital healthcare which include media players, workstations, video wall controllers, and storage solutions.
Elliptical Mobile Solutions is launching a new self-contained data center solution at the Uptime Institute's Green IT Symposium. The solution aims to provide scalable, modular, automated, and reusable data centers that are more energy efficient, have a smaller footprint, and are easier to retrofit than traditional data centers. The self-contained data centers use closed-loop cooling to improve energy efficiency by 50-80% and reduce energy costs by 30-50%. They also promise to help customers reduce capital and operating expenses through smaller floor areas, converged facility management, and reduced security requirements.
The document discusses the next wave of green IT and making data centers more energy efficient. It notes that data center energy costs are significant and that McKinsey predicts data centers will produce more greenhouse gases than airlines by 2020. It provides best practices for building sustainable green data centers, including exploiting virtualization, improving server utilization rates, and designing efficient cooling systems.
Applying Cloud Techniques to Address Complexity in HPC System Integrations (inside-BigData.com)
In this video from the HPC User Forum at Argonne, Arno Kolster from Providentia Worldwide presents: Applying Cloud Techniques to Address Complexity in HPC System Integrations.
"The Oak Ridge Leadership Computing Facility (OLCF) and technology consulting company Providentia Worldwide recently collaborated to develop an intelligence system that combines real-time updates from the IBM AC922 Summit supercomputer with local weather and operational data from its adjacent cooling plant, with the goal of optimizing Summit’s energy efficiency. The OLCF proposed the idea and provided facility data, and Providentia developed a scalable platform to integrate and analyze the data."
Watch the video: https://wp.me/p3RLHQ-kOg
Learn more: http://www.providentiaworldwide.com/ and http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Green computing refers to environmentally sustainable and efficient computing practices throughout a product's lifecycle. This includes green use through energy efficient computing, green disposal like recycling, green design of efficient components, and green manufacturing with low environmental impact. Approaches to green computing involve optimizing software and deployment, like virtualization and power management, as well as recycling materials to reduce waste. The goals are to minimize environmental impact and costs while maximizing performance and sustainability.
The document discusses green IT and datacenter consolidation. It provides context on green IT, noting that IT accounts for 2% of global energy demand. Green IT strategies discussed include prolonging equipment lifetime, software optimization, virtualization, and power management. Datacenter consolidation aims to reduce costs, improve service levels and availability, and minimize external pressures by optimizing utilization of hardware and facilities. The world's most sustainable datacenter, the Cap Gemini Merlin DC, is highlighted for its fresh air cooling, modular design, proven PUE of 1.09, and other green features.
This presentation offers insights on cloud and green cloud computing, briefs readers on its potential in India, and outlines how that potential can be realized.
Why are you paying for wasted energy in IT?
Energy costs continue to climb, and yet up to a third of the money companies spend on power could be wasted due to inefficient IT infrastructure. Take a serious look at your IT energy use.
Green computers (Niraj Kumar, Bihar)
Over the last few years, interest in “green computing” has motivated research into energy-saving techniques for enterprise systems, from network proxies and virtual machine migration to the return of thin clients.
When selecting computers, there are many considerations other than energy, such as computational resources and price.
Why are you paying for wasted energy in IT?
Energy costs continue to climb, and yet up to a third of the money that companies spend on power could be wasted owing to inefficient IT infrastructure. Power demands are predicted to outstrip supply in the next few years, so those costs won't come down. Energy regulations and carbon reduction targets are adding to the pressure. So why aren't you taking a serious look at your IT energy use?
This document discusses how the IBM XIV Storage System is designed to significantly reduce power consumption compared to other storage systems. It achieves over 65% lower power usage through an architecture that optimizes capacity utilization, eliminating unused "orphaned" storage space and using thin provisioning to allocate more virtual storage capacity than actual physical capacity installed. This allows customers to purchase only the storage needed currently while still having room for future growth. The efficient architecture also reduces the amount of hardware required, further cutting power and cooling costs while still providing high-performance storage.
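The thin-provisioning idea behind that claim (advertising virtual capacity up front while consuming physical space only on write) can be sketched in a few lines of Python. The class and numbers below are illustrative only, not the XIV implementation:

```python
class ThinPool:
    """Minimal sketch of thin provisioning: virtual capacity may exceed
    physical capacity; physical blocks are consumed only on first write."""

    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.volumes = {}  # name -> advertised (virtual) size in GB

    def create_volume(self, name, virtual_gb):
        # Over-commit is allowed: no physical space is reserved yet.
        self.volumes[name] = virtual_gb

    def write(self, name, gb):
        # Physical space is claimed only when data actually lands.
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("physical pool exhausted")
        self.used_gb += gb

    def overcommit_ratio(self):
        return sum(self.volumes.values()) / self.physical_gb


pool = ThinPool(physical_gb=100)
pool.create_volume("vol1", virtual_gb=80)
pool.create_volume("vol2", virtual_gb=80)   # 160 GB advertised on 100 GB physical
pool.write("vol1", 30)
print(pool.overcommit_ratio())  # 1.6
print(pool.used_gb)             # 30
```

The point of the sketch is the ratio: customers buy only the physical capacity they need now, while volumes are sized for future growth.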
This document discusses green cloud computing. It begins by defining green computing and cloud computing individually. Green computing aims to reduce power consumption and environmental impact of IT, while cloud computing involves virtualized and interconnected computers. Green cloud computing combines these concepts by making cloud infrastructure and operations more energy efficient. The document then covers benefits like reduced energy use, the role of dynamic provisioning and multi-tenancy in cloud enabling green computing, and a case study on a green cloud architecture and scheduling policies that can reduce carbon emissions by 20%.
The document discusses green computing, which aims to reduce the environmental impact of computers and data centers. It outlines various approaches like virtualization, power management, recycling, and telecommuting. These can improve energy efficiency and reduce costs. The document also discusses implementing green computing through server consolidation, replacing CRT monitors, and keeping equipment longer to reduce waste. Future trends may include more efficient and recyclable computer components to further minimize environmental impact.
Green computing aims to design, build, and operate computer systems to be more energy efficient while also improving economic viability and system performance. It seeks to reduce the negative environmental impact of computing devices through their entire lifecycles from production to disposal. Current trends in green computing include efforts to reduce e-waste, increase energy efficiency in data centers and devices, optimize data center resources through consolidation and virtualization, promote eco-labeling of green IT products, and leverage the energy efficiency of cloud computing and terminal servers.
This IBM Redpaper provides a brief overview of OpenStack and basic familiarity with its usage with the IBM XIV Storage System Gen3. The illustration scenario presented uses the OpenStack Folsom release to implement IaaS with Ubuntu Linux servers and the IBM Storage Driver for OpenStack. For more information on IBM Storage Systems, visit http://ibm.co/LIg7gk.
Visit http://bit.ly/KWh5Dx to 'Follow' the official Twitter handle of IBM India Smarter Computing.
Learn how all-flash storage needs end-to-end storage efficiency. For more information on IBM FlashSystem, visit http://ibm.co/10KodHl.
Learn about vSphere Storage API for Array Integration on the IBM Storwize family. IBM Storwize V7000 Unified combines the block storage capabilities of Storwize V7000 with file storage capabilities into a single system for greater ease of management and efficiency. For more information on IBM Storage Systems, visit http://ibm.co/LIg7gk.
Learn about IBM FlashSystem 840 and its complete product specification in this Redbook. FlashSystem 840 provides scalable performance for the most demanding enterprise class applications. IBM FlashSystem 840 accelerates response times with IBM MicroLatency to enable faster decision making. For more information on IBM FlashSystem, visit http://ibm.co/10KodHl.
Visit http://on.fb.me/LT4gdu to 'Like' the official Facebook page of IBM India Smarter Computing.
Learn about the IBM System x3250 M5. The x3250 M5 offers the following energy-efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a green environment: energy-efficient planar components that help lower operational costs. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210746104/IBM-System-x3250-M5
This Redbook talks about the product specification of IBM NeXtScale nx360 M4. The NeXtScale nx360 M4 server provides a dense, flexible solution with a low total cost of ownership (TCO). The half-wide, dual-socket NeXtScale nx360 M4 server is designed for data centers that require high performance but are constrained by floor space. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210745680/IBM-NeXtScale-nx360-M4
The IBM System x3650 M4 HD is a 2-socket 2U rack-optimized server that supports up to 32 internal drives and features an innovative design for optimal performance, uptime, and dense storage. It offers excellent reliability, availability, and serviceability for improved business environments. The server is designed for easy deployment, integration, service, and management.
Here is the product specification for IBM System x3300 M4. This product can be managed remotely. The x3300 M4 server contains the IBM IMM2, which provides advanced service-processor control, monitoring, and alerting functions. The IMM2 lights LEDs to help you diagnose a problem, records the error in the event log, and alerts you to the problem. For more information on System x, visit http://ibm.co/Q7m3iQ.
Learn about IBM System x iDataPlex dx360 M4. IBM System x iDataPlex is an innovative data center solution that maximizes performance and optimizes energy and space efficiency. The iDataPlex solution provides customers with outstanding energy and cooling efficiency, multi-rack level manageability, complete flexibility in configuration, and minimal deployment effort. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210744055/IBM-System-x-iDataPlex-dx360-M4
The IBM System x3500 M4 server provides powerful and scalable performance for business applications in an energy efficient tower or rack design. It features the latest Intel Xeon E5-2600 v2 or E5-2600 processors with up to 24 cores, 768GB RAM, 32 hard drives, and 8 PCIe slots. Comprehensive systems management tools and redundant components help ensure high availability, while its small footprint and 80 Plus Platinum power supplies reduce data center costs.
Learn about the system specification for IBM System x3550 M4. The x3550 M4 offers numerous features to boost performance, improve scalability, and reduce costs. It improves productivity by offering superior system performance with up to 12-core processors, up to 30 MB of L3 cache, and up to two 8 GT/s QPI interconnect links. For more information on System x, visit http://ibm.co/Q7m3iQ.
Learn about IBM System x3650 M4. The x3650 M4 is an outstanding 2U two-socket business-critical server, offering improved performance and pay-as-you grow flexibility along with new features that improve server management capability. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210741926/IBM-System-x3650-M4
Learn about the product specification of IBM System x3500 M3. System x3500 M3 has an energy-efficient design which works in conjunction with the IMM to govern fan rotation based on the readings that it delivers. This saves money under normal conditions because the fans do not have to spin at high speed. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210741626/IBM-System-x3500-M3
Learn about IBM System x3400 M3. The x3400 M3 offers numerous features to boost performance and reduce costs, and it has the ability to grow with your application requirements. Powerful systems management features simplify local and remote management of the x3400 M3. For more information on System x, visit http://ibm.co/Q7m3iQ.
Learn about IBM System x3250 M3, a single-socket server that offers new levels of performance and flexibility
to help you respond quickly to changing business demands. Cost-effective and compact, it is well suited to small to mid-sized businesses, as well as large enterprises. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210740347/IBM-System-x3250-M3
Learn about IBM System x3200 M3 and its specifications. The System x3200 M3 features easy installation and management with a rich set of options for hard disk drives and memory. The efficient design helps to save energy and provide a better work environment with less heat and noise. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210739508/IBM-System-x3200-M3
Learn about the configuration of IBM PowerVC. IBM PowerVC is built on OpenStack, which controls large pools of server, storage, and networking resources throughout a data center. IBM Power Virtualization Center provides security services that support a secure environment. Installation takes just 20 minutes to get a virtual machine up and running. For more information on Power Systems, visit http://ibm.co/Lx6hfc.
Learn about IBM POWER7 virtualization performance. PowerVM Lx86 is a cross-platform virtualization solution that enables running a wide range of x86 Linux applications on Power Systems platforms within a Linux on Power partition, without modification or recompilation of the workloads. For more information on Power Systems, visit http://ibm.co/Lx6hfc.
http://www.scribd.com/doc/210734237/A-Comparison-of-PowerVM-and-Vmware-Virtualization-Performance
This reference architecture document describes deploying the VMware vCloud Enterprise Suite on the IBM PureFlex System hardware platform. Key points:
- The vCloud Suite software provides components for managing and delivering cloud services, while the IBM PureFlex System provides an integrated hardware platform in a single chassis.
- The reference architecture focuses on installing the vCloud Suite management components as virtual machines on an ESXi host to manage consumer resources.
- The IBM PureFlex System provides servers, networking, and storage in a single chassis that can then be easily scaled out. This standardized deployment accelerates provisioning of cloud infrastructure.
- Deployment considerations cover systems management using IBM Flex System Manager, as well as server, networking, and storage configurations.
Learn how X6, the sixth generation of EXA technology, is fast, agile, and resilient for emerging workloads, from Alex Yost, Vice President, IBM PureSystems and System x, IBM Systems and Technology Group. X6 drives cloud and big data for enterprises by delivering insight faster, thereby outperforming competitors. For more information on System x, visit http://ibm.co/Q7m3iQ.
http://www.scribd.com/doc/210715795/X6-The-sixth-generation-of-EXA-Technology
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
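Independent of MongoDB Atlas, the core of vector search is ranking items by the similarity of their embedding vectors to a query vector. A minimal pure-Python sketch using cosine similarity, with made-up three-dimensional "embeddings" (real systems use hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def vector_search(query_vec, docs, top_k=2):
    """Rank documents by cosine similarity to the query vector."""
    scored = [(cosine_similarity(query_vec, vec), name) for name, vec in docs.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]

# Toy embeddings; in practice these come from an embedding model.
docs = {
    "climate report": [0.9, 0.1, 0.0],
    "cooking recipe": [0.0, 0.2, 0.9],
    "weather model":  [0.8, 0.3, 0.1],
}
print(vector_search([1.0, 0.2, 0.0], docs))  # ['climate report', 'weather model']
```

A managed service like Atlas adds approximate-nearest-neighbor indexing on top of this idea so that the ranking scales to millions of vectors.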
Project Management Semester-Long Project - Acuity (jpupo2018)
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
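The pre-processing, inference, and post-processing steps described above form a generic pipeline pattern that can be sketched independently of the Nx AI Manager. Every stage below is a placeholder, not an Nx API:

```python
from typing import Callable, List

def make_pipeline(*stages: Callable) -> Callable:
    """Compose processing stages into a single callable, applied left to right."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Placeholder stages standing in for real pre-processing, a converted model
# running on an inference engine, and post-processing.
def preprocess(frame: List[float]) -> List[float]:
    return [v / 255.0 for v in frame]          # e.g. normalize pixel values

def infer(inputs: List[float]) -> List[float]:
    return [sum(inputs)]                        # stand-in for a model forward pass

def postprocess(outputs: List[float]) -> str:
    return "object" if outputs[0] > 1.0 else "background"

pipeline = make_pipeline(preprocess, infer, postprocess)
print(pipeline([200.0, 150.0, 100.0]))  # 'object'
```

Keeping the stages as separately swappable callables is what makes it cheap to exchange the inference engine for different target hardware without touching the rest of the pipeline.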
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leverage that data for RAG and other GenAI use cases, and finally chart your course to production.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training activities. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when not following her passion for computers and for Geeko, she cultivates her curiosity about astronomy (the origin of her nickname, deneb_alpha).
How to Interpret Trends in the Kalyan Rajdhani Mix Chart (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefit it brings you. Above all, you certainly want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also practices that can lead to unnecessary expense, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It provides the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk, we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions Apricot) - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar, with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Webinar: Designing a schema for a Data Warehouse (Federico Razzoli)
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires first gathering information about the business processes that need to be analysed. These processes must be translated into so-called star schemas: denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
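The star-schema idea outlined above can be illustrated with an in-memory SQLite database: denormalised dimension tables joined to a fact table whose rows sit at the chosen granularity. All table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables: denormalised descriptive attributes.
cur.execute("CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER)")
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)")

# Fact table: one row per sale (the chosen granularity), with foreign keys
# into the dimensions and additive measures.
cur.execute("""CREATE TABLE fact_sales (
    date_id INTEGER, product_id INTEGER, quantity INTEGER, amount REAL)""")

cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(1, 2024, 1), (2, 2024, 2)])
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 3, 30.0), (1, 2, 1, 25.0), (2, 1, 2, 20.0)])

# Typical DWH query: aggregate the facts, sliced by a dimension attribute.
cur.execute("""
    SELECT d.month, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d ON f.date_id = d.date_id
    GROUP BY d.month ORDER BY d.month""")
print(cur.fetchall())  # [(1, 55.0), (2, 20.0)]
```

Note how the fact table stays narrow and additive while all descriptive detail lives in the dimensions; that separation is what makes slicing by month, category, or any other attribute a simple join plus GROUP BY.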
OpenID AuthZEN Interop Read Out - Authorization (David Brossard)
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Monitoring and Managing Anomaly Detection on OpenShift (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
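As a framework-free illustration of the anomaly-detection fundamentals in topic 1 (the OpenShift, Kafka, and Prometheus tooling above is out of scope here), a z-score detector flags readings far from the recent mean. The data and threshold are invented for the example:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag values whose z-score exceeds the threshold.

    z = (x - mean) / stdev; a large |z| marks a value far from typical behavior.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [x for x in values if stdev and abs(x - mean) / stdev > threshold]

# Sensor-style readings with one obvious spike.
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 85.0, 20.1]
print(zscore_anomalies(readings, threshold=2.0))  # [85.0]
```

On a resource-constrained edge device, this kind of running-statistics check is often the first detection stage, with heavier models reserved for readings that trip the threshold.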
Customer Case Study - PIK
The Potsdam Institute for Climate Impact Research takes on smarter climate research

Overview

The Need
To perform comprehensive calculations within challenging climate models as scientists study extreme weather conditions and their impacts

The Solution
To implement an IBM Cluster 1350 solution based on IBM System x iDataPlex technology—ranked #244 in the TOP500 list and #59 in the Green500 list

What Makes it Smarter
Achieves 30 times the performance of previous computers, uses 25 percent less power than conventional solutions, and delivers industry-leading energy efficiency

The Result
“With IBM, we are raising the bar every time when it comes to performance and energy efficiency. This is the only way that climate impact research makes sense.”
— Karsten Kramer, Manager, IT Infrastructure and Services Group, Potsdam Institute for Climate Impact Research

The Potsdam Institute for Climate Impact Research (PIK) is a pioneer in interdisciplinary climate research. PIK scientists use climate modeling to study global climate change and its impact on ecological, economic and social systems. High-performance computing (HPC) is critical to this type of modeling. One of the challenges involves drawing conclusions about small-scale weather changes and short-term weather patterns, since brief storms or periods of drought are too short for the computer models normally used to calculate long-term weather developments.

Pushing the limits of high-performance computing

During a recent project on extreme weather simulation, PIK researchers had to examine short-lived weather events, using comprehensive calculations to draw conclusions about when and where extreme weather events would occur and what their impacts could be. These calculations placed very high demands on the institute’s IT systems, pushing its existing high-performance computers to their limits. As a result, the institute decided to install a new HPC cluster.

Potential technology providers had to demonstrate, using benchmarks, that their solutions would meet the institute’s demanding requirements. PIK chose an IBM Cluster 1350 supercomputer with 30 times the processing speed of the IBM POWER4™ system it replaced. Based on IBM System x® iDataPlex™ technology, the new system ranked #244 in the June 2009 TOP500 list of the world’s highest-performing supercomputers, and #59 in the x86 cluster category in the June 2009 Green500 list of the world’s most energy-efficient computers. The Cluster 1350 is administered via two IBM System x3650 servers using the Extreme Cluster Administration Toolkit (xCAT).

According to Karsten Kramer, Manager of the IT Infrastructure and Services Group at PIK, “With IBM, we are raising the bar every time when it comes to performance and energy efficiency. This is the only way that climate impact research makes sense.”

Business Benefits
● Enables PIK scientists to perform the comprehensive calculations demanded by challenging climate computer simulations
● Helps reduce the total cost of ownership through ease of maintenance and by supporting open source software options
● Provides horizontal scalability to easily accommodate future data growth, protecting the organization’s IT investment over the long term
● Combines high-performance computing with industry-leading energy-efficiency features in a high-density architecture that makes best use of valuable data center space

Powerful processing combined with energy efficiency

The new supercomputer integrates Intel® processors and Voltaire InfiniBand switches, providing extreme processing density, a more efficient power supply, and improved cooling. Processors are cooled via water-cooled cabinet doors. Since iDataPlex cabinets are not as deep as standard cabinets, less airflow is needed to cool the nodes; the blowers therefore operate at lower speeds, which further reduces power consumption. iDataPlex also requires less energy than comparable x86 systems, decreasing cooling demand per megaflop of processing power.

“Thanks to the IBM supercomputer, we were able to reduce power consumption by 25 percent compared to conventional solutions,” Kramer reports.
Smarter Research: Boosting climate impact research with green HPC solutions

Instrumented: Captures and integrates weather data into “ensemble simulations” that simulate an extreme weather event 20 to 50 times with slightly different input values.
Interconnected: Connects data across the climate model process, from the importing of records and processing of calculations to the storage and backup of enormous results files.
Intelligent: More precisely predicts weather events that have so far proven to be incalculable—extreme, short-term weather phenomena such as torrential rain or drought.
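The “ensemble simulation” technique described above, running the same extreme-weather event 20 to 50 times with slightly perturbed input values, can be illustrated with a toy sketch. The model and all names below are invented for illustration and have nothing to do with PIK’s actual model code:

```python
import random

def toy_weather_model(initial_temp_c, seed):
    """Stand-in for a real climate simulation: evolves a single
    temperature value over 48 hourly steps with random forcing."""
    rng = random.Random(seed)
    temp = initial_temp_c
    for _ in range(48):
        temp += rng.gauss(0.0, 0.5)  # random hourly fluctuation
    return temp

def run_ensemble(initial_temp_c, n_members=20, perturbation=0.1):
    """Run the model n_members times, each with a slightly
    different starting value, and collect the outcomes."""
    results = []
    for member in range(n_members):
        rng = random.Random(1000 + member)
        perturbed = initial_temp_c + rng.uniform(-perturbation, perturbation)
        results.append(toy_weather_model(perturbed, seed=member))
    return results

outcomes = run_ensemble(15.0)
print(f"{len(outcomes)} members, mean {sum(outcomes)/len(outcomes):.2f} C, "
      f"spread {max(outcomes) - min(outcomes):.2f} C")
```

The spread of the ensemble outcomes indicates how sensitive the predicted event is to small uncertainties in the input data, which is why each event is simulated many times rather than once.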
Industry-leading solutions for high-performance data storage and backup

Solution Components

Software
● IBM General Parallel File System
● IBM Tivoli® Storage Manager
● IBM Tivoli Workload Scheduler LoadLeveler®
● IBM AIX®
● IBM AIX HSM Client

Hardware
● IBM Cluster 1350
● IBM System x® iDataPlex™
● IBM Power® servers
● IBM System Storage™ DS5300
● IBM System x3650

Services
● IBM Global Technology Services

After completing climate model calculations, the system uses IBM Tivoli® Workload Scheduler LoadLeveler® to ensure that large data records can be stored quickly on the hard drives, enabling users to run more jobs in a shorter time. A storage area network (SAN) with four IBM System Storage™ DS5300 enterprise storage systems is also connected to the supercomputer, with IBM General Parallel File System (GPFS™) providing 200 TB of usable data storage. GPFS distributes the data across different disk drives, guaranteeing an extremely high data flow rate.

PIK controls backups and hierarchical storage management (HSM) via three IBM Power® servers running IBM AIX®, Tivoli Storage Manager, and the AIX HSM Client. To protect existing storage investments, the institute also upgraded an existing IBM TotalStorage® 3494 tape archive with IBM TS1130 tape drives.
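GPFS achieves its high data flow rate by striping each file’s blocks across many disks, so successive blocks can be read or written in parallel. A minimal round-robin striping sketch follows; the helper functions are hypothetical illustrations of the idea, not the GPFS API:

```python
def stripe_blocks(data: bytes, n_disks: int, block_size: int = 4):
    """Split data into fixed-size blocks and assign them to disks
    round-robin, the way a parallel file system spreads I/O load."""
    disks = [[] for _ in range(n_disks)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for idx, block in enumerate(blocks):
        disks[idx % n_disks].append(block)
    return disks, len(blocks)

def read_back(disks, n_blocks):
    """Reassemble the original byte stream: block i lives on disk
    i % n_disks, so consecutive blocks sit on different disks and a
    real system could fetch them concurrently."""
    n_disks = len(disks)
    return b"".join(disks[i % n_disks][i // n_disks] for i in range(n_blocks))

disks, n_blocks = stripe_blocks(b"climate-model-output", n_disks=4)
print(read_back(disks, n_blocks))  # the original bytes, reassembled
```

Because each large results file is spread over many spindles, aggregate throughput scales with the number of disks rather than being limited by a single drive.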
Full deployment support from IBM Global Technology Services

Because of significant space and cooling constraints, the institute turned to IBM Global Technology Services (GTS) to help with solution installation. GTS handled project planning and implementation and provides ongoing comprehensive maintenance and support.
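The LoadLeveler scheduling described in the storage section is driven by job command files submitted with `llsubmit`. A sketch of what such a file generally looks like; the job name, class, and script are hypothetical, and class names are site-specific:

```shell
#!/bin/sh
# LoadLeveler job command file: scheduler directives start with "# @".
# @ job_name = store_results
# @ job_type = serial
# @ class    = batch
# @ output   = $(job_name).$(jobid).out
# @ error    = $(job_name).$(jobid).err
# @ queue
# Hypothetical post-processing step: move finished model output to storage.
./archive_model_output.sh
```

LoadLeveler queues the step when resources are free, which is how the cluster keeps storage jobs flowing and lets users run more jobs in a shorter time.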
Powering more accurate conclusions in critical climate studies

The IBM supercomputing cluster makes it easier for PIK scientists to calculate highly complex simulations using the newest scientific climate models. Current weather data and results from international research projects can be integrated into the calculations. Based on this data, scientists can draw more accurate conclusions about extreme weather conditions, taking into account small-scale changes and short-term activity. Conclusions concerning the climate and the ways it is influenced are therefore more accurate. This research can help reduce risks for humans and the environment.

“The systems for climate modeling are becoming increasingly effective and more efficient,” Kramer says. “One reason is innovative solutions from IBM.”
For more information
Contact your IBM sales representative or IBM Business Partner. Visit
us at: ibm.com/deepcomputing
For more information about the Potsdam Institute for Climate Impact
Research, visit: www.pik-potsdam.de