IBM was awarded a $325 million contract by the Department of Energy to design and build two new supercomputing systems at Lawrence Livermore and Oak Ridge National Labs. The systems will be based on IBM's OpenPOWER technology and use GPU accelerators from NVIDIA and interconnect technologies from Mellanox. Secretary of Energy Ernest Moniz and Senator Lamar Alexander announced the contract, which aims to advance key research initiatives and scientific discovery through highly powerful data-centric supercomputing.
IBM Selected by DOE to Develop Next Generation Systems - Mark O'Riley
IBM was awarded a $325 million contract by the Department of Energy to design and build two new supercomputing systems at Lawrence Livermore and Oak Ridge National Laboratories. The systems will utilize IBM's OpenPOWER strategy and GPU accelerators from NVIDIA to advance key research initiatives. Officials from IBM, the labs, and partners attended announcements and press events to promote the new supercomputers and IBM's selection.
What is Big Data? With Victoria Galano, Data Scientist at Air France - Jedha Bootcamp
Over the last five years, we have created more data than in all of prior human history. Today we produce so much data that it is becoming difficult to manage; this is what we call Big Data. During this workshop we will discuss the stakes of Big Data and its concrete applications in our society.
This document provides an overview of big data. It defines big data as large volumes of data that are high in velocity and variety, requiring new techniques and tools to analyze. Examples are given of the huge amounts of data generated daily by companies like Facebook, Twitter, and YouTube. The benefits of big data analytics are described as enabling better business decisions through hidden patterns, customer insights, and competitive advantages. The future of big data is promising, with the market expected to grow substantially in both revenue and jobs required to manage large amounts of data.
A Review Paper on Big Data: Technologies, Tools and Trends - IRJET Journal
This document provides a review of big data technologies, tools, and trends. It begins with an introduction to big data, discussing the rapid growth in data volumes and defining key characteristics like variety, velocity, and veracity. Common sources of big data are described, such as IoT devices, social media, and scientific projects. Hadoop is discussed as a major tool for big data management, with components like HDFS for scalable data storage. Overall, the document aims to discuss the state of big data technologies and challenges, as well as future domains and trends.
The document provides a timeline of key moments in the history of big data and data science from 1991 to 2020. Some of the major events included the birth of the internet in 1991, the launch of Google search engine in 1997, the release of the Hadoop open-source platform in 2005 which revolutionized data processing, and the prediction that the big data market will reach $203 billion by 2020. The timeline shows how digital storage became more cost effective than paper in the 1990s, how data volumes increased exponentially in the 2000s, and how mobile devices surpassed desktops in data access by 2014.
This document discusses the rise of big data and how the volume of data being created is growing exponentially, with 2.5 quintillion bytes created daily from various sources like sensors, social media, images, videos and purchases. It outlines how traditional databases and data analytics are struggling to handle this unstructured data, leading to the emergence of new solutions like Hadoop. It also explores how new roles like data scientists are emerging to help organizations extract value from all this big data through advanced analytics.
The SGMT trading system generated a gross return of 5.83% in July 2016, with the strongest returns coming from the Japanese Yen and Australian Dollar. Overall market movements were volatile in July as markets assessed risks from Brexit and anticipated monetary policy actions from central banks. SGMT's risk management approach helped reduce volatility and generated modest excess returns compared to maintaining a full model exposure.
This document appears to be an edited collection of essays celebrating the 10th anniversary of the Protocol on the Rights of Women in Africa. It includes quotes from government officials supporting women's rights. The collection features essays on topics such as women's inheritance rights in Botswana, the use of SMS to spread awareness of domestic violence laws in Nigeria, and messages of solidarity from women's rights organizations in Africa.
N. S. ENTERPRISES
Phone: 022-64203995
E-mail: victorfloors1@gmail.com
N. S. Enterprises' approach to public relations is to create, modify, enhance, and protect the business environments of its clients by shaping informed opinion among key audiences through the creative presentation of truthful information.
The carefully balanced combination of knowledge of local conditions, an innovative approach to communication, and experience in national work enhances N. S. Enterprises' ability to manage the perceptions of key audiences effectively. Adherence to stringent ethical standards, together with the use of creative communication tools, makes N. S. Enterprises capable of optimally satisfying its clients' communication needs.
Dr. Seth Naeve and Dr. Lee Johnston - Pork and Beans - John Blue
Pork and Beans - Dr. Seth Naeve, soybean agronomist, associate professor, Agronomy and Plant Sciences Department, University of Minnesota; Dr. Lee Johnston, professor, swine nutrition and management, Department of Animal Science, University of Minnesota, from the Minnesota Pork Congress, January 20-21, 2010, Minneapolis, MN, USA.
The European Union has agreed on an oil embargo against Russia in response to the invasion of Ukraine. The embargo will ban seaborne imports of Russian oil into the EU and end pipeline deliveries within six months. The measure is part of a sixth package of EU sanctions intended to increase economic pressure on Moscow and deprive the Kremlin of funds to finance its war.
Like Joseph and Mary, I too call you Jesus.
I adore you with tenderness and gratitude.
Blessed are you, Lord.
You are always with me.
You are the Emmanuel.
I sing for you, my Lord.
My whole being opens to the gaze of your love.
Jesus, whenever you visit me,
my heart fills with joy.
Designing for the DMLS Process, Oct 2015 - MecklerMedia
This document discusses design considerations for direct metal laser sintering (DMLS) additive manufacturing. DMLS allows for complex geometries but has limitations, such as minimum feature size, thin walls, and surface roughness, that depend on the machine and material. Internal features need support structures, though channels and angled surfaces can be self-supporting. Designs should minimize warping from internal stresses and minimize supports that require removal. DMLS can enable lower-cost production of complex parts or quick prototyping before transitioning to conventional manufacturing.
This document discusses supercomputers, including their history, manufacturers, uses, and challenges. It notes that supercomputers can perform billions of calculations per second and are used for complex tasks in fields like weather prediction, research, and military simulations. The document outlines some of the first supercomputers developed in the 1960s and names Tianhe-1A in China as the fastest supercomputer at the time of writing. It also briefly summarizes some of the operating systems and cooling challenges of supercomputers.
This document provides an overview of big data, including its definition, characteristics, sources, tools, applications, risks and benefits. It defines big data as large volumes of diverse data that can be analyzed to reveal patterns and trends. The three key characteristics are volume, velocity and variety. Examples of big data sources include social media, sensors and user data. Tools used for big data include Hadoop, MongoDB and analytics programs. Big data has many applications and benefits but also risks regarding privacy and regulation. The future of big data is strong with the market expected to grow significantly in coming years.
This document provides an overview of a seminar on big data. It begins with an introduction on how big data has become important in the IT world. It then defines big data, noting its characteristics of volume, velocity, and variety. It discusses storing, selecting, and processing big data using tools like Hadoop. It covers sources of big data and applications such as smarter healthcare, manufacturing, and traffic control. Both the risks and benefits of big data are mentioned, such as being overwhelmed by data but also making better decisions. Finally, it discusses the future of big data and its importance for the Indian market.
This document provides an overview of big data in a seminar presentation. It defines big data, discusses its key characteristics of volume, velocity and variety. It describes how big data is stored, selected and processed. Examples of big data sources and tools used are provided. The applications and risks of big data are summarized. Benefits to organizations from big data analytics are outlined, as well as its impact on IT and future growth prospects.
This document outlines a seminar presentation on big data. It begins with an introduction that defines big data and notes how it emerged in the early 21st century mainly through online firms. It then covers the three key characteristics of big data - volume, velocity and variety. Other sections discuss storing, selecting and processing big data, as well as tools used and applications. Risks, benefits and the future impact and growth of big data are also summarized. The presentation provides an overview of the key concepts regarding big data.
This document provides an overview of big data, including its definition, characteristics, sources, tools, applications, risks, benefits and future. Big data is characterized by its volume, velocity and variety. It is generated from sources like users, applications, sensors and more. Tools like Hadoop and databases are used to store, process and analyze big data. Big data analytics can provide benefits across many industries and applications. However, it also poses risks around privacy, costs and skills that must be addressed. The future of big data is promising, with the market expected to grow significantly in the coming years.
Big data is very large data that is difficult to process using traditional methods. It is characterized by high volume, velocity, and variety. Examples of real-life big data implementations include using social media to understand customer behavior, tracking social media for marketing campaigns, and analyzing medical data to predict readmissions. Challenges include integrating diverse data sources and ensuring ethical access. Common techniques for processing big data are parallel database management systems and MapReduce frameworks like Hadoop.
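The MapReduce pattern mentioned above can be sketched in a few lines. This is a minimal, single-process illustration only; real frameworks such as Hadoop distribute the map, shuffle, and reduce phases across many machines, and all function names here are illustrative.

```python
# Single-process sketch of the MapReduce word-count pattern.
from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    """Emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    """Group pairs by key and sum the counts, as reducers would."""
    shuffled = sorted(pairs, key=itemgetter(0))  # stand-in for the shuffle step
    return {key: sum(count for _, count in group)
            for key, group in groupby(shuffled, key=itemgetter(0))}

docs = ["big data is big", "data is everywhere"]
counts = reduce_phase(map_phase(docs))  # {'big': 2, 'data': 2, 'everywhere': 1, 'is': 2}
```

In a distributed setting the sort-and-group step becomes the network shuffle, which is why the pattern scales: each reducer only ever sees the pairs for its own keys.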
This document provides an overview of big data including:
- It defines big data and describes its three key characteristics: volume, velocity, and variety.
- It explains how big data is stored, selected, and processed using techniques like Hadoop and NoSQL databases.
- It discusses some common sources of big data, tools used to analyze it, and applications of big data analytics across different industries.
This document provides an overview of big data, including its definition, characteristics, sources, tools, applications, risks, benefits and future. Big data is characterized by large volumes of data in various formats that are difficult to process using traditional data management and analysis systems. It is generated from sources like user interactions, sensors and systems logs. Tools like Hadoop and NoSQL databases enable storing, processing and analyzing big data. Organizations apply big data analytics to areas such as healthcare, retail and security. While big data poses privacy and management challenges, it also provides opportunities to gain insights and make improved decisions. The big data industry is growing rapidly and expected to be worth over $100 billion.
The document discusses the future of high performance computing (HPC). It covers several topics:
- Next generation HPC applications will involve larger problems in fields like disaster simulation, urban science, and data-intensive science. Projects like the Square Kilometer Array will generate exabytes of data daily.
- Hardware trends include using many-core processors, accelerators like GPUs, and heterogeneous computing with CPUs and GPUs. Future exascale systems may use conventional CPUs with GPUs or innovative architectures like Japan's Post-K system.
- The top supercomputers in the world currently include Summit, an IBM system at Oak Ridge combining POWER9 CPUs and NVIDIA Volta GPUs, and China's Sunway TaihuLight.
This document provides an introduction to big data, including its key characteristics of volume, velocity, and variety. It discusses why big data has become important due to the growth in data storage capacities and processing power. Examples are given of the large amounts of data generated daily by companies like Google, Facebook, and Walmart. The document also outlines some common tools and concepts used for big data, including Hadoop and MapReduce, as well as applications in various industries. It discusses the impacts of big data on IT and the benefits organizations can realize, such as increased innovation. The future of big data is predicted to be strong with continued growth in data volumes and investment in data management software.
Big Data
Big data is a term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them. Challenges include capture, storage, analysis, data curation, search, sharing, transfer, visualization, querying, updating and information privacy. The term "big data" often refers simply to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem."[2] Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on."[3] Scientists, business executives, practitioners of medicine, advertising and governments alike regularly meet difficulties with large data-sets in areas including Internet search, fintech, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[4] connectomics, complex physics simulations, biology and environmental research.[5]
Data sets grow rapidly - in part because they are increasingly gathered by cheap and numerous information-sensing Internet of things devices such as mobile devices, aerial (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks.[6][7] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[8] as of 2012, every day 2.5 exabytes (2.5×10^18 bytes) of data are generated.[9] One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.[10]
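The doubling figure quoted above implies enormous multipliers over time, which a short back-of-the-envelope calculation makes concrete. The 40-month period is from the text; the 1986 start date and 312-month span are illustrative assumptions.

```python
# Back-of-the-envelope check of the growth figure quoted above:
# storage capacity doubling roughly every 40 months.
def doublings(months, period_months=40):
    """Number of doubling periods that fit into a span of months."""
    return months / period_months

def growth_factor(months, period_months=40):
    """Total multiplicative growth over the span."""
    return 2 ** doublings(months, period_months)

# Roughly 1986 to 2012 is 26 years = 312 months, i.e. 7.8 doublings,
# for a growth factor of about 223x in per-capita storage capacity.
factor = growth_factor(312)
```

Even modest-sounding doubling periods compound quickly, which is why capacity figures from only a decade or two apart differ by orders of magnitude.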
Relational database management systems and desktop statistics and visualization packages often have difficulty handling big data. The work may require "massively parallel software running on tens, hundreds, or even thousands of servers".[11] What counts as "big data" varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."
This document discusses big data, providing definitions and outlining its key characteristics of volume, velocity, and variety. It describes processes involved like integrating disparate data stores and employing Hadoop MapReduce. Sources of big data are identified as mobile devices, sensors, social media, etc. Tools used include distributed servers, storage, and databases. Statistics on data generated by companies like Facebook and Twitter are provided. Applications of big data include improving science, healthcare, finance, and security. Advantages include access to vast information, while disadvantages include costs and privacy issues.
This document provides an overview of big data. It defines big data as large volumes of diverse data that are growing rapidly and require new techniques to capture, store, distribute, manage, and analyze. The key characteristics of big data are volume, velocity, and variety. Common sources of big data include sensors, mobile devices, social media, and business transactions. Tools like Hadoop and MapReduce are used to store and process big data across distributed systems. Applications of big data include smarter healthcare, traffic control, and personalized marketing. The future of big data is promising with the market expected to grow substantially in the coming years.
Talk presented by Pedro Mário Cruz e Silva, Solution Architect at NVIDIA, as part of the program of the VIII Semana de Inverno de Geofísica, on July 19, 2017.
This document provides an overview of big data including:
- It defines big data and discusses its key characteristics of volume, velocity, and variety.
- It describes sources of big data like social media, sensors, and user clickstreams. Tools for big data include Hadoop, MongoDB, and cloud computing.
- Applications of big data analytics include smarter healthcare, traffic control, and personalized marketing. Risks include privacy and high costs. Benefits include better decisions, opportunities for new businesses, and improved customer experiences.
- The future of big data is strong, with worldwide revenues projected to grow from $5 billion in 2012 to over $50 billion in 2017, creating millions of new jobs for data scientists and analysts.
Guest Lecture: Introduction to Big Data at Indian Institute of Technology - Nishant Gandhi
This document provides an introduction to big data, including definitions of big data and why it is important. It discusses characteristics of big data like volume, velocity, variety and veracity. It provides examples of big data applications in various industries like GE, Boeing, social media, finance, CERN, journalism, politics and more. It also introduces NoSQL and the CAP theorem, and concludes that big data is changing business and technology by enabling new insights from data to reduce costs and optimize operations.
IBM Selected by DOE to Develop Next Generation Systems
1. IBM Selected by DOE to Develop Next Generation Systems
Last Friday, the Department of Energy announced that IBM has been awarded a $325 million contract to
design and build two supercomputing systems at Lawrence Livermore and Oak Ridge National Laboratories,
validating IBM's OpenPOWER strategy. As part of the announcement, Secretary of Energy Ernest Moniz
and Sen. Lamar Alexander (R-Tenn.) hosted a press briefing at the U.S. Capitol Building with John Kelly
and multiple government leaders in attendance. Reporters from NBC, ABC and Politico were among
those present.
To amplify the news, we hosted a press teleconference that included executives from the labs and
OpenPOWER partners NVIDIA and Mellanox. Reporters from outlets ranging from the San Francisco
Chronicle to the International Business Times attended. We anticipate additional coverage this week.
Wire photo (from left): Dr. Thom Mason, Director, Oak Ridge National Laboratory; Dr. John E. Kelly III,
Senior Vice President and Director of IBM Research; Dr. Bill Goldstein, Director, Lawrence Livermore
National Laboratory. (Rich Riggins/Feature Photo Service for IBM)
The news was promoted internally through numerous blogs and in the seller community to reinforce that
the systems we are delivering to the labs are building blocks that are available today.
2. Notable Quotes
Computer designers are grappling with an explosion in the amount of data that users want to sift through.
Much of the computation conducted to locate underground oil deposits, for example, is in preparing data
for analysis rather than the analysis itself. To solve it, IBM added computing power to components—
including data storage and networking devices—that usually carry a light burden of calculation. That way,
the supercomputer can delegate heavier computing tasks to those components and reduce the amount of
data that must be transferred and processed by central processing units. -- Wall Street Journal
Both systems will be based on next-generation IBM POWER servers with NVIDIA's GPU accelerators and
Mellanox's interconnect technologies "to advance key research initiatives for national nuclear
deterrence, technology advancement and scientific discovery," the DOE said. -- Business Standard
Dubbed Sierra and Summit, the two GPU-accelerated supercomputers will rely on IBM's OpenPOWER
chips. -- ZDNet
Sierra and Summit will employ a new “data-centric” technique, which allows the computers to decentralize
the processing of data. Rather than sending data to and from a central processor chip, IBM says, the two
supercomputers will be able to interconnect thousands of different chips to mine data across a network. --
Christian Science Monitor
John Kelly, senior vice president at IBM and director of IBM research, told VentureBeat that the new
machines will put the U.S. at the top of the heap in having the world’s most powerful supercomputers.
And, because of the unique architecture, which puts the processors near the data, the U.S. will likely have
a lead in access to Big Data. He said that IBM refers to data as “the world’s next natural resource."
http://venturebeat.com/2014/11/14/ibm-and-nvidia-win-425m-to-build-two-monstrous-supercomputers-for-the-department-of-energy/ -- VentureBeat
"The beauty of the systems being developed for Lawrence Livermore and Oak Ridge is that the core
technologies are available today to organizations of many sizes across many industries," says Rosamilia.
-- International Business Times
Earned Coverage Highlights
3. 1ClickNews (via Engadget), IBM’s new computers may change how we process big data
610KDAL, U.S. government spending $425 million to build fastest supercomputers
Business Insider, The US Government Is Spending $425 Million To Build The World's Fastest
Supercomputers
Business Standard, US plans to build world's fastest supercomputers
Chattanooga Times Free Press, World's fastest supercomputer to be built at Oak Ridge National
Laboratory in Tennessee
The Chattanoogan, Alexander: World’s Fastest Computer “Once Again Will Be At Oak Ridge”
Chicago Tribune, U.S. government spending $425 million to build fastest supercomputers
Christian Science Monitor, IBM and Nvidia begin building the fastest supercomputers ever
CIO, US funds effort to regain supercomputing crown from China
CNET, IBM, Nvidia land $325M supercomputer deal
Datacenter Dynamics, US GOV'T TO BUILD WORLD’S FASTEST SUPERCOMPUTERS
Database Trends and Applications, U.S. Department of Energy Taps IBM to Develop Supercomputers
to Meet Big Data Challenges
EE Times, IBM to Build DoE's Next-Gen Coral Supercomputers
Engadget, IBM's new computers may change how we process big data
ExtremeTech, IBM and Nvidia will build two ultra-efficient 150-petaflop supercomputers for the DoE
FedScoop, Energy Department to build two new supercomputers, further exascale computing research
Forbes, IBM Signs $320 Million Supercomputing Deal With Dept. Of Energy
GigaOm, IBM to build two supercomputers for the U.S. Department of Energy
Global Post, U.S. government spending $425 million to build fastest supercomputers
GovConWire, DOE Taps IBM, Mellanox, Nvidia Tech for $325M Nat’l Lab Supercomputer Program
Huffington Post, U.S. Government Spending $425 Million To Build Fastest Supercomputers
IDG News Service (via Computerworld), IBM shares plans for supercomputing future
InfoWorld, IBM shares plans for supercomputing future
KFGO, U.S. government spending $425 million to build fastest supercomputers
PC Mag, IBM, Nvidia to Build Huge Supercomputers for U.S. Labs
PCWorld, US funds effort to regain supercomputing crown from China
PC World, IBM shares plans for supercomputing future
Phys.org, Oak Ridge to acquire next generation supercomputer
Ping! Zine, Department of Energy To Spend $425 Million On Building Two Supercomputers
Product Design & Development, DOE Awards $425M in Next-Gen Supercomputing Tech
NASDAQ, U.S. to Spend $425 Million on Advanced Supercomputers
Network World, IBM shares plans for supercomputing future
R&D Magazine, LLNL, IBM to deliver next-generation supercomputer
Re/Code, IBM and Nvidia to Help U.S. Government Build Two Seriously Super Computers
Reuters, U.S. government spending $425 mln to build fastest supercomputers
SlashGear, IBM and NVIDIA give US supercomputers a brain boost
TechSpot, IBM, Nvidia draw $325 million US Energy Department supercomputer contract
TechGage, US Department Of Energy To Deploy Two Flagship Supercomputers In 2017
The Register, US recruits IBM, Nvidia to CRUSH world's fastest supercomputer with 300PFLOPS beast
Tom’s IT Pro, IBM To Develop World’s First Data Centric Computing Systems
Wall Street Journal, U.S. to Spend $425 Million on Advanced Supercomputers
WHBL, U.S. government spending $425 million to build fastest supercomputers
WHTC, U.S. government spending $425 million to build fastest supercomputers
WKZO, U.S. government spending $425 million to build fastest supercomputers
VentureBeat, IBM and Nvidia win $425M to build two monstrous supercomputers for the Department of
Energy
Yahoo! News, U.S. government spending $425 million to build fastest supercomputers
ZDNet, IBM, Nvidia tapped to build world's fastest supercomputers
International Coverage Index – 11/14/14
Yahoo! Finance UK, U.S. government spending $425 million to build fastest supercomputers
The West Australian (via Reuters), U.S. government spending $425 million to build fastest supercomputers
4. Analyst Outreach Highlights
Briefings:
IDC - Matt Eastwood, Earl Joseph
Intersect360 - Addison Snell: The movement of data across a system has become the gating
factor. What IBM is proposing with its data-centric architecture is to intelligently locate
computational elements throughout the system -- integrated with storage components or
networking components where the data resides -- in order to optimize the workflow of a system
on an ongoing basis. -- EE Times
Cabot Partners - Srini Chari, Ajay Nair
Analyst Email Flash - Distributed to 1,063 analysts worldwide
Analyst Pieces:
Tech Guru Daily.com: NVIDIA and IBM Aim to Power the US into Supercomputer Leadership - Rob
Enderle, Enderle Group
Internal Resources
Doug Balog blog: U.S. Department of Energy selects POWER's data-centric design
Ken King blog: OpenPOWER momentum continues
Stephen Leonard blog: A milestone for OpenPOWER technology
Daniel Pelino blog: IBM Tapped by U.S. Department of Energy for Breakthrough ‘Data Centric’ Systems
Anne Altman blog: U.S. Department of Energy Selects IBM to Build the World's First ‘Data Centric’
Systems
Sales resources blog
Owned Coverage
Research Blog: Data-Centric Systems -- A New Paradigm For Computing
Infographic: A New Vision for Big Data Computing
Smarter Planet blog: Kelly: It’s Time to Shift to Data-Centric Computing
YouTube Video: The National Labs and the Rise of Data-Centric Computing