The University of Adelaide is a world-class tertiary education and research institution committed to delivering high quality and distinct learning, teaching and research experiences. A member of the Group of Eight (Go8), Australia’s eight leading research universities, the University of Adelaide consistently ranks in the top 1 percent of universities worldwide. Over 25,000 students and 3,500 members of staff work across four main campuses.
When it needed a new supercomputer, it turned to Lenovo.
Umeå University -- Supercharging research to enable ground-breaking innovation | Lenovo Data Center
Satisfying researchers’ appetite for bigger, better, faster computing resources is never easy. With its new Lenovo supercomputer, High Performance Computing Center North (HPC2N) at Umeå University can deliver on these demands – boosting performance fivefold to support innovative computational and data-intensive research.
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit... | HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
CloudLighting - A Brief Overview presented by Prof John Morrison at the Fifth National Conference on Cloud Computing and Commerce (NC4 2016).
The presentation covered the project's funding and consortium, its specific challenge, typical IaaS cloud usage, the project's goals and ambitions, the CloudLighting architecture, its beneficiaries, and the challenges ahead.
This presentation was prepared by Abdussamad Muntahi for the Seminar on High Performance Computing on 11/7/13 (Thursday), organized by the BRAC University Computer Club (BUCC) in collaboration with the BRAC University Electronics and Electrical Club (BUEEC).
Queen’s University -- Powerful research with ultra-efficient supercomputer | Lenovo Data Center
The Centre for Advanced Computing at Queen’s University boosted the performance of its HPC environment by a factor of five with a high-density Lenovo supercomputer. Equipped with high-performance Intel® Xeon® processors, the Lenovo cluster gives researchers the power they need to crunch data faster and get results quicker, accelerating time to insight.
Exascale Computing Project - Driving a HUGE Change in a Changing World | inside-BigData.com
In this video from the OpenFabrics Workshop in Austin, Al Geist from ORNL presents: Exascale Computing Project - Driving a HUGE Change in a Changing World.
"In this keynote, Mr. Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. In August 2016, the Exascale Computing Project (ECP) was approved to support a huge lift in the trajectory of U.S. High Performance Computing (HPC). The ECP goals are intended to enable the delivery of capable exascale computers in 2022 and one early exascale system in 2021, which will foster a rich exascale ecosystem and work toward ensuring continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA."
Learn more: https://exascaleproject.org/
and
https://www.openfabrics.org/index.php/abstracts-agenda.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Paul Messina from Argonne presented this deck at the HPC User Forum in Santa Fe.
"The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of high-performance computing (HPC) for the United States and accelerating the development of a capable exascale computing ecosystem. Exascale refers to computing systems at least 50 times faster than the nation’s most powerful supercomputers in use today. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA)."
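As a back-of-the-envelope check (my arithmetic, not the deck's), the "at least 50 times faster" figure follows directly from the definitions, assuming roughly a 20-petaflop peak for the fastest US system of the era:

```python
# Illustrative values only: one exaflop versus an assumed ~20 PF peak system.
exaflops = 1e18          # one exaflop: 10**18 floating-point operations per second
petaflops_2016 = 20e15   # assumption for illustration: ~20 petaflops
speedup = exaflops / petaflops_2016
print(speedup)  # 50.0
```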
Watch the video: http://insidehpc.com/2017/04/update-exascale-computing-project-ecp/
Learn more: https://exascaleproject.org/
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
High Performance Computing in the cloud is viable in numerous use cases. Common to all successful use cases for cloud-based HPC is the ability to embrace latency. Not surprisingly, then, early successes were achieved with embarrassingly parallel HPC applications involving minimal amounts of data - in other words, there was little or no latency to be hidden. In the fullness of time, however, the HPC-cloud community has become increasingly adept at 'hiding' latency and, in the process, at supporting ever more sophisticated HPC use cases in public and private clouds. Real-world use cases, deemed relevant to remote sensing, will illustrate these approaches to hiding latency when handling large volumes of data, passing messages between simultaneously executing components of distributed-memory parallel applications, and running (processing) workflows/pipelines. Finally, the impact of containerizing HPC for the cloud will be considered through the relatively recent creation of the Cloud Native Computing Foundation.
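A minimal sketch (my own illustration, not from the talk) of the embarrassingly parallel pattern the abstract describes: fully independent tasks with no inter-task communication, so there is effectively no latency to hide:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(seed: int) -> float:
    """Stand-in for one independent HPC task, e.g. a single Monte Carlo trial."""
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % 2**31  # simple LCG; no I/O, no messages
    return x / 2**31

if __name__ == "__main__":
    # Tasks scatter freely across workers because they share no state.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, range(8)))
    print(len(results))  # 8
```

Distributed-memory applications that must pass messages mid-computation (e.g. via MPI) are the harder case the abstract goes on to discuss.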
The purpose of the lab is to impart the latest skills required for job opportunities in many industries. It helps faculty members develop their skills, publish papers in international conferences, and innovate solutions.
KEK helps scientists uncover the mysteries of the universe with Lenovo superc... | Lenovo Data Center
To begin to uncover the origins of the universe, scientists at the High Energy Accelerator Research Organization (KEK) need reliable access to high-performance computing resources. With a cluster based on Lenovo NeXtScale nx360 M5 nodes, KEK can give scientists access to the powerful compute resources needed to run complex data analysis – and further their research.
Technical computing (high-performance computing) used to be the domain of specialists using expensive, proprietary equipment. Today, technical computing is going mainstream, becoming an irreplaceable competitive tool for research scientists and businesses alike.
Here's a look at Dell’s pioneering role in the evolution of technical computing, with a focus on the key industry trends and technologies that will bring the next generation of tools and functionality to research and development organizations around the world.
In this deck from the HPC User Forum in Santa Fe, Peter Hopton from Iceotope presents: European Exascale System Interconnect & Storage.
"A new Exascale computing architecture using ARM processors is being developed by a European consortium of hardware and software providers, research centers, and industry partners. Funded by the European Union’s Horizon2020 research program, a full prototype of the new system is expected to be ready by 2018."
The project, called ExaNeSt, is based on ARM processors, originally developed for mobile and embedded applications - similar to another EU project, Mont Blanc, which also aims to design a supercomputer architecture around ARM. Where ExaNeSt differs from Mont Blanc, however, is its focus on networking and on the design of applications. ExaNeSt is co-designing the hardware and software, enabling the prototype to run real-life evaluations - facilitating a stable, scalable platform that will be used to encourage the development of HPC applications for this ARM-based supercomputing architecture.
Watch the video:
Learn more: http://www.iceotope.com/
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Paul Messina presented this deck at the HPC User Forum in Austin. "The Exascale Computing Project (ECP) is a collaborative effort of two US Department of Energy (DOE) organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA). As part of President Obama’s National Strategic Computing initiative, ECP was established to develop a new class of high-performance computing systems that will be a thousand times more powerful than today’s petaflop machines. ECP’s work encompasses applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE."
Watch the video presentation: http://wp.me/p3RLHQ-fIC
Learn more: http://insidehpc.com/ecp
Application Report: Big Data - Big Cluster Interconnects | IT Brand Pulse
ParAccel's leading analytics platform runs on industry-standard hardware and integrates industry-standard database tools and applications, so one of the company's biggest challenges is to architect and test hardware (servers, storage, interconnects) that makes its software perform at its peak. In this case, it achieved its mission of eliminating a cluster bottleneck by implementing 10GbE NICs, providing the bandwidth needed today and well into the future.
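For context (my arithmetic, not the report's), the raw capacity of a single 10GbE link:

```python
# 10 Gigabit Ethernet: line rate in bits, converted to bytes per second.
line_rate_bits = 10e9                # 10 Gb/s
bytes_per_second = line_rate_bits / 8
print(bytes_per_second / 1e9)        # 1.25 (GB/s raw, before protocol overhead)
```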
NVIDIA DEEP LEARNING INFERENCE PLATFORM PERFORMANCE STUDY | TECHNICAL OVERVIEW

Introduction

Artificial intelligence (AI), the dream of computer scientists for over half a century, is no longer science fiction—it is already transforming every industry. AI is the use of computers to simulate human intelligence. AI amplifies our cognitive abilities—letting us solve problems where the complexity is too great, the information is incomplete, or the details are too subtle and require expert training.

While the machine learning field has been active for decades, deep learning (DL) has boomed over the last five years. In 2012, Alex Krizhevsky of the University of Toronto won the ImageNet image recognition competition using a deep neural network trained on NVIDIA GPUs—beating all the human expert algorithms that had been honed for decades. That same year, recognizing that larger networks can learn more, Stanford’s Andrew Ng and NVIDIA Research teamed up to develop a method for training networks using large-scale GPU computing systems. These seminal papers sparked the “big bang” of modern AI, setting off a string of “superhuman” achievements. In 2015, Google and Microsoft both beat the best human score in the ImageNet challenge. In 2016, DeepMind’s AlphaGo recorded its historic win over Go champion Lee Sedol and Microsoft achieved human parity in speech recognition.

GPUs have proven to be incredibly effective at solving some of the most complex problems in deep learning, and while the NVIDIA deep learning platform is the standard industry solution for training, its inferencing capability is not as widely understood. Some of the world’s leading enterprises from the data center to the edge have built their inferencing solution on NVIDIA GPUs. Some examples include:
As the Federation of Southern Co-operatives celebrates its 50th Anniversary, Executive Director Cornelius Blanding joined the NFCA's Sixth Annual Meeting to reflect on the role of co-operation in movements for Civil Rights, Black land retention, and community empowerment, and opportunities for collaboration and solidarity in a new political environment.
There's no question: finding the most profitable ecommerce buyer traffic on Facebook can be frustrating. Different shoppers have different needs and respond better to different advertising messages. But with Facebook’s audience targeting capabilities, you can attract highly targeted shoppers, regain lost customers, and sell upgrades and new products to existing customers.
This is an Iterable User Engagement Teardown comparing Uber and Lyft's user engagement strategies in the first 2 weeks post-signup.
After evaluating all emails and texts received, we identify what these companies do well and where there is room for improvement. Everything shown in the slides (and any recommendations) can be implemented with Iterable's Growth Marketing Platform.
To view more User Engagement Teardowns, visit http://iterable.com/teardown
CSC - Centro de Serviço Compartilhado - from concept to implementation | CompanyWeb
Our goal is to present and practice the planning of a CSC (Shared Services Center), step by step, through all of its phases, pointing out the main critical success factors and how to overcome the obstacles.
All the steps toward a successful CSC will be explored in the course, from infrastructure to operations, through the management of SLAs (service level agreements).
See the challenges, the gains, the structure, and how to implement your CSC in 'waves' (phases).
‘Protectionism’ has been the subject of much discussion in both political circles and the mainstream media. New heads of state are increasingly willing to pursue policies with a clear ’home-bias’, and are adopting a more critical view of the rise of globalisation which has defined the past 50 years.
Should new protectionist policies take hold globally, what are the risks for fixed income investors, and how should portfolios be positioned?
Copycamp2017 - Pavel "Mari" Martinovský - Content diseases and how to cure them | H1.cz
The imaginary invalid: is your content in good shape, or is it actually doing poorly? From the rich lexicon of online diseases, I will pick out the most common ones and their symptoms. And how do you get rid of them? Pavel Martinovský will try to sketch an answer to that question with his reflections on what today's copywriter should be able to do, what they should take an interest in, and where they should be heading.
CINECA -- Providing the intelligence to solve our biggest challenges | Lenovo Data Center
Artificial Intelligence. Machine learning. Automation. Researchers from CINECA are pushing the boundaries of science and technology, supported by a cutting-edge supercomputer from Lenovo.
Stay up-to-date with the OpenACC Monthly Highlights. July's edition covers the OpenACC Summit 2021, GCC, upcoming GPU Hackathons and Bootcamps, Sunita Chandrasekaran named as PI for SOLLVE Project, recent research and more!
To support vital scientific research in fields as diverse as astrophysics, biomedicine and climate science, SciNet beefed up its high-performance computing resources with a Lenovo ThinkSystem supercomputer 10 times more powerful than its predecessor.
High Performance Computing Infrastructure as a Key Enabler to Engineering Des... | NSEAkure
#sunshine2015 High Performance Computing Infrastructure as a Key Enabler to Engineering Design by Kola Oyeniran at the NSE conference in #thedome in #Akure #Nigeria
Server TCO Showdown -- Lenovo x3950 X6 and IBM Storwize V7000 vs HP Superdome | Lenovo Data Center
In evaluating a decision between an HP Itanium server and a Lenovo x86 (Mission Critical X6) system, this paper describes the advantages of the X6 server.
The paper summarizes the business case of a representative mid-sized consumer company running mission-critical applications in an environment requiring IT expansion. It finds that a four-server Lenovo System x3950 X6 solution offers superior performance AND 55% hardware, software and operating expense savings compared to the equivalent four-server HP Superdome 2 solution.
Applying Cloud Techniques to Address Complexity in HPC System Integrations | inside-BigData.com
In this video from the HPC User Forum at Argonne, Arno Kolster from Providentia Worldwide presents: Applying Cloud Techniques to Address Complexity in HPC System Integrations.
"The Oak Ridge Leadership Computing Facility (OLCF) and technology consulting company Providentia Worldwide recently collaborated to develop an intelligence system that combines real-time updates from the IBM AC922 Summit supercomputer with local weather and operational data from its adjacent cooling plant, with the goal of optimizing Summit’s energy efficiency. The OLCF proposed the idea and provided facility data, and Providentia developed a scalable platform to integrate and analyze the data."
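The core integration idea (aligning independently timestamped feeds on a common clock) can be sketched as an "as-of" join. Everything below is invented for illustration; the names, fields, and values are hypothetical and this is not Providentia's code:

```python
from bisect import bisect_right

# Two independently timestamped feeds: machine telemetry and local weather.
telemetry = [(0, 310.0), (60, 312.5), (120, 311.0)]   # (epoch sec, power kW)
weather   = [(30, 21.0), (90, 23.5)]                  # (epoch sec, outdoor temp C)

def latest_before(series, t):
    """Most recent reading at or before time t (the as-of join rule)."""
    times = [ts for ts, _ in series]
    i = bisect_right(times, t)
    return series[i - 1][1] if i else None

# Each telemetry sample is paired with the weather reading in effect at that time.
joined = [(t, kw, latest_before(weather, t)) for t, kw in telemetry]
print(joined)  # [(0, 310.0, None), (60, 312.5, 21.0), (120, 311.0, 23.5)]
```

At production scale the same rule would be applied by a streaming platform rather than in-memory lists, but the alignment semantics are the same.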
Watch the video: https://wp.me/p3RLHQ-kOg
Learn more: http://www.providentiaworldwide.com/
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The Science and Technology Facilities Council (STFC) is a publicly funded research council based in the United Kingdom, and one of the largest multi-disciplinary research organizations in Europe.
And it needed a new supercomputer to service a variety of academic, commercial, industrial and governmental stakeholders.
Optimize Machine Learning Workloads on Intel® PlatformsIntel® Software
In this lecture with live code modification components, we showcase distributed deep learning on an Intel® Xeon Phi™ processor cluster with Intel® Omni-Path Architecture. It targets developers of all skill levels, and is designed to give a brief but hands-on introduction to the machine learning frameworks with Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) enhancements.
Start with a brief introduction to machine learning frameworks that are optimized with the new Intel® MKL-DNN. Develop a simple deep learning image recognition application using the framework. Observe how the computational performance of this application scales while adding compute nodes.
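The scaling observation in the last step can be approximated on a single machine. This is my own sketch, not the lecture's code: it times a fixed CPU-bound workload as the worker count grows, a process-level stand-in for adding compute nodes:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def work(n: int) -> int:
    """CPU-bound stand-in for one unit of training computation."""
    return sum(i * i for i in range(n))

def run(workers: int, tasks: int = 8, n: int = 200_000) -> float:
    """Wall-clock time to finish a fixed batch of tasks with a given worker count."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    for w in (1, 2, 4):
        # Wall time should shrink as workers are added, until overhead dominates.
        print(w, round(run(w), 3))
```

Real multi-node scaling adds interconnect effects (hence the Omni-Path focus in the lecture), so single-machine curves only hint at cluster behavior.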
This is a presentation by Prof. Anne Elster at the International Workshop on Open Source Supercomputing held in conjunction with the 2017 ISC High Performance Computing Conference.
China Mobile is leading the way in the race for next-generation 5G mobile internet technology. With strong support from Lenovo, the company is testing out a ground-breaking new network architecture, and will introduce AI into base stations and provide users with more personalized access services that will allow them to surf, stream and scroll to their hearts’ content.
Lenovo -- Geared up for global growth with streamlined supply chain operations | Lenovo Data Center
Supply chain management is something all businesses need to get right if they want to be successful. That’s why Lenovo teamed up with SAP to replace inefficient legacy systems with an all-new supply chain management platform running on SAP HANA – enabling near real-time reporting and massive performance gains.
North Carolina State University -- Harnessing Artificial Intelligence and big... | Lenovo Data Center
Thanks to access to the Lenovo Artificial Intelligence (AI) Innovation Center, researchers at NC State University are pushing the boundaries of geospatial research – all in pursuit of answers to some of the biggest challenges we face in the 21st century.
HiQ Finland -- Supporting sophisticated, user-friendly smartphone app develop... | Lenovo Data Center
To give developers the resources they need to build exciting new mobile apps for clients, HiQ implemented a hyperconverged infrastructure from Lenovo and Nutanix. Now, it can spin up virtual environments in no time, empowering developers at every stage of the app-building process.
Oblakoteka -- Capitalizing on soaring demand for cloud computing | Lenovo Data Center
Cloud computing services… aren’t they all the same? Think again. Oblakoteka soars above its competitors with super-scalable cloud services backed up by an ultra-flexible, software-defined storage infrastructure based on Lenovo and Microsoft technology.
With each of its global subsidiaries running its own systems, how could Callaway keep enough control over IT to ensure smooth business operations? Together, Lenovo and Nutanix technology standardized and simplified Callaway’s global, 24/7 IT operations.
Barcelona Supercomputing Center -- Pushing the boundaries of human knowledge | Lenovo Data Center
Climate change. Energy security. Fighting disease. Air quality. To help scientists turn mountains of data into accurate models of our complex world, BSC has powered up a Lenovo supercomputer capable of performing trillions of computations per second.
Lenovo preserves the past with futuristic solutions – Six Nations Polytechnic | Lenovo Data Center
For 25 years, Six Nations Polytechnic (SNP) has impacted the lives of students in their community through Indigenous education and language revitalization programs. Growing into a second campus, the postsecondary institution needed a new IT infrastructure that could easily expand with their needs. With the help of I/OVision, the school implemented the Lenovo Converged HX3500, powered by Nutanix software and Intel® Xeon® E5 family of processors. By tightly integrating SNP's technologies in a seamless manner, Lenovo is helping preserve the ancient languages of the past through the innovative coding languages of the future.
Taiyuan Wusu Comprehensive Bonded Zone supports business growth and the boomi...Lenovo Data Center
As the Chinese economy continues to grow at an astonishing pace, customs organizations such as the Taiyuan Wusu Comprehensive Bonded Zone play an increasingly important role. By migrating its core business systems to a cutting-edge hyperconverged infrastructure from Lenovo and Nutanix, powered by Intel® Xeon® processors, Taiyuan gained the scalability and agility it needed to keep pace with growing numbers of imports and exports, and help maintain the overall health of the Chinese economy.
Industrial Bank unleashes innovation with Lenovo and NutanixLenovo Data Center
Industrial Bank protected its competitive edge by giving its research and design (R&D) teams the tools to work more effectively, without letting costs spiral out of control. Offering the engineers virtual desktops running on Lenovo Converged HX3310 appliances powered by Nutanix Enterprise Cloud Platform software and Intel® Xeon® processors, the bank simplified management of the environment while raising performance; enabling easier compliance, lower costs and, most importantly, boosting the productivity of R&D to keep innovation at the top of its agenda.
R&D engineers working for Industrial Bank now benefit from significantly reduced response times, offering a better-quality user experience that fosters greater productivity.
Gulftainer sets sail for easier expansion with SAP S/4HANA, MACH and LenovoLenovo Data Center
With its sights set on becoming one of the world’s major container terminal operators, how could Gulftainer ensure that it could set up new sites quickly and easily? The company migrated its SAP business applications and Marine and Container Handling (MACH) system to SAP HANA, deploying two Lenovo System servers to support the mission-critical environment, as well as Lenovo Flex System at its corporate office in Sharjah, UAE, and the Khorfakkan Container Terminal to support its Terminal Operating System (TOS). Today, Gulftainer can scale its server infrastructure as and when needed to meet demanding growth requirements, while deeper insight into operations improves business decision-making.
Storage helps us help our customers scale: and when it comes to top-of-rack switch networking, we like to bring our "A" game. In this presentation, our own Jim Whitten walks through a high-level overview of our storage and networking solutions.
The Apache Spark config behind the indsutry's first 100TB Spark SQL benchmarkLenovo Data Center
Some configurations deserve their own SlideShare entry: this is one of them. When the indsutry's first 100TB Spark SQL benchmark was reached, the media took notice. For good reason.
Intel, Mellanox, Lenovo and IBM came together to investigate a topology that leveraged advances in CPU, memory, storage and networking to assess the readiness of Spark SQL to harness new capabilities -- and speeds.
This is the Lenovo 1 and 2 socket rack and tower server customer presentation for the completely new and enhanced portfolio. It describes the portfolio's value proposition and key points to remember. Highlights benefits and features of products in each of the main portfolio categories: entry, mainstream, and performance. Showcases targeted workloads and optimized use cases, including big data, analytics, virtualization, and infrastructure.
The Lenovo Storage S Series Arrays bring enterprise-class performance and features to small and medium businesses at an affordable price. These features are easily managed through Lenovo SAN Manager software. This brief highlights one of the new features available - Asynchronous Replication
AR is a licensable feature that allows for creating remote copies of a virtual volume, virtual volume-group, or snapshot on a remote system.
2. CASE STUDY
Overview
For research center CERFACS, finding better solutions to scientific problems requires more complex simulations and, ultimately, more powerful computing resources. Resolving to triple the power of its internal computing cluster, the organization found the optimal balance between throughput and total cost of ownership in a Lenovo supercomputer with 6,000 cores. Enabling up to 30 percent performance gains and 50 percent energy savings, the solution is helping CERFACS lead the way in research innovation.
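As a back-of-the-envelope sanity check (illustrative only, assuming the headline figures combine independently against the previous cluster), a 30 percent throughput gain delivered at half the energy works out to roughly a 2.6x improvement in performance per unit of energy:

```python
# Hypothetical check of the case study's headline figures.
throughput_gain = 1.30   # up to 30% more performance
energy_ratio = 0.50      # up to 50% energy savings, i.e. half the energy

# Performance per unit of energy relative to the old cluster
perf_per_energy = throughput_gain / energy_ratio
print(f"Performance per unit energy: {perf_per_energy:.1f}x")
```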
A leading research organization in France, CERFACS (Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique) aims to develop advanced methods for numerical simulations and algorithmic solutions for scientific problems of interest to research and industry. The CERFACS shareholders are Airbus Group, CNES, EDF, Météo France, Onera, Safran, and Total.
Endless appetite for resources
CERFACS relies on multiple layers of computational resources in its work: an internal computing cluster; allocated hours of computation on the infrastructure of national partners Météo-France, CEA-CCRT and GENCI; and international supercomputing resources provided by PRACE and INCITE. Naturally, there are limitations on the work the organization can do using external resources, so the internal cluster is critical to CERFACS' success.
Nicolas Monnier, CIO at CERFACS, explains: “As the simulations we use increase in complexity, so do our requirements for computing power. On average, our computational requirements increase by a factor of 1.8 every year. Our clients are seeing their demand for resources grow at the same rate—to retain our position as a research leader, we need to periodically update our internal cluster.”
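Compounding that stated 1.8x annual growth shows why periodic upgrades are unavoidable: demand nearly sextuples within three years. An illustrative calculation (not from the case study itself):

```python
# Compound the stated 1.8x annual growth in computational requirements
# over a few years to see how quickly demand outgrows a fixed cluster.
annual_factor = 1.8

for years in (1, 2, 3):
    demand = annual_factor ** years
    print(f"After {years} year(s): {demand:.2f}x the original requirement")
```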
“The Lenovo solution stood out from the competition, demonstrating the optimal ratio of computational throughput to cost.”
—Nicolas Monnier, CIO, CERFACS
CERFACS (Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique)
EDUCATION / ACADEMIC RESEARCH
3. CASE STUDY | EDUCATION / ACADEMIC RESEARCH
One example where CERFACS urgently required greater computing power was in climate modelling. The organization had developed a model based on a geographical mesh, which divides the earth’s surface into 150-kilometer-square sections.
“Major geographical features that happened to fall entirely inside a grid square in the 150 km mesh—for example, the gap between the Pyrenees and the Massif Central mountain range (the Seuil de Naurouze)—were effectively not taken into account during the computations,” says Nicolas Monnier. “To build a better understanding of the factors that influence climate, we needed to move to a much more detailed grid with 50-kilometer-square sections, which dramatically increases the required computational power.”
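The jump in cost is easy to see with a rough scaling estimate (a sketch under simple assumptions—uniform horizontal refinement and a CFL-limited timestep; a real climate model's scaling will differ):

```python
# Rough scaling estimate for refining a climate mesh from 150 km to 50 km.
coarse_km, fine_km = 150, 50
refinement = coarse_km / fine_km        # 3x finer in each horizontal direction

cell_factor = refinement ** 2           # 9x more grid cells to compute
timestep_factor = refinement            # CFL condition: ~3x smaller timestep
compute_factor = cell_factor * timestep_factor

print(f"{cell_factor:.0f}x grid cells, ~{compute_factor:.0f}x total compute")
```

Even this simplified estimate suggests the finer grid demands on the order of 27 times more computation, which is why the cluster upgrade was essential.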
CERFACS has also invested hundreds of man-years into developing its in-house AVBP code, co-developed with IFPEN for piston-engine aspects and used to simulate combustion in turbines and engines. AVBP—a CPU-bound workload—accounts for 75 percent of the organization’s computing and is used by shareholders and university labs in their own research. As CERFACS delves deeper into coupled simulations that model different components within an engine in parallel, its need for greater internal computing resources is constantly growing.
Choosing the best
Resolving to triple the power of its internal cluster, CERFACS began looking for a new high-performance computing solution that could support greater numbers of more complex simulations, while meeting a number of other selection criteria. The solution needed to offer good value for money, rapid deployment, high reliability and—since climate change is a key area of interest for CERFACS—good energy efficiency.
Nicolas Monnier continues: “We evaluated a number of solutions, asking vendors to commit to certain levels of performance against benchmarks for our different workloads. The Lenovo solution stood out, demonstrating the optimal ratio of computational throughput to cost.”
CERFACS engaged Lenovo to help deploy the new cluster, based on 252 Lenovo NeXtScale nx360 M5 compute nodes with 6,000 cores, connected using InfiniBand fabric and with IBM Spectrum Scale (IBM GPFS) technology providing file-based storage. The solution includes two nodes featuring NVIDIA Quadro K600 graphics cards for post-processing visualization. Equipped with Intel Xeon Haswell processors, the Lenovo solution offers excellent performance. Nicolas Monnier comments: “Another key factor in our choice of the Lenovo solution was the expertise of the Lenovo HPC technical team. Lenovo helped us to integrate the new solution rapidly and smoothly, and the team was fully committed to making our project a success.”
Solution components
Hardware
Lenovo NeXtScale nx360 M5
DDN SFA7700
Software
IBM Spectrum Scale (IBM GPFS)