This document discusses how technical computing and high performance computing can help accelerate scientific discovery and address global challenges. It provides examples of how HPC is helping to make progress in areas like weather prediction, climate change research, energy exploration, and healthcare including genome mapping and cancer research. The document argues that technical computing will be crucial to achieving exascale computing capabilities and expanding access to HPC resources to further scientific insights.
This special report about achievements at the Oak Ridge and Argonne Leadership Computing Facilities—where researchers run simulations that are large, complex, and often unprecedented—may be of interest:
This document discusses the principles of reliability engineering. It begins by noting the importance of reliability in today's technological world. There are three main categories that drive the need for reliable products: economic, environmental, and performance compulsions. The document then defines reliability and discusses its quantification and optimization. It explains the various causes of failures and the three types of failures: quality, sudden, and wearout. The remainder of the document outlines the various phases of a product's reliability program, including design, manufacturing, storage, installation, usage, and maintenance. It emphasizes the importance of reliability throughout a product's entire lifecycle.
The document advertises an upcoming conference on June 28-29, 2010 about small modular nuclear reactors. It provides an agenda for the two-day conference including topics on developing small reactor designs, licensing issues, business outlook and international development. It also lists sponsors and participating organizations. Registering before May 21, 2010 saves $300 on the conference fee.
This document summarizes research from BSI Intel on designing technology for wonderment. It discusses tools, applications, and things that matter. Specific projects explored urban atmospheres and encounters with strangers in public spaces. One project involved following a stranger, like a 1969 piece by Vito Acconci. Another examined exposing human traces across cities by observing trash cans and leaving lost postcards to create connections between individuals and communities. A further project looked at mapping personal patterns in another way to view communities. It questions whether community still exists as society becomes more mediated by networks and media.
The document discusses challenges facing Europe like financial crisis, climate change, and democratic deficit. It proposes place-based innovation and smart specialization to address these issues through a participatory and emergent process focusing on social, technical, and territorial innovation. Social innovation involves responding to social needs through innovations that benefit vulnerable groups in society. Territorial innovation involves articulating regional strengths and macro-regional ecosystems. The creative cities and regions framework shows how innovation can be fostered through diversity, safety, identity, linkages, and organizational capacity. Innovation policy should be a creative and learning process that harnesses community resources and energies.
The document summarizes statistics about the WePC.com website which engaged users to submit and discuss ideas for new PC designs. Over 300,000 unique visitors generated over 6,000 ideas and comments. Popular ideas were selected and announced at CeBIT as potential designs for a crowdsourced PC. The site also drove significant related web traffic and press coverage, with over 18,000 clicks to retail offers from visitors.
In this deck from the 2014 HPC User Forum in Seattle, Jack Collins from the National Cancer Institute presents: Genomes to Structures to Function: The Role of HPC.
Watch the video presentation: http://wp.me/p3RLHQ-d28
(Em)Powering Science: High-Performance Infrastructure in Biomedical Science (Ari Berman)
We’ll explore current and future considerations in advanced computing architectures that empower the conversion of data into knowledge. The life sciences produce more data than any other major science domain, making analytics and scientific computing cornerstones of modern research programs and methodologies. We’ll highlight the remarkable biomedical discoveries that are emerging through combined efforts, and discuss where and how the right infrastructure can catalyze the advancement of human knowledge. On-premises, cloud, hybrid, and exotic architectures will all be discussed. It’s likely that all life science researchers will require advanced computing to perform their research within the next year. However, the industry has placed less focus on advanced computing infrastructure because public cloud and anything-as-a-service models have become so readily available.
This document discusses forces of change, continuity, and opportunities for the future. What changes includes governance, demographics, technology, and resources. What doesn't change is that life is commerce. The future includes the arrival of big data and systems that learn, more human interfaces, and new tools. Banks can enable goals, wealth creation, and act as economic engines. The zone of discovery involves novelty, problems solved through new thinking, and educable, scalable, widgetizable insights. The 5% rule suggests allocating a small percentage of time, like 2 hours per week, to exploring new opportunities.
The document discusses challenges in analyzing next generation sequencing (NGS) data from genome sequencing and the potential for real-time analysis using in-memory technologies. Specifically, it notes that conventional genome analysis can take days to weeks but the Hasso Plattner Institute has developed an in-memory approach that can perform alignment and variant calling on 10GB of sequencing data from 1000 genomes in under 45 minutes and enable interactive analysis in real-time. This approach uses an in-memory column-oriented database to store and query sequencing data without disk access for faster processing and analysis of genomic data.
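The core trick described here, keeping each field of the variant table as its own in-memory array so queries scan only the columns they touch, can be sketched in a few lines of Python. This is a minimal illustration with an invented schema, not the Hasso Plattner Institute's actual system.

```python
import numpy as np

# Toy column store: each field of the variant table is a separate in-memory array.
# (Invented schema for illustration; a real system adds dictionary encoding, etc.)
variants = {
    "chrom": np.array([1, 1, 2, 2, 7]),
    "pos":   np.array([12345, 99871, 555, 8100, 140453136]),
    "ref":   np.array(["A", "G", "T", "C", "A"]),
    "alt":   np.array(["T", "C", "C", "T", "T"]),
    "qual":  np.array([50.0, 12.0, 99.0, 30.0, 88.0]),
}

def query(chrom, min_qual):
    """Scan only the columns the predicate touches, entirely in memory."""
    mask = (variants["chrom"] == chrom) & (variants["qual"] >= min_qual)
    return {col: vals[mask] for col, vals in variants.items()}

hits = query(chrom=7, min_qual=30.0)
print(hits["pos"], hits["ref"], hits["alt"])  # -> [140453136] ['A'] ['T']
```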
White Paper: Life Sciences at RENCI, Big Data IT to Manage, Decipher and Info... (EMC)
This white paper explains how the Renaissance Computing Institute (RENCI) of the University of North Carolina uses EMC Isilon scale-out NAS storage, Intel processor and system technology, and iRODS-based data management to tackle Big Data processing, Hadoop-based analytics, security and privacy challenges in research and clinical genomics.
High Performance Computing and the Opportunity with Cognitive Technology (IBM Watson)
With the ability to reduce “time to insight” and accelerate research breakthroughs by providing immense computational power, high performance computing is becoming increasingly important in the marketplace. Meanwhile, cognitive technology has risen to prominence, similarly accelerating new insight, but through a very different approach - by analyzing previously ignored unstructured data, which accounts for 80% of new data created today.
By combining the raw computing power of the HPC market with the machine learning, natural language processing, and even computer vision techniques found within cognitive technology, there is a huge opportunity to accelerate breakthroughs and enable better decision making than ever before.
Watch the replay of the webinar: https://www.youtube.com/watch?v=Hxgieboj3W0
The document describes a novel therapy for treating atrial fibrillation using magnetic nanoparticles directed by an electromagnetic workstation. The therapy aims to target and treat neural ganglia to cure the arrhythmia with low risk. The company aims to become the world leader in magnetic targeted nanomedicine delivery. Their team has expertise in nanomedicine, electrophysiology, and commercialization. The therapy has the potential to be safer, more economical and curative compared to current invasive and limited treatments for atrial fibrillation.
Kennisalliantie New Year's reception, 31 January 2013:
Prof. dr. Jacob de Vlieg: “Taming the Big Data Beast Together”
CEO and scientific director of the Netherlands eScience Center (NLeSC)
1) HPC is being used in healthcare to quickly diagnose cancer through genomic analysis and to develop new drug therapies through simulations.
2) Pathwork Diagnostics uses cloud-based HPC to develop models that can diagnose cancer in months rather than years.
3) GNS Healthcare applies HPC to reverse engineer disease models and simulate new drug targets for conditions like cancer and diabetes.
Medical Imaging Seminar Company Presentations (Space IDEAS Hub)
Medical Imaging - Opportunities for Business Seminar
24/01/12
Short Company Presentations
14 companies took the opportunity to present a short sales pitch of their work and interests to the audience.
How novel compute technology transforms life science research (Denis C. Bauer)
Unprecedented data volumes and pressure on turnaround time driven by commercial applications require bioinformatics solutions to evolve to meet these new demands. New compute paradigms and cloud-based IT solutions enable this transition. Here I present two solutions capable of meeting these demands: VariantSpark for genomic variant analysis and GT-Scan2 for genome engineering applications.
VariantSpark classifies 3000 individuals, each with 80 million genomic variants, in under 30 minutes. This Hadoop/Spark solution for machine learning on genomic data can hence scale to population-size cohorts.
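VariantSpark is built on Spark's machine-learning stack; as a rough sketch of that pattern, the toy pyspark.ml example below trains a random forest on a small invented genotype matrix. It is not VariantSpark's actual API, and a real run would distribute millions of variant columns across a cluster.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("toy-variant-rf").getOrCreate()

# Toy cohort: one row per individual, one 0/1/2 genotype column per variant,
# plus a phenotype label. All values are invented for illustration.
rows = [
    (0, 0, 1, 2, 0.0),
    (1, 2, 0, 1, 1.0),
    (2, 1, 1, 0, 1.0),
    (0, 0, 2, 2, 0.0),
]
df = spark.createDataFrame(rows, ["v1", "v2", "v3", "v4", "label"])

# Pack genotype columns into a single feature vector, then fit a random forest.
assembler = VectorAssembler(inputCols=["v1", "v2", "v3", "v4"], outputCol="features")
model = RandomForestClassifier(numTrees=50).fit(assembler.transform(df))

# Feature importances hint at which variants drive the phenotype.
print(model.featureImportances)
```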
GT-Scan2 identifies CRISPR target sites by minimizing off-target effects and maximizing on-target efficiency. This optimization is powered by AWS Lambda functions, which offer an “always-on” web service that can instantaneously recruit enough compute resources to keep runtime stable even for queries with several thousand potential target sites.
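The serverless fan-out pattern can be sketched as a Lambda handler that scores one batch of candidate sites per invocation, so thousands of sites are processed in parallel. The handler below is a hypothetical illustration, not GT-Scan2's real code; the event shape and the GC-content "score" are invented stand-ins.

```python
# Hypothetical AWS Lambda handler: score one batch of candidate CRISPR target
# sites per invocation. A coordinator would invoke this function once per batch
# (e.g. via boto3's lambda client), fanning thousands of sites out in parallel.

def gc_content(seq):
    """Fraction of G/C bases; a stand-in for a real on/off-target score."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def handler(event, context):
    # event: {"sites": [{"id": ..., "seq": "20-mer + PAM"}, ...]}  (invented shape)
    results = []
    for site in event["sites"]:
        results.append({
            "id": site["id"],
            "on_target": gc_content(site["seq"]),         # toy efficiency proxy
            "off_target": 1.0 - gc_content(site["seq"]),  # toy specificity proxy
        })
    return {"results": results}

if __name__ == "__main__":  # local smoke test
    print(handler({"sites": [{"id": 1, "seq": "ACGTGGGCCCAAATTTGGCA"}]}, None))
```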
The document summarizes several projects undertaken by the HPC Lab, including developing software and algorithms for graph analysis on emerging platforms (CASS-MT), genome assembly (GALAXY), and RNA structure prediction (GTFold). It also mentions projects involving graph benchmarks (Graph500), dynamic graph packages for Intel platforms (STING), and phylogenetics research on the IBM Blue Waters supercomputer (PetaApps).
Beating Bugs with Big Data: Harnessing HPC to Realize the Potential of Genomi... (Tom Connor)
Introducing the HPC challenges associated with developing a set of clinical microbial genomics services in the NHS in Wales, demonstrating the potential of these technologies and the impact they are already having for the patients of the Welsh NHS.
The pulse of cloud computing with bioinformatics as an example (Enis Afgan)
The document discusses how cloud computing can enable large-scale genomic analysis by providing on-demand access to computational resources and petabytes of reference data. It describes how tools like Galaxy and CloudMan allow researchers to perform genomic analysis in the cloud through a web browser by automating the provisioning and configuration of cloud resources. This approach makes genomic research more accessible and enables the elastic scaling of analysis as needed.
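Under the hood, the provisioning that CloudMan automates reduces to API calls against a cloud provider. A minimal boto3 sketch of launching a single analysis node on EC2 follows; the AMI ID, key pair, and tag values are placeholders, and CloudMan's real logic (cluster configuration, Galaxy setup, autoscaling) is far more involved.

```python
import boto3

# Placeholders: substitute a real AMI ID, key pair, and region.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical Galaxy node image
    InstanceType="m5.2xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "galaxy-worker"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```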
Appalla Venkataprabhakar and I presented this at Oracle's Annual Clinical Development and Safety Conference 2010 in Hyderabad, India, on 6 October 2010.
DNA computers have the potential to replace silicon-based computers by storing vast amounts of data within DNA strands. DNA computers operate in parallel through chemical reactions rather than linearly like silicon. While early DNA computers were built from test tubes and gold plates, later examples include a 2002 gene-analysis biochip and a 2003 self-powered programmable computer. DNA computers could be smaller and more powerful than supercomputers, but current challenges include a lack of full accuracy and DNA degradation. Further development is still needed, but DNA computing shows promise for medical and data processing uses.
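To make the parallel-search idea concrete: Adleman's 1994 experiment encoded a Hamiltonian-path problem in DNA so that all candidate paths assembled at once in the test tube. The toy Python analogue below enumerates the same search serially on an invented four-node graph; it illustrates the problem, not the chemistry.

```python
from itertools import permutations

# Toy directed graph (invented), in the spirit of Adleman's 7-city experiment.
edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
n = 4

# In the wet-lab version, every vertex/edge is an oligonucleotide and all
# candidate paths self-assemble in parallel; here we enumerate them serially.
def hamiltonian_paths():
    for order in permutations(range(n)):
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            yield order

print(list(hamiltonian_paths()))  # -> [(0, 1, 2, 3)]
```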
Raai 2019 clinical unmet needs and its solutions of deep learning in medicine 3 (Namkug Kim)
Clinical unmet needs such as data imbalance, small datasets, and differences in multi-center trials can be addressed through techniques like data augmentation using Perlin noise or GANs, curriculum learning, and domain adaptation. Efficient labeling solutions like smart labeling using deep learning models can help address the challenge of expensive manual labeling. Interpretability, uncertainty quantification, and developing physics-informed machine learning approaches can help address the "black box" nature of deep learning models and improve deployment in clinical settings.
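As a concrete example of the augmentation idea, the sketch below perturbs an image with smooth, Perlin-style value noise (a low-resolution random grid upsampled to image size), which varies intensity patterns without destroying structure. It is a generic NumPy/SciPy illustration, not the presenter's exact method.

```python
import numpy as np
from scipy.ndimage import zoom

def value_noise(shape, grid=8, rng=None):
    """Smooth Perlin-style noise: random low-res grid upsampled bilinearly."""
    rng = rng or np.random.default_rng()
    coarse = rng.standard_normal((grid, grid))
    factors = (shape[0] / grid, shape[1] / grid)
    return zoom(coarse, factors, order=1)[: shape[0], : shape[1]]

def augment(image, strength=0.1, rng=None):
    """Add smooth noise scaled to the image's intensity spread."""
    noise = value_noise(image.shape, rng=rng)
    return image + strength * image.std() * noise

# Toy usage: augment a fake 64x64 scan with a bright square in the middle.
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
aug = augment(img, strength=0.2, rng=np.random.default_rng(0))
print(aug.shape, float(aug.min()), float(aug.max()))
```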
Similar to Accelerating the Pace of Discovery Technical Computing at Intel (20)
High Memory Bandwidth Demo @ One Intel Station (Intel IT Center)
Revolutionizing System Memory Bandwidth
The document discusses the growing need for memory bandwidth in applications such as HPC, 8K video, networking, and radar. It notes that current discrete solutions cannot meet the bandwidth needs of next-generation applications. The document then introduces Intel's Stratix 10 MX DRAM System-in-Package (SiP) which integrates DRAM with an Intel Stratix 10 FPGA. This solution provides up to 512 GB/s of peak memory bandwidth, addressing the bandwidth challenge and making it widely applicable in fields like HPC, military, and communications.
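For a sense of scale, 512 GB/s is roughly an order of magnitude beyond what a typical commodity host sustains. A quick NumPy copy benchmark like the sketch below (illustrative only; results vary widely by machine) estimates sustained memory bandwidth and makes that gap tangible.

```python
import time
import numpy as np

N = 100_000_000                # ~0.8 GB of float64 per buffer
src = np.ones(N)
dst = np.empty_like(src)

t0 = time.perf_counter()
np.copyto(dst, src)            # streams ~0.8 GB read + ~0.8 GB write
elapsed = time.perf_counter() - t0

bytes_moved = 2 * src.nbytes   # read src + write dst
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s sustained copy bandwidth")
```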
Disrupt Hackers With Robust User Authentication (Intel IT Center)
This document discusses the growing concern of security breaches and the need for robust user authentication. It argues that hardware-based security can better protect against hacking by securing identity, data, and threat prevention in hardware below the software layer. The document presents Intel's solution, Intel Authenticate, as a hardware-based, IT policy-managed multi-factor authentication approach that protects authentication factors, credentials, and policies in hardware to provide comprehensive identity and access protection.
Strengthen Your Enterprise Arsenal Against Cyber Attacks With Hardware-Enhanc... (Intel IT Center)
Jim Gordon of Intel discusses how data has become the most valuable asset for companies across industries. He notes that while investment in areas like healthcare, manufacturing, and cybersecurity often yield positive returns, returns on investments in cybersecurity remain negative due to rising costs of cybercrime. However, hardware-enhanced security solutions from Intel can help change this result by providing more effective protection for devices, networks, user identities and data.
Harness Digital Disruption to Create 2022’s Workplace Today (Intel IT Center)
The document discusses trends in the modern workplace including increased remote work, mobility, and collaboration facilitated by technology. It highlights how 64% of millennials work somewhere other than their primary job site and 80-90% of the US workforce would like to work remotely at least part-time. It also notes that the PC remains the heart of organizations, with 95% of respondents saying it would be their preferred device. The document advocates that businesses harness digital disruption to create today's workplace by focusing on remote workers, idea spaces, smart offices, changing work styles, and security for a mobile world.
Don't Rely on Software Alone. Protect Endpoints with Hardware-Enhanced Security. (Intel IT Center)
Learn how security solutions built into Intel® Core™ vPro™ processors address top threat vectors. Our comprehensive approach to hardware-enhanced security starts with identity protection with Intel® Authenticate delivering customizable multi factor authentication options, and supports remote remediation with Intel® Active Management Technology.
Achieve Unconstrained Collaboration in a Digital World (Intel IT Center)
Technology is at the center of every digitally-savvy workplace, yet organizations are constrained with bridging current tools to more modern solutions. This session from Gartner Digital Workplace Summit will cover a new way to facilitate employee collaboration that is easy, engaging and gives IT an uncompromised security and management experience.
Intel® Xeon® Scalable Processors Enabled Applications Marketing Guide (Intel IT Center)
The Future-Ready Data Center platform is here. Whether you navigate in the High Performance Computing, Enterprise, Cloud, or Communications spheres, you will find an Intel® Xeon® processor that is ready to power your data center now and well into the future. An innovative approach to platform design in the Intel® Xeon® Scalable processor platform unlocks the power of scalable performance for today’s data centers—from the smallest workloads to your most mission-critical applications. Powerful convergence and capabilities across compute, storage, memory, network and security deliver unprecedented scale and highly optimized performance across a broad range of workloads—from high performance computing (HPC) and network functions virtualization, to advanced analytics and artificial intelligence (AI). Many examples here show how our software partner ecosystem has optimized their applications and/or taken advantage of inherent platform enhancements to deliver dramatic performance gains that can translate into tangible business benefits.
#NABshow: National Association of Broadcasters 2017 Super Session Presentatio... (Intel IT Center)
At NAB, this session covered how technology will transform the way content is created and distributed and accelerate the rate of innovation in the industry. Intel, a revolutionary leader in technology and in transforming industries since 1968, works with other industry partners to enable the transition to new paradigms, infrastructures and technologies.
Join Jim Blakley, General Manager of Intel's Visual Cloud Division, and guests including Dave Ward (Chief Technology Officer, Cisco), AR Rahman (two-time Academy and Grammy Award winner), and Dave Andersen (School of Computer Science, Carnegie Mellon University) to learn more about how this revolution will make amazing visual cloud experiences possible for every person on Earth.
Making the digital workplace a reality requires a modern and strategic approach to identity protection. You will discover ways to build an IAM program that moves you from defense to offense. This presentation will offer practical guidance on how a hardware-based multi-factor authentication strategy is the future for identity protection.
Three Steps to Making a Digital Workplace a Reality (Intel IT Center)
The workplace is undergoing a dramatic evolution. Work styles are more mobile, changing the way we collaborate and share information while a more mobile workforce means a greater need to thwart cyber-attacks. You'll learn about Intel's three-part approach to help IT leaders sustainably embrace mobility and increase your security posture.
Three Steps to Making The Digital Workplace a Reality - by Intel’s Chad Const... (Intel IT Center)
The workplace is undergoing a dramatic evolution. Workstyles are more mobile, changing the way we collaborate and share information while a more mobile workforce means a greater need to thwart cyber-attacks. In this presentation, you'll learn about Intel's three-part approach to help IT leaders sustainably embrace mobility and increase your security posture.
Intel® Xeon® Processor E7-8800/4800 v4 EAMG 2.0 (Intel IT Center)
This set of Intel® Xeon® processor E7-8800/4800 v4 family proof points spans several key business segments. The Intel® Xeon® processor E7-8800/4800 v4 product family delivers the horsepower for real-time, high-capacity data analysis that can help businesses derive rapid actionable insights to deliver innovative new services and customer experiences. With high performance, industry’s largest memory, robust reliability, and hardware-enhanced security features, the E7-8800/4800 v4 is optimal for scale-up platforms, delivering rapid in-memory computing for today’s most demanding real-time data and transaction-intensive workloads.
Intel® Xeon® Processor E5-2600 v4 Enterprise Database Applications Showcase (Intel IT Center)
The Intel Xeon processor E5-2600 v4 product family delivers the high performance, increased memory, and I/O bandwidth required for all forms of enterprise databases, is ideal for next-generation application workloads, and is the powerhouse for software-defined infrastructure (SDI) environments where automation and orchestration capabilities are foundational. See how database solutions deployed on the Intel® Xeon® processor E5 v4 product family can deliver increased performance and throughput, as demonstrated by key software partners.
Intel® Xeon® Processor E5-2600 v4 Core Business Applications Showcase (Intel IT Center)
Designed for architecting next-generation, software-defined data centers, the Intel® Xeon® processor E5-2600 v4 product family is supercharged for efficiency, performance, and agile services delivery across cloud-native and traditional applications. Intel® Intelligent Power Technology automatically regulates power consumption to combine industry-leading energy efficiency with intelligent performance that adapts to your workloads.
Intel® Xeon® Processor E5-2600 v4 Financial Security Applications Showcase (Intel IT Center)
The Intel® Xeon® processor E5-2600 v4 product family delivers efficient resource utilization, service tiering, and optimal quality of service (QoS) levels for financial applications by processing faster transactions and delivering exceptional uptime and availability and reduced latency, providing a high-performing, highly scalable system for your most demanding workloads. Enhanced cryptographic speed with two new instructions for Intel® AES-NI for improved security, and the Intel® SSD Data Center Family for NVMe represents optimized management for the future software-defined data centers with industry standard software and drivers.
Intel® Xeon® Processor E5-2600 v4 Telco Cloud Digital Applications Showcase (Intel IT Center)
Cloud and telecommunication companies can deliver better end user experiences while improving cost models across their data centers with the Intel® Xeon® processor E5-2600 v4 product family. See how innovative technologies can deliver high throughput, low latency and more agile delivery of network services to the software-defined data center. Additionally, unparalleled versatility across diverse workloads, such as 4K video processing, editing, and decoding and encoding where improved bandwidth and reduced latency provide noticeable performance improvements.
Intel® Xeon® Processor E5-2600 v4 Tech Computing Applications Showcase (Intel IT Center)
Where breakthrough performance is expected, the Intel® Xeon® processor E5-2600 v4 product family, a key ingredient of the Intel® Scalable System Framework and the software-defined data center, is designed to deliver better performance and performance per watt than ever before. The combination of Intel Xeon processors, Intel® Omni-Path Architecture, Intel Solutions for Lustre* software, and storage technologies improves bandwidth and reduces latency, providing a high-performing, highly scalable system for your most demanding workloads.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into a serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
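A minimal shape for that pipeline: compute embeddings per Spark partition, then push each partition's vectors into Milvus. The sketch below assumes pymilvus's MilvusClient interface, a Milvus server at localhost, and an existing collection named "docs"; the embedding function is a stand-in for a real model.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-to-milvus").getOrCreate()
docs = spark.createDataFrame(
    [(1, "vector databases"), (2, "spark etl")], ["id", "text"]
)

def embed(text):
    # Stand-in embedding; a real job would call a model here.
    return [float(ord(c) % 7) for c in text[:8].ljust(8)]

def write_partition(rows):
    # One Milvus connection per Spark partition (assumes pymilvus >= 2.3).
    from pymilvus import MilvusClient
    client = MilvusClient(uri="http://localhost:19530")
    data = [{"id": r["id"], "vector": embed(r["text"])} for r in rows]
    if data:
        client.insert(collection_name="docs", data=data)

docs.foreachPartition(write_partition)
```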
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
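To illustrate the kind of computation such an engine performs, here is a toy DC power-flow solve in NumPy: given line susceptances and nodal injections, it solves for bus angles and reports line flows. This is a generic textbook sketch on an invented 3-bus network, not the Power Grid Model API.

```python
import numpy as np

# Toy 3-bus DC power flow (illustrative; not the Power Grid Model API).
# Lines: (from bus, to bus, susceptance in p.u.)
lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]
P = np.array([0.9, -0.4, -0.5])  # net injection per bus (generation - load)

B = np.zeros((3, 3))             # nodal susceptance matrix
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Bus 0 is the slack/reference bus (angle = 0); solve the reduced system.
theta = np.zeros(3)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

for i, j, b in lines:
    print(f"flow {i}->{j}: {b * (theta[i] - theta[j]):+.3f} p.u.")
```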
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system (a minimal exporter sketch follows this list).
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
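As a companion to item 8 above, a minimal Prometheus exporter looks like the sketch below: it publishes an anomaly-score gauge on an HTTP endpoint for a Prometheus server to scrape. It uses the real prometheus_client library, but the metric name and scoring loop are invented for illustration.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Invented metric: current anomaly score reported by the edge model.
ANOMALY_SCORE = Gauge("edge_anomaly_score", "Latest anomaly score from the model")

def latest_score():
    # Stand-in for a real model inference on fresh sensor data.
    return random.random()

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        ANOMALY_SCORE.set(latest_score())
        time.sleep(5)
```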
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a free SAP software asset management tool for customers.
SAM4U, SAP's complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage through a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Accelerating the Pace of Discovery Technical Computing at Intel
1. Accelerating the Pace of Discovery
Technical Computing at Intel
John Hengeveld
Director of High Performance Computing Strategy
Intel Corporation
@IntelXeon
8. Big Data, HPC, and Cancer
Alex Bayen, Armando Fox, Michael Franklin, Michael Jordan, Anthony Joseph, Randy Katz, David Patterson, Ion Stoica, Scott Shenker
UC Berkeley
September, 2011
9. Big Data is …
Massive
• Facebook: 200-400 TB/day: 83 million pictures
• Google: > 25 PB/day processed data
Growing
• More devices (cell phones), more people (3rd world), bigger disks (2TB/$100)
Dirty
• Diverse, No Schema, Uncurated, Inconsistent Syntax and Semantics
10. “Big Data”: Working Definition
When the normal application of current technology doesn’t enable users to obtain timely and cost-effective answers of sufficient quality to data-driven questions
[Figure: trade-off triangle of TIME, MONEY, and QUALITY]
Challenge: Use machine learning Algorithms, HPC/cloud computing Machines, and crowd-sourced People to extract value from Big Data while decreasing the cost of maintaining it
Interesting Big Data for Academic Research?
11. Big Data Opportunity
The Cancer Genome Atlas (TCGA)
• 20 cancer types, 500 tumors each: 5 petabytes
• David Haussler (UCSC) Datacenter online Oct 1
• OK to place Berkeley cluster next to 5 PB cluster
• Novelty: Academic Access yet Important Big Data
Slide from David Haussler, UCSC, “Cancer Genomics,” AMP retreat, 5/24/11
12. TCGA Potential Impact?
“We fully expect that 10 years from now, each cancer patient is going to want to get a genomic analysis of their cancer and will expect customized therapy based on that information.”
Brad Ozenberger, TCGA program director
“Cracking Cancer's Code,” Time Magazine, June 2, 2011
13. Big Data, Genomics, and Cancer
¼ US deaths, 7M/year worldwide
⅓ US women will get cancer
½ US men will get cancer
Cancer: perversion of normal cell
Limitless growth, evolves, spreads
Cancer is a genetic disease
Accidental DNA cell copy flaws + carcinogen-based mutations lead to cancer
14. 5 Steps to Customized Therapy
1. Sequencing machine that identifies many 300 base pair segments from a cancer tumor
2. Create full genome sequence of the cancer tumor from many segments (“alignment”)
3. Verify correctness of this sequence
4. Insights from comparing many other sequences and treatments to the tumor genome
5. Diagnose and suggest therapeutic targets for cure or non-progression based on tumor genome comparisons and patient records
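Step 2 above, alignment, is at its core placing short reads against a reference sequence. The toy exact-match aligner below uses a k-mer seed index; real aligners handle mismatches, quality scores, and genome-scale indexes, so this is only a conceptual sketch.

```python
from collections import defaultdict

def build_index(reference, k=4):
    """Map every k-mer in the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def align(read, reference, index, k=4):
    """Seed with the read's first k-mer, then verify an exact extension."""
    hits = []
    for pos in index.get(read[:k], []):
        if reference[pos:pos + len(read)] == read:
            hits.append(pos)
    return hits

ref = "ACGTACGTTAGGCTAACGT"
idx = build_index(ref)
print(align("TAGGCT", ref, idx))  # -> [8]
```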
15. 1. Sequencing Machine Costs
Improving Faster Than Moore’s Law
[Chart: cost per genome falls from ~$1,000,000 in 2007 to ~$10,000 in 2009 and ~$1,000 in 2012-13]
2007 wet lab processing problem => 2012 digital processing problem
Looks like sequencing machines not the bottleneck in speed or cost
16. Information Technology Obstacle?
“There is a growing gap between the generation of massively parallel sequencing output and the ability to process and analyze the resulting data. New users are left to navigate a bewildering maze of base calling, alignment, assembly and analysis tools with often incomplete documentation and no idea how to compare and validate their outputs. Bridging this gap is essential, or the coveted $1,000 genome will come with a $20,000 analysis price tag.”
John D McPherson, “Next-generation gap,” Nature Methods, October 15, 2009
17. 2 & 3. Full sequence and verification
UCSF prototype using open source SW, Cloud, Hadoop, Hypertable, Berkeley tools
>1 year on PC to <1 day in the cloud
18. 4. Compare Sequences
Machine Learning + Data Analytics
Cloud Programming Frameworks and Storage
(AMP Lab strengths)
5. Clinical Diagnosis
Suggest effective therapeutic targets for cure or stabilization
• Often events are relatively rare mutations, and not identifiable by traditional statistical methods (solutions lie in long tail of rare mutations: tiny needle in huge haystack)
Use Artificial Artificial Intelligence (Crowd Sourcing)?
• Imitate success of Astronomy’s Galaxy Zoo or Biology’s FoldIt?
19. An Opportunity or Obligation?
Given increasing genomic databases, are the next breakthroughs in the cancer fight as likely to come from computer scientists as from biological scientists?
If it is plausible that CS could help millions of cancer patients live longer and better lives, as moral people, aren't we obliged to try?
(If we need more motivation: a huge future industry for customized, genome-based medicine?)
20. Comp Sci and Pasteur’s Quadrant
Two questions classify research: is it a quest for fundamental understanding, and is it inspired by consideration of use?
• Understanding yes, use no: Pure Basic Research (Bohr)
• Understanding yes, use yes: Use-inspired Basic Research (Pasteur). Attack Big Data by Helping Fight Cancer?
• Understanding no, use yes: Pure Applied Research (Edison)
From Pasteur's Quadrant: Basic Science and Technological Innovation, Donald E. Stokes, 1997
Slide from “Engineering Education and the Challenges of the 21st Century,” Charles Vest, 9/22/09
34. Nuclear Security • International and Domestic Security • Energy and Environmental Security
Basic Science • Engineering • Computing
35. [Diagram: LLNL mission areas (Basic Science, Cyber Security, NIF, Climate, Nuclear Counterterrorism, Energy Security) built on Predictive Integrated Codes, Physics and Engineering Models, Experimental Verification and Validation, and Computers]
36. Addressing barriers of HPC adoption:
• Lack of expertise
• Lack of appropriate software
• Cost
Building new programs through application of HPC and computational science: Building Energy Efficiency, Carbon Capture, Electric Grid
37. International Center for High Energy Density and Inertial Fusion Energy Science
• Campus-like environment with collaborative space
• Building-level security
• Ready access for all partners, including foreign nationals
• Wireless capability and unclassified computing
• Synergy with community plans for economic growth
Co-located capabilities: National Ignition Facility; High Performance Computing Capabilities and Facilities; Transportation Energy Center; Combustion Research Facility
38. …engaging with U.S. industry to enhance American economic competitiveness by promoting the adoption of high performance computing.
Lowering the barriers for adoption of HPC:
• High cost of entry
• Lack of appropriate software
• Shortage of skilled personnel
Deliver true business solutions for our industrial partners
Build and nurture an HPC innovation ecosystem in the Livermore Valley Open Campus
[Pyramid: HPC at the Labs, 100,000+ CPUs; computer and computational scientists, 1,000-10,000 CPUs; community of industrial computational users, 10-100 CPUs]
39. [Diagram: a software stack of Frameworks, Domain Specific Software, and Optimized HPC Libraries, built through leading edge development by HPC-IC partners, co-design of reusable software with Vendors, and LLNL programs, delivering ensembles of calculations (e.g. Uncertainty Quantification, UQ), a rapid prototyping, development, and deployment pipeline, portable and scalable software, and competitive advantage]