Advances in cell biology are generating immense amounts of data, and they are converging with advances in machine learning to analyze that data. Biology is experiencing its AI moment, driving the massive computation involved in understanding biological mechanisms and designing interventions. Learn how cutting-edge technologies such as Software Guard Extensions (SGX) in the latest Intel Xeon processors and Open Federated Learning (OpenFL), an open framework for federated learning developed by Intel, are helping advance AI in gene therapy, drug design, disease identification, and more.
AI for good: Scaling AI in science, healthcare, and more | Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency | S... | Intel® Software
oneDNN Graph API extends oneDNN with a graph interface which reduces deep learning integration costs and maximizes compute efficiency across a variety of AI hardware including AI accelerators. Get started on your AI Developer Journey @ software.intel.com/ai.
Advanced Techniques to Accelerate Model Tuning | Software for AI Optimization... | Intel® Software
Learn about the algorithms and associated implementations that power SigOpt, a platform for efficiently conducting model development and hyperparameter optimization. Get started on your AI Developer Journey @ software.intel.com/ai.
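SigOpt's actual optimizers are more sophisticated than what fits here (Bayesian methods among them); the sketch below shows only the generic black-box tuning loop such a platform automates. The objective function and search space are made-up placeholders, not SigOpt's API.

```python
import random

# Hypothetical objective: validation loss as a function of two
# hyperparameters (learning rate and regularization strength).
def validation_loss(lr, reg):
    # Made-up smooth function with its minimum near lr=0.1, reg=0.01.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        # Sample log-uniformly, as is common for these hyperparameters.
        lr = 10 ** rng.uniform(-4, 0)
        reg = 10 ** rng.uniform(-4, 0)
        loss = validation_loss(lr, reg)
        if best is None or loss < best[0]:
            best = (loss, lr, reg)
    return best

loss, lr, reg = random_search()
print(f"best loss {loss:.4f} at lr={lr:.3g}, reg={reg:.3g}")
```

A real tuning service replaces the random sampler with a model of the objective that proposes promising configurations, but the surrounding loop looks the same.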
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
Medical images (CT scans, X-rays) must be segmented to identify the region of interest; the areas of interest must then be classified for diagnosis and reporting. Applied to lung disease diagnosis from chest X-rays and CT scans, segmentation and classification can be a tedious process. AI can help! Wipro used deep learning to develop a medical image segmentation and diagnosis solution running on Intel's AI platform.
Review state-of-the-art techniques that use neural networks to synthesize motion, such as mode-adaptive neural networks and phase-functioned neural networks. See how next-generation CPUs with reinforcement learning can offer better performance.
Talk presented by Pedro Mário Cruz e Silva, Solution Architect at NVIDIA, as part of the program of the VIII Semana de Inverno de Geofísica (8th Geophysics Winter Week), on 19/07/2017.
Join us on Friday, July 16th, 2021, for our newest workshop with DoMS, IIT Roorkee: Concept to Solutions using the OpenPOWER Stack. It's time to discover advances in #DeepLearning tools and techniques from the world's leading innovators across industry, research, and public speaking.
Register here:
https://lnkd.in/ggxMq2N
Learn about the benefits of joining the NVIDIA Developer Program and the resources available to you as a registered developer. This slideshare also provides the steps of getting started in the program as well as an overview of the developer engagement platforms at your disposal. developer.nvidia.com/join
This presentation covers two use cases using OpenPOWER systems:
1. Diabetic retinopathy using AI on NVIDIA Jetson Nano: the objective is to classify the diabetic retinopathy level from the retina image alone, in a remote area with minimal doctor intervention. The model uses the VGG16 network architecture and is trained from scratch on POWER9. The model was deployed on the Jetson Nano board.
2. Classifying COVID positivity using lung X-ray images: the idea is to build ML models to detect positive cases from X-ray images. The model was trained on POWER9, and the application was developed in Python.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sept-2018-alliance-vitf-khronos
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group, delivers the presentation "Update on Khronos Standards for Vision and Machine Learning" at the Embedded Vision Alliance's December 2017 Vision Industry and Technology Forum. Trevett shares updates on recent, current and planned Khronos standardization activities aimed at streamlining the deployment of embedded vision and AI.
This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019.
Please find the recording here: https://youtu.be/60o3eyG5OLM
TechWiseTV Workshop: Improving Performance and Agility with Cisco HyperFlex | Robb Boyd
Find out how organizations like yours are deriving business value from the HyperFlex HCI solution. Join us for a deep dive and Q&A at the TechWiseTV workshop.
TechWiseTV Hyperflex 4.0 Episode: http://cs.co/9009EW2Td
This issue’s feature article, Tuning Autonomous Driving Using Intel® System Studio, illustrates how the tools in Intel System Studio give embedded systems and connected device developers an integrated development environment to build, debug, and tune performance and power usage. Continuing the theme of tuning edge applications, Building Fast Data Compression Code for Cloud and Edge Applications shows how to use the Intel® Integrated Performance Primitives to speed data compression.
This webinar covers what a digital twin is and how all stakeholders can benefit from its functionality. You will learn how model-based systems engineering (MBSE) enables digital engineering. Your host will discuss use cases, take a realistic look at digital engineering and digital twins, and show how you can use Innoslate to get started.
The Agenda
Here's what we're covering.
What is a Digital Twin
Benefits of Digital Twin
The Digital Engineering Path Enabled by MBSE
AR + MBSE Software
A More Realistic Digital Twin
Getting You Started with Digital Twins
Question & Answer Session
For the full video of this presentation, please visit:
https://www.embedded-vision.com/industry-analysis/video-interviews-demos/socs-computer-vision-enabled-iot-devices-march-2019-silicon
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Bing Yu, Senior Technical Manager at MediaTek, delivers the presentation "SoCs for Computer Vision-enabled IoT Devices," at the Embedded Vision Alliance's March 2019 Silicon Valley Meetup. Yu introduces MediaTek’s line of SoCs for computer-vision-enabled IoT devices.
Accelerate AI with Synthetic Data Using GANs | Renee Yao
Presentation from the Strata Data Conference, September 2018.
Description:
Synthetic data will drive the next wave of deployment and application of deep learning in the real world, across a variety of problems involving speech recognition, image classification, object recognition, and language. All industries and companies stand to benefit: synthetic data can create conditions through simulation instead of authentic situations (virtual worlds let you avoid the cost of damages, spare human injuries, and sidestep other risks), and it offers an unparalleled ability to test products, and interactions with them, in any environment.
Join us for this introductory session to learn more about how Generative Adversarial Networks (GAN) are successfully used to improve data generation. We will cover specific real-world examples where customers have deployed GAN to solve challenges in healthcare, space, transportation, and retail industries.
Renee Yao explains how generative adversarial networks (GAN) are successfully used to improve data generation and explores specific real-world examples where customers have deployed GANs to solve challenges in healthcare, space, transportation, and retail industries.
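The adversarial setup behind these examples can be made concrete with a deliberately tiny sketch: a one-dimensional GAN in plain NumPy with hand-derived gradients, where a linear generator learns to mimic samples from N(4, 1) against a logistic discriminator. All settings here are made up for illustration; this is not any model from the talk.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b with z ~ N(0,1);
# discriminator D(x) = sigmoid(w*x + c). Target data ~ N(4, 1).
rng = np.random.default_rng(0)
a, b = 1.0, 0.0            # generator parameters
w, c = 0.1, 0.0            # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    # Discriminator: descend  -log D(real) - log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * (-np.mean((1 - d_real) * x_real) + np.mean(d_fake * x_fake))
    c -= lr * (-np.mean(1 - d_real) + np.mean(d_fake))

    # Generator: descend the non-saturating loss  -log D(fake)
    d_fake = sigmoid(w * x_fake + c)
    grad_x = -(1 - d_fake) * w         # d(loss)/d(x_fake) per sample
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean {samples.mean():.2f} (target 4.0)")
```

Real synthetic-data GANs replace the two linear maps with deep networks and automatic differentiation, but the alternating update loop is the same.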
Adapting to a Cambrian AI/SW/HW explosion with open co-design competitions an... | Grigori Fursin
Slides from ARM's Research Summit'17 about "Community-Driven and Knowledge-Guided Optimization of AI Applications Across the Whole SW/HW Stack" (http://cKnowledge.org/repo , http://cKnowledge.org/ai , http://tinyurl.com/zlbxvmw , https://developer.arm.com/research/summit )
Co-designing the whole AI/SW/HW stack in terms of speed, accuracy, energy consumption, size, cost, and other metrics has become extremely complex, long, and costly. With no rigorous methodology for analyzing performance and accumulating optimisation knowledge, we are simply destined to drown in the ever-growing number of design choices, system features, and conflicting optimisation goals.
We present our novel community-driven approach to solve the above problems. Originating from natural sciences, this approach is embodied in Collective Knowledge (CK), our open-source cross-platform workflow framework and repository for automatic, collaborative and reproducible experimentation. CK helps organize, unify and share representative workloads, data sets, AI frameworks, libraries, compilers, scripts, models and other artifacts as customizable and reusable components with a common JSON API.
CK helps bring academia, industry, and end-users together to gradually expose optimisation choices at all levels (e.g., from parameterized models and algorithmic skeletons to compiler flags and hardware configurations) and autotune them across diverse inputs and platforms. Optimisation knowledge is continuously aggregated in public or private repositories such as cKnowledge.org/repo in a reproducible way, and can then be mined and extrapolated to predict better AI algorithm choices, compiler transformations, and hardware designs.
We also demonstrate how we use this approach in practice together with ARM and other companies to adapt to a Cambrian AI/SW/HW explosion by creating an open repository of reusable AI artifacts, and then collaboratively optimising and co-designing the whole deep learning stack (software, hardware and models).
Enabling Artificial Intelligence - Alison B. Lowndes | WithTheBest
An overview and update of our hardware and software offering and support provided to the Machine & Deep Learning Community around the world.
Alison B. Lowndes, AI DevRel, EMEA
Hire a Machine to Code - Michael Arthur Bucko & Aurélien Nicolas | WithTheBest
Bucko and Nicolas share their vision and products, and explain what Deckard is. They provide insights from the software development team. They believe coding can resolve the problems we face; specifically, source coding is the solution they teach and place their hopes in for fixing human errors.
Michael Arthur Bucko & Aurélien Nicolas
How Can AI and IoT Power the Chemical Industry? | Xiaonan Wang
AI, IoT and Blockchain tech briefing to the industry to showcase our research at NUS.
by Dr. Xiaonan Wang
Assistant Professor
NUS Department of Chemical & Biomolecular Engineering
Structuring Big Data results creates new information: Smart Data. Smart Data can be used to advance knowledge and support decision-making processes. Close cooperation between industry and science creates better conditions for cutting-edge research in Data Engineering/Smart Data.
Meg Mude, Intel - Data Engineering Lifecycle Optimized on Intel - H2O World S... | Sri Ambati
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/cnU6sqd31JU
Developing meaningful AI applications requires complete data lifecycle management. Sourcing, harvesting, labelling, and ensuring the conduit to consume data structures and repositories are critical for model accuracy... but this is one of the least talked-about subjects. Intel's optimized technologies enable efficient delivery of complete data samples to develop (and deploy) meaningful outcomes. During this session, we'll review the considerations and criticality of data lifecycle management for the AI production pipeline.
Bio: Meg brings more than 17 years of global product, engineering and solutions experience. She is presently a Solutions Architect with Intel Corporation specializing in Visual Compute and AAI (Analytics and AI) Architecture. She is passionate about the potential for technology to improve the quality of peoples’ lives and humanity on the whole.
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2JrUYLl.
Alison Lowndes talks about the HW & SW that comprise NVIDIA's GPU computing platform for AI, across PC to data center, cloud to edge, training to inference. She details current state-of-the-art research & recent internal work combining robotics with virtual reality & reinforcement learning in an end-to-end simulator for training and testing robots. Filmed at qconlondon.com.
Alison Lowndes is responsible for NVIDIA's Artificial Intelligence Developer Relations in the EMEA region. She consults on a wide range of AI applications, including planetary defence with NASA & the SETI Institute and continues to manage the community of AI & Machine Learning researchers around the world.
The Internet of Things, Productivity, and Employment | Alex Krause
Presentation by Bob Cohen of the Economic Strategy Institute. Cohen's presentation discusses how technology changes and the internet of things will impact productivity, jobs and employment.
In the quest for making FPGA technology more accessible to the software community, Xilinx recently released PYNQ, a framework for Zynq that relies on Python and overlays to ease the integration of functionalities of the programmable logic into applications. In this work, we build upon this framework to enable transparent hardware acceleration for scientific computations for Zynq. We do so by providing a custom NumPy library designed for PYNQ, as it is the de-facto most used library within Data Science applications written in Python. We then demonstrate the effectiveness of the proposed approach with a use case on audio signal alignment.
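The audio signal alignment use case can be illustrated in plain NumPy, independent of any PYNQ overlay: a standard approach is to estimate the delay between two signals as the lag that maximizes their cross-correlation. This is a software-only sketch with synthetic signals, not the paper's accelerated implementation.

```python
import numpy as np

def align_lag(reference, delayed):
    """Estimate the integer lag of `delayed` relative to `reference`
    as the argmax of the full cross-correlation."""
    corr = np.correlate(delayed, reference, mode="full")
    # Index 0 of `corr` corresponds to a lag of -(len(reference) - 1).
    return int(np.argmax(corr)) - (len(reference) - 1)

# Synthetic demo: a noise-like reference, and the same signal
# shifted right by 25 samples with a little added noise.
rng = np.random.default_rng(1)
ref = rng.standard_normal(500)
sig = np.roll(ref, 25) + 0.05 * rng.standard_normal(500)

print(align_lag(ref, sig))  # → 25
```

On a Zynq device, the element-wise multiply-accumulate at the heart of the correlation is exactly the kind of NumPy operation the paper offloads transparently to programmable logic.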
Talk slides from my annual address at the Bio-IT World Expo & Conference where I cover trends, best practices and emerging pain points for life science focused HPC, scientific computing and "research IT"
Email "chris@bioteam.net" if you want a PDF copy of these slides. I've disabled the raw powerpoint download option on slideshare.
This is a talk about Big Data, focusing on its impact on all of us. It also encourages institutions to take a close look at providing courses in this area.
This talk gives an introduction to healthcare use cases, the AI ladder, and "Lifecycle AI at Scale" themes. It discusses the iterative nature of the workflow and some of the important components to be aware of when developing AI healthcare solutions. It also covers the different types of algorithms and when machine learning might be more appropriate than deep learning, or the other way around. Example use cases are shared as part of this presentation.
A modified k-means algorithm for big data clustering | SK Ahammad Fahad
The amount of data grows every moment, and this data comes from everywhere: social media, sensors, search engines, GPS signals, transaction records, satellites, financial markets, e-commerce sites, etc. This large volume of data may be semi-structured, unstructured, or even structured, so it is important to derive meaningful information from such huge data sets. Clustering is the process of categorizing data such that items are grouped in the same cluster when they are similar according to specific metrics. In this paper, we work on the k-means clustering technique to cluster big data. Several methods have been proposed for improving the performance of the k-means clustering algorithm. We propose a method that makes the algorithm less time-consuming and more effective and efficient, for better clustering with reduced complexity. According to our observation, the quality of the resulting clusters depends heavily on the selection of the initial centroids and on changes in cluster membership over subsequent iterations. After a certain number of iterations, only a small portion of the data points change their clusters. Therefore, our proposed method first finds the initial centroids and then separates the data elements that will not change their cluster from those that may change it in subsequent iterations, which reduces the workload significantly for very large data sets. We evaluate our method on different data sets and compare it with other methods as well.
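The paper's exact interval-based heuristic isn't reproduced above, but its core idea (stop re-assigning points whose cluster membership has stabilized) can be sketched as follows. The "freeze after `patience` stable iterations" rule and the deterministic farthest-first seeding are simplified stand-ins, not the authors' method.

```python
import numpy as np

def farthest_first_centers(X, k):
    """Deterministic seeding: start from X[0], then repeatedly take the
    point farthest from all chosen centers (a k-means++-style variant)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def kmeans_with_freezing(X, k, iters=50, patience=3):
    """k-means where points whose assignment has been stable for
    `patience` consecutive iterations are frozen and skipped."""
    centers = farthest_first_centers(X, k)
    labels = np.full(len(X), -1)
    stable = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        active = stable < patience
        if not active.any():
            break                       # every point is frozen
        d = np.linalg.norm(X[active, None, :] - centers[None, :, :], axis=2)
        new = np.argmin(d, axis=1)
        changed = new != labels[active]
        stable[active] = np.where(changed, 0, stable[active] + 1)
        labels[active] = new
        # Centroids are still recomputed from *all* points, frozen or not.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Demo: two well-separated 2-D blobs.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)), rng.normal(5.0, 0.5, (100, 2))])
centers, labels = kmeans_with_freezing(X, k=2)
```

As the abstract notes, the savings come from the distance computation shrinking to only the still-active points once most memberships have settled.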
Python Data Science and Machine Learning at Scale with Intel and Anaconda | Intel® Software
Python is the number one language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, Scikit-Learn, Modin, Pandas, and XGBoost directly from the Anaconda repository through simple installation and minimal code changes.
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci | Intel® Software
Preprocess, visualize, and build AI faster at scale on Intel architecture. Develop end-to-end AI pipelines for inference, including data ingestion, preprocessing, and model inference with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
Software AI Accelerators: The Next Frontier | Software for AI Optimization Su... | Intel® Software
Software AI Accelerators deliver orders of magnitude performance gain for AI across deep learning, classical machine learning, and graph analytics and are key to enabling AI Everywhere. Get started on your AI Developer Journey @ software.intel.com/ai.
AWS & Intel Webinar Series - Accelerating AI Research | Intel® Software
Scale your research workloads faster with Intel on AWS. Learn how the performance and productivity of Intel Hardware and Software help bridge the gap between ideation and results in Data Science. Get started on your AI Developer Journey @ software.intel.com/ai.
Advanced Single Instruction Multiple Data (SIMD) Programming with Intel® Impl... | Intel® Software
Explore practical elements, such as performance profiling, debugging, and porting advice. Get an overview of advanced programming topics, like common design patterns, SIMD lane interoperability, data conversions, and more.
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses... | Intel® Software
Explore how to build a unified framework based on FFmpeg and GStreamer to enable video analytics on all Intel® hardware, including CPUs, GPUs, VPUs, FPGAs, and in-circuit emulators.
RenderMan*: The Role of Open Shading Language (OSL) with Intel® Advanced Vect... | Intel® Software
This talk focuses on the newest release in RenderMan* 22.5 and its adoption at Pixar Animation Studios* for rendering future movies. With native support for Intel® Advanced Vector Extensions, Intel® Advanced Vector Extensions 2, and Intel® Advanced Vector Extensions 512, it includes enhanced library features, debugging support, and an extensive test framework.
ANYFACE*: Create Film Industry-Quality Facial Rendering & Animation Using Mai... | Intel® Software
ANYFACE* brings film industry-quality facial rendering and animation to mainstream PC platforms using novel approaches to create face details and control microsurfaces. The solution enables users to create high-fidelity game character facial models using photogrammetry.
Ray Tracing with Intel® Embree and Intel® OSPRay: Use Cases and Updates | SIG... | Intel® Software
Explore practical examples of Intel® Embree and Intel® OSPRay in production rendering and the best practices of using the kernels in typical rendering pipelines.
Use Variable Rate Shading (VRS) to Improve the User Experience in Real-Time G... | Intel® Software
Variable-rate shading (VRS) is a new feature of Microsoft DirectX* 12 and is supported on the 11th generation of Intel® graphics hardware. Get an overview and learn best practices, recommendations, and how to modify traditional 3D effects to take advantage of VRS.
Bring the Future of Entertainment to Your Living Room: MPEG-I Immersive Video... | Intel® Software
Explore the proposed Metadata for Immersive Video (MIV) standard specification. MIV enables real-world content captured by cameras to be viewed by users with Six Degrees of Freedom (6DoF) movement, similar to a VR experience with synthetic content.
In this presentation, we describe a heuristic for modifying the structure of sparse deep convolutional networks during training. The heuristic allows us to train sparse networks directly and reach accuracies on par with those obtained by compressing/pruning big dense models. We show that exploring the network structure during training is essential to reach the best accuracies, even when the optimal network structure is known a priori.
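The presentation's specific heuristic isn't spelled out above, but a common form of this "explore the structure during training" idea is a prune-and-regrow step at constant sparsity: drop the smallest-magnitude active weights, then re-activate the same number of inactive positions. The sketch below is a generic stand-in, shown on a single weight matrix.

```python
import numpy as np

def prune_and_regrow(w, mask, frac=0.2, rng=None):
    """One structure-update step at constant sparsity: drop the `frac`
    smallest-magnitude active weights, then re-activate the same number
    of previously inactive positions with small random values."""
    if rng is None:
        rng = np.random.default_rng(0)
    flat_w, flat_m = w.ravel(), mask.ravel()   # views into w and mask
    active = np.flatnonzero(flat_m)
    inactive = np.flatnonzero(~flat_m)
    n_swap = max(1, int(frac * active.size))
    # Prune: smallest |w| among active positions.
    drop = active[np.argsort(np.abs(flat_w[active]))[:n_swap]]
    flat_m[drop] = False
    flat_w[drop] = 0.0
    # Regrow: random previously-inactive positions, re-initialized near zero.
    grow = rng.choice(inactive, n_swap, replace=False)
    flat_m[grow] = True
    flat_w[grow] = rng.normal(0.0, 0.01, n_swap)
    return w, mask

# Demo on a 16x16 layer kept at 75% sparsity (64 of 256 weights active).
rng = np.random.default_rng(7)
mask = np.zeros((16, 16), dtype=bool)
mask.ravel()[rng.choice(256, 64, replace=False)] = True
w = rng.normal(size=(16, 16)) * mask
w, mask = prune_and_regrow(w, mask, rng=rng)
print(int(mask.sum()))  # → 64: sparsity is unchanged
```

In a full training loop, such a step would run every few hundred optimizer updates, letting the sparse topology migrate toward important connections without ever materializing a dense model.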
Intel® AI: Non-Parametric Priors for Generative Adversarial Networks | Intel® Software
This presentation proposes a novel prior, derived using basic theorems from probability theory and off-the-shelf optimizers, to improve the fidelity of image generation using GANs by interpolating along any Euclidean straight line, without any additional training or architecture modifications.
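The talk's prior itself is not reproduced here, but the basic operation it improves (interpolating between two latent codes along a Euclidean straight line and decoding each point) can be sketched with a stand-in generator; `fake_generator` is a hypothetical fixed mapping, not a trained model.

```python
import numpy as np

def interpolate_latents(z0, z1, steps=8):
    """Points on the Euclidean straight line between latent codes z0 and z1."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.array([(1 - t) * z0 + t * z1 for t in ts])

# Stand-in "generator": any fixed mapping from latent space to sample space.
def fake_generator(z):
    return np.tanh(z @ np.full((z.shape[-1], 4), 0.1))

z0, z1 = np.zeros(16), np.ones(16)
path = interpolate_latents(z0, z1)
samples = np.array([fake_generator(z) for z in path])
print(samples.shape)  # → (8, 4)
```

Naive straight-line interpolation is known to produce midpoints with atypically small norm for high-dimensional Gaussian latents (which concentrate near a sphere), which is presumably the kind of prior/path mismatch the proposed method addresses.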
How to Position Your Globus Data Portal for Success: Ten Good Practices | Globus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. They have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, the Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks with different strengths and foci have evolved and been taken up by larger communities, such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and of meeting the ever-expanding needs of the communities they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become full production services, or, if they do, that they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that any gateway needs a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation details ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Exploring Innovations in Data Repository Solutions: Insights from the U.S. G... | Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Why React Native as a Strategic Advantage for Startup Innovation.pdf | ayushiqss
Did you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. Demand for the framework in the job market has grown accordingly, making it a valuable skill.
But what makes React Native so popular for mobile application development? Among other benefits, it offers excellent cross-platform capabilities: developers can write code once and run it on both iOS and Android devices, saving time and resources, shortening development cycles, and speeding time-to-market for your app.
Take the example of a startup that wanted to release its app on both iOS and Android at once. Using React Native, it built the app and brought it to market in a very short period, gaining an advantage over competitors by reaching a large user base that generated revenue quickly.
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... | Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. | Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but it did reach 63K downloads (powering possibly tens of thousands of websites).
Top Nidhi Software Solution Free Download | vrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... | Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Quarkus Hidden and Forbidden Extensions | Max Andersen
Quarkus has a vast extension ecosystem and is known for its supersonic, subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
How Recreation Management Software Can Streamline Your Operations.pptx | wottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Prosigns: Transforming Business with Tailored Technology Solutions | Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Understanding Globus Data Transfers with NetSage | Globus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several example questions that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
A Comprehensive Look at Generative AI in Retail App Testing.pdf | kalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
AI for All: Biology is eating the world & AI is eating Biology
1. Biology is eating the world & AI is eating Biology
Pradeep K Dubey, Intel Senior Fellow, IEEE Fellow
Director, Parallel Computing Labs
2. Intel All.AI 2021 @ Population Scale Virtual Summit
Machines: crunch numbers. Humans: make decisions.
3. Division of Labor Between Man and Machine Is Getting Disrupted: Faster than Anyone Predicted!
Machines: crunch numbers; humans: make decisions → Machines: number crunching AND decision making.
4. FROM a world of analytical models TO a world of data-driven models
Inside-out (start with a mathematical model, e.g., computational fluid dynamics): Model → Simulate → Predict
Outside-in (start with data, e.g., event detection from social media): Initial State → Increment → Steer
5. What makes AI effective in practice
• Effectiveness of AI relies on how well the model structure matches the underlying invariant (structure) of the high-dimensional task objective: a good set of implicit or explicit inductive biases incorporating domain knowledge, such as CNNs for vision, attention networks for NLP, or emerging GNNs.
• Training time: how well we manage exploitation versus exploration to reach the most generalizable (flatter) minima, avoiding the typical solver attraction to sharp minima; higher-order methods.
6. Better understanding of interiors and evolution of red giant stars
Accurately extract seismic parameters from 1000 spectra in under 10 seconds.
Measuring the frequency separation ∆ν and period separation ∆Π in red-giant stars using machine learning; under submission at Science Advances.
Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research; Center for Space Science, NYUAD Institute, New York University Abu Dhabi; Division of Solar and Plasma Astrophysics, NAOJ, Mitaka, Tokyo, Japan; Parallel Computing Lab, Intel Labs, Bangalore, India
7. Convergence of Revolutions
Advances in cell biology and the creation of an immense amount of data, converging with advances in ML to analyze large-scale data and leverage it to make predictions.
Daphne Koller*: https://www.youtube.com/watch?v=V6bSlPNwrKo&feature=youtu.be
8. AI is Eating Biology
Biology is experiencing its "AI moment". Publications involving AI methods (e.g., deep learning, NLP, computer vision, RL) in biology are growing: 21,000 papers in 2020 alone; >50% year-over-year growth since 2019; papers since 2019 account for 25% of all output since 2000.
https://pubs.acs.org/doi/10.1021/acs.jcim.1c01114
10. Understand Mechanisms, Design Interventions: Massive Compute Appetite
Big data: astronomical or genomical (https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002195)
Algorithmic, computational, and data-management requirements:
• >1000x growth in compute needed to match demand
• 100s of TB/s of memory bandwidth at 100s of GB of capacity
• Processing 100s of exabytes of multi-modal data, e.g., learning on large graphs, structure learning, regulatory networks, combinatorial optimizations
• Secure, privacy-preserving, federated
11. Accelerating Graph Neural Networks on Xeon
DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks, Supercomputing'21 [arXiv'20, arXiv'21, SC'21].
Full-batch training is ~2-3.7x faster on 1s CLX for GraphSAGE on OGB-Products and Reddit (DGL v0.5.3), and ~83x faster for distributed training on 128 sockets on OGB-Papers, a 100-million-node graph. Roofline analysis gives upper and lower bounds.
Cascade Lake Xeon (CLX): Intel® Xeon® Platinum 8280 Processor, 38.5 MB cache, 2.70 GHz, 28 cores
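The GraphSAGE layer benchmarked above boils down to mean-aggregating neighbor features and mixing them with each node's own features. A minimal NumPy sketch of that operation (not the DistGNN implementation; the graph, features, and weights are made up for illustration):

```python
import numpy as np

def graphsage_layer(H, neighbors, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation (simplified sketch).

    H         : (num_nodes, d_in) node features
    neighbors : list of neighbor-index lists, one per node
    W_self    : (d_in, d_out) weight for the node's own features
    W_neigh   : (d_in, d_out) weight for aggregated neighbor features
    """
    agg = np.zeros_like(H)
    for v, nbrs in enumerate(neighbors):
        if nbrs:                      # mean of neighbor features (zeros if isolated)
            agg[v] = H[nbrs].mean(axis=0)
    Z = H @ W_self + agg @ W_neigh    # combine self and neighborhood views
    return np.maximum(Z, 0.0)         # ReLU

# Tiny triangle graph: 0 -- 1 -- 2 -- 0, one-hot node features
H = np.eye(3)
neighbors = [[1, 2], [0, 2], [0, 1]]
W = np.ones((3, 2))
out = graphsage_layer(H, neighbors, W, W)
print(out.shape)
```

The irregularity the slide alludes to lives in the `neighbors` gather: unlike dense matrix multiplies, the memory access pattern depends on graph structure, which is what makes these workloads hard to accelerate.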
12. LambdaZero: Combinatorial Optimization at Scale
Uses ML and HPC to accelerate screening of drug-like molecules. Search space: ~10^18, vs. ~10^9 pages on the internet. With MILA, in collaboration with Prof. Yoshua Bengio [Intel-MILA announcement].
14. Bao outperforms them all!
Bao, a learned query optimizer, won the SIGMOD'21 Best Paper award (Data Management)*; in collaboration with Prof. Tim Kraska @ MIT.
* SIGMOD'21 Best Paper announcement: https://2021.sigmod.org/sigmod_best_papers.shtml
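Bao steers an existing query optimizer by treating optimizer hint sets as arms of a bandit and learning from observed query latencies. A toy epsilon-greedy sketch of that idea (Bao itself uses a neural model with Thompson sampling; the hint sets and latencies below are invented for illustration):

```python
import random

# Hypothetical average latencies (ms) for three optimizer hint sets; in Bao
# these rewards come from actually executing the chosen query plans.
TRUE_LATENCY = {"default": 120.0, "no_nested_loops": 80.0, "no_hash_join": 150.0}

def observe(hint_set):
    """Noisy latency of one query execution under a hint set."""
    return TRUE_LATENCY[hint_set] + random.gauss(0.0, 5.0)

def run_bandit(rounds=500, eps=0.1, seed=0):
    """Epsilon-greedy bandit over hint sets: mostly pick the arm with the
    lowest observed mean latency, occasionally explore."""
    random.seed(seed)
    arms = list(TRUE_LATENCY)
    counts = {a: 0 for a in arms}
    totals = {a: 0.0 for a in arms}
    for _ in range(rounds):
        untried = [a for a in arms if counts[a] == 0]
        if untried or random.random() < eps:
            arm = random.choice(untried or arms)                  # explore
        else:
            arm = min(arms, key=lambda a: totals[a] / counts[a])  # exploit
        counts[arm] += 1
        totals[arm] += observe(arm)
    return min(arms, key=lambda a: totals[a] / counts[a])

best = run_bandit()
print(best)
```

After a few hundred simulated executions, the bandit settles on the hint set with the lowest true latency, which is the essence of how Bao improves plans without replacing the optimizer.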
15. BWA-MEM2*: An Accelerated Version of BWA-MEM
(BWA-MEM has 950K+ downloads and 70K users worldwide.)
Sequence alignment, in collaboration with Dr. Heng Li, author of BWA-MEM.
Throughput in genomes/day for 50x WGS (higher is better):
  2s CLX, BWA-MEM: 9.8
  2s CLX, BWA-MEM2: 15.8
  2s ICX, BWA-MEM2: 22.1 (~2.25x over BWA-MEM on CLX; ~2.5x over Clara Parabricks on A100)
  1x A100, Clara Parabricks BWA-MEM: 8.9
Reference genome: GRCh38; read dataset: 50x WGS ERR194147 (NA12878/HG001) from Illumina HiSeq 2000.
Ice Lake Xeon (ICX): Intel® Xeon® Platinum 8380 Processor, 60 MB cache, 2.40 GHz, 40 cores
Source of Clara Parabricks results: https://at-cg.github.io/posts/ParaBricks-WGS/
Enabling the community worldwide (horticulture, nutrition): https://github.com/bwa-mem2/bwa-mem2
In production use by the Cancer, Ageing and Somatic Mutations programme, Wellcome Sanger Institute; tested on ~88 billion reads.
16. MM2-Fast Accelerates Minimap2 on Xeon by 3.1x
In collaboration with Dr. Heng Li, author of Minimap2 [bioRxiv'21]. MM2-Fast branch in the Minimap2 repo; Minimap2 has >100K downloads.
Reference genome: GRCh38; read datasets: ONT, PacBio HiFi, and PacBio CLR datasets derived from the human trio benchmark genomes HG002, HG003, and HG004, as given at https://precision.fda.gov/challenges/10/view and https://github.com/genome-in-a-bottle/giab_data_indexes
17. 9x Speedup for Analysis of Single-Cell ATAC-Seq Data
Denoising and peak calling on noisy ATAC-Seq data [arXiv'21, bioRxiv'21]. Higher is better.
• 2.3x speedup over NVIDIA Clara Parabricks on a DGX-1 box (8x V100) with 16 sockets of Cooper Lake
• 1.8x speedup over NVIDIA Clara Parabricks on a DGX-1 box (8x V100) with 16 sockets of Ice Lake
Cooper Lake Xeon (CPX): Intel® Xeon® Platinum 8380H Processor, 38.5 MB cache, 2.90 GHz, 28 cores
Source of Clara Parabricks performance: [NVIDIA, 2020] AtacWorks: A Deep Convolutional Neural Network Toolkit for Epigenomics
19. Brain Tumor Segmentation Finds Tumors in MRIs
Intel-UPenn collaboration. How much better does each institution do when training on the full data vs. just its own data? 17% better on their own validation data; 2.6% better on the hold-out BraTS data.
Sheller, M.J., Edwards, B., Reina, G.A., et al. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Sci Rep 10, 12598 (2020).
20. OpenFL (github.com/intel/openfl, openfl.readthedocs.io/)
1. Privacy-preserving machine learning for data and model privacy/protection
2. Privacy/confidentiality preservation
3. Attestation and integrity
4. Federation deployment
5. Federated-node software stacks for TTM
6. Curation tools and deployment automation
Enables the greatest access to data: any company can host a privacy-preserving federation, with a complete software and platform offering for time-to-market deployment.
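At the heart of such a federation is an aggregation step: each institution trains on its own data, and only model updates leave the site. A minimal sketch of federated averaging, the canonical aggregation rule (this is the idea a framework like OpenFL orchestrates, not OpenFL's actual API; the weights and example counts are hypothetical):

```python
import numpy as np

def fedavg(local_weights, num_examples):
    """Federated averaging: combine each institution's model weights,
    weighted by its number of training examples. Real deployments add
    secure channels, attestation, and aggregation policies on top."""
    total = sum(num_examples)
    return sum(w * (n / total) for w, n in zip(local_weights, num_examples))

# Three hypothetical institutions sharing model updates, never raw data
w1, w2, w3 = np.array([1.0, 1.0]), np.array([3.0, 1.0]), np.array([5.0, 7.0])
global_w = fedavg([w1, w2, w3], num_examples=[100, 100, 200])
print(global_w)
```

The example-count weighting is what lets institutions of very different sizes contribute proportionally, which is one reason federated models approach the accuracy of centralized training in the BraTS study above.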
21. GenomicsBench: A Benchmark Suite for Genomics
Many GenomicsBench benchmarks have abundant data parallelism, but significant irregularity makes it challenging to achieve good performance.
The suite contains 12 representative kernels spanning the major steps in short-read and long-read sequence analysis pipelines: FM-index, banded Smith-Waterman, de Bruijn graphs, Pair HMM, DP chaining, SIMD partial-order alignment, adaptive banded signal-to-event alignment, genomic relationship matrix, neural-network-based basecalling, neural-network-based variant calling, k-mer counting, and pileup counting.
Open-sourced and under active development: https://github.com/arun-sub/genomicsbench
Xeon-optimized implementations of the kernels are under active development at: https://github.com/IntelLabs/Trans-Omics-Acceleration-Library
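Of these kernels, banded Smith-Waterman is a good illustration of the parallel-but-irregular pattern: a local-alignment dynamic program restricted to a diagonal band. A simplified sketch (the scoring parameters and band width are illustrative, not GenomicsBench's):

```python
def banded_smith_waterman(a, b, band=4, match=2, mismatch=-1, gap=-2):
    """Local alignment score, filling only DP cells with |i - j| <= band.
    Out-of-band cells stay 0, which is a valid floor for local alignment."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        lo, hi = max(1, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match / mismatch
                          H[i - 1][j] + gap,     # deletion
                          H[i][j - 1] + gap)     # insertion
            best = max(best, H[i][j])
    return best

print(banded_smith_waterman("GATTACA", "GATCACA"))
```

The band bounds the work per row, but the data-dependent max at each cell and the short, variable-length inner loops are exactly the kind of irregularity that frustrates straightforward vectorization.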
AI-Driven HPC Research: a first-of-its-kind deep learning approach to learn parameters that govern stellar evolution for red giant stars, achieving an average inference time of 5 ms/star on Intel® Xeon® Platinum 8280, much faster (>10,000x) than current SOTA methods based on auto-correlation and MCMC. The power spectra of red giant stars are studied for a better understanding of the interiors and evolution of stars. The Kepler and TESS space missions have provided a vast set of red giant light-curve data, and such data sets are expected to grow exponentially with future missions such as PLATO. There is a need to analyze such data accurately and efficiently at scale to enhance our understanding of the physics of stars. For this, working with a cross-geo group of scientists led by the Tata Institute of Fundamental Research in India, we have developed a deep learning approach that can learn the various parameters governing the complex behavior of such stellar evolution. We train the networks using simulated data on a single-node Intel® Xeon® Platinum 8280. Inference on a star takes 5 milliseconds on average, which is 10,000x faster than auto-correlation-based methods and 1,000,000x faster than MCMC methods. To the best of our knowledge, ours is the first such efficient machine learning approach to analyzing red giant stars. We have been invited to submit the paper to the Science Advances scientific journal (impact factor 14.4).
Our network consists of six 1D convolution layers, followed by two LSTM layers and one dense layer. We apply categorical cross-entropy loss and the ADAM optimizer for backpropagation. The network takes a normalized power spectrum as input and outputs a probability (confidence score) that a parameter falls in a bin (range of values). Currently, we focus on learning the marginal distributions of three seismic parameters, namely the frequency separation ∆ν, the period separation ∆Π, and the peak frequency ν_max, using a separate network for each parameter. Training takes ~50 node-hours per seismic parameter on a single-node Cascade Lake Xeon with 56 cores using TensorFlow.
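The classification head described above can be sketched as follows: logits from the final dense layer are turned into per-bin probabilities with a softmax and scored with categorical cross-entropy. The bin edges, logits, and ∆ν value here are made up for illustration; the real model is trained in TensorFlow.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()

def categorical_cross_entropy(p, true_bin):
    """Loss for a one-hot target: negative log probability of the true bin."""
    return -np.log(p[true_bin])

# Hypothetical: discretize the frequency separation ∆ν into 20 bins and
# score the network's confidence that ∆ν falls in each bin.
bins = np.linspace(1.0, 20.0, 20)                # illustrative ∆ν edges (µHz)
logits = np.zeros(len(bins)); logits[7] = 4.0    # pretend final-layer output
p = softmax(logits)
true_bin = int(np.digitize(9.2, bins)) - 1       # bin index of the true ∆ν
loss = categorical_cross_entropy(p, true_bin)
print(int(p.argmax()), round(float(loss), 3))
```

Treating regression over ∆ν as classification over bins is what lets the network report a confidence score per range of values, as the notes describe.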
Our learned model is accurate at distinguishing red giants from noise by analyzing the spectra of real stars, with a precision of 87% and a recall of 86%. The false positives are dominated by non-solar-like pulsator stars. Additionally, our model can discover new potential red giants: after eliminating false positives by visual inspection, we detect ~25 new red giants (validated through various catalogues). Finally, our model can infer the relationships among seismic parameters, e.g., the strong linear correlation between ∆ν and ∆Π (well established in physics) and the relationship between ∆ν and ν_max observed in other studies. First figure below: the red points are predicted (∆ν, ν_max) and the green band maps the relation observed in other studies; second figure: prediction results (with confidence) of our model on real stars.
AI is inferring laws of physics, unravelling complex phenomena, and giving humans super-human capabilities to see. Every time humans have seen more, the world has transformed (think astronomy, microscopy).
Now that is happening to biology. With increased resolution and sense-making, we can begin to understand the mechanisms behind how biological systems work: how diseases happen, how different characteristics evolve.
Even after decades of work, we knew the structure of only about ~4K proteins; then, overnight, with AI (AlphaFold), 20,000 human protein structures were decoded. Using data, AI is beginning to unravel complex phenomena.
Imagine: we can engineer biological systems and give ourselves capabilities and materials that biology otherwise discovers only over thousands or even millions of years of evolution.
Biological data is going to be the largest dataset on the planet, far larger than YouTube, with, for example, billions of genomes getting sequenced routinely. We will need massive leaps in computational power.
State-of-the-art platforms today can do fewer than 10 whole-genome sequences a day; we need a >1000x leap in computational power to do all kinds of omics rapidly and realize the vision of precision medicine.
Similarly, to design new materials or drugs, the search space is orders of magnitude greater than the number of web pages: a massive compute appetite.
Next frontier in AI: search and combinatorial optimization, e.g.:
• Search for novel molecules: O(10^60)
• Search space for protein design: O(10^130)
• Number of webpages on the internet: O(10^9)
CLX: Cascade Lake Xeon; CPX: Cooper Lake Xeon
1D convolutions are especially important to digital biology due to sequence data.
NVIDIA performance source for 1D convolutions: [NVIDIA, 2020] AtacWorks: A Deep Convolutional Neural Network Toolkit for Epigenomics
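What a 1D convolution buys you on sequence data can be seen in a toy motif scan: sliding a one-hot filter along a one-hot DNA sequence yields a score peak where the pattern occurs. The sequence and motif below are invented for illustration; real models like AtacWorks stack many such learned filters.

```python
import numpy as np

def one_hot(seq, alphabet="ACGT"):
    """One row per base, one column per alphabet symbol."""
    return np.array([[c == b for b in alphabet] for c in seq], dtype=float)

def motif_scan(onehot_seq, onehot_motif):
    """1D convolution (cross-correlation) of a one-hot sequence with a motif
    filter: the core operation in conv layers for basecalling and ATAC-seq
    denoising. Returns one match score per window position."""
    k = len(onehot_motif)
    return np.array([(onehot_seq[i:i + k] * onehot_motif).sum()
                     for i in range(len(onehot_seq) - k + 1)])

scores = motif_scan(one_hot("GCGTATAGC"), one_hot("TATA"))
peak = int(scores.argmax())   # window index where the motif starts
print(peak, scores[peak])
```

Because the same small filter is reused at every position, the operation is cheap, translation-invariant, and maps naturally onto wide SIMD units, which is why 1D convolutions dominate sequence models in genomics.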