This document discusses Intel's hardware and software portfolio for artificial intelligence. It highlights Intel's move from multi-purpose to purpose-built AI compute solutions, from the cloud to edge devices. It also discusses Intel's data-centric infrastructure, including CPUs, accelerators, networking fabric, and memory technologies. Finally, it provides examples of Intel optimizations that have increased AI performance on Intel® Xeon® Scalable processors.
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses... – Intel® Software
Explore how to build a unified framework based on FFmpeg and GStreamer to enable video analytics on all Intel® hardware, including CPUs, GPUs, VPUs, FPGAs, and in-circuit emulators.
Ray Tracing with Intel® Embree and Intel® OSPRay: Use Cases and Updates | SIG... – Intel® Software
Explore practical examples of Intel® Embree and Intel® OSPRay in production rendering and the best practices of using the kernels in typical rendering pipelines.
Review state-of-the-art techniques that use neural networks to synthesize motion, such as mode-adaptive neural network and phase-functioned neural networks. See how next-generation CPUs with reinforcement learning can offer better performance.
RenderMan*: The Role of Open Shading Language (OSL) with Intel® Advanced Vect... – Intel® Software
This talk focuses on the newest release in RenderMan* 22.5 and its adoption at Pixar Animation Studios* for rendering future movies. With native support for Intel® Advanced Vector Extensions, Intel® Advanced Vector Extensions 2, and Intel® Advanced Vector Extensions 512, it includes enhanced library features, debugging support, and an extensive test framework.
Advanced Techniques to Accelerate Model Tuning | Software for AI Optimization... – Intel® Software
Learn about the algorithms and associated implementations that power SigOpt, a platform for efficiently conducting model development and hyperparameter optimization. Get started on your AI Developer Journey @ software.intel.com/ai.
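The precise algorithms behind SigOpt (Bayesian optimization and related methods) are the subject of the talk; as a baseline for comparison, a hyperparameter search loop can be sketched in plain Python with random search over a toy objective (the objective function here is an invented stand-in for validation accuracy, not anything from SigOpt):

```python
import random

def objective(learning_rate, num_layers):
    # Toy stand-in for validation accuracy: peaks near lr=0.1, 4 layers.
    return 1.0 - abs(learning_rate - 0.1) - 0.05 * abs(num_layers - 4)

def random_search(trials=50, seed=0):
    """Sample hyperparameters at random and keep the best-scoring set."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.5),
            "num_layers": rng.randint(1, 8),
        }
        score = objective(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = random_search()
print(best_params, round(best_score, 3))
```

Smarter tuners such as Bayesian optimization replace the random sampling with a model of the objective, which is where the efficiency gains discussed in the session come from.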
Advanced Single Instruction Multiple Data (SIMD) Programming with Intel® Impl... – Intel® Software
Explore practical elements, such as performance profiling, debugging, and porting advice. Get an overview of advanced programming topics, like common design patterns, SIMD lane interoperability, data conversions, and more.
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
Medical images (CT scans, X-rays) must be segmented to identify the region of interest, and the areas of interest must then be classified for diagnosis and reporting. Applied to lung disease diagnosis from chest X-rays and CT scans, segmentation and classification can be a tedious process. AI can help: Wipro used deep learning to develop a medical image segmentation and diagnosis solution running on Intel's AI platform.
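Wipro's solution uses deep learning; purely to illustrate the segment-then-classify pipeline the summary describes, here is a minimal classical sketch (thresholding plus connected-component labeling on a synthetic grid; no real model or medical data is involved):

```python
from collections import deque

def segment(image, threshold):
    """Threshold a 2D intensity grid, then label connected foreground
    regions (4-connectivity) with breadth-first search."""
    rows, cols = len(image), len(image[0])
    mask = [[1 if v >= threshold else 0 for v in row] for row in image]
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# Two bright regions in a synthetic 5x5 "scan".
scan = [
    [9, 9, 0, 0, 0],
    [9, 9, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 8, 8],
    [0, 0, 0, 8, 8],
]
labels, n_regions = segment(scan, threshold=5)
print(n_regions)  # two candidate regions of interest
```

Each labeled region would then be passed to a classifier for diagnosis, which is the step the deep learning model handles in the actual solution.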
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency | S... – Intel® Software
oneDNN Graph API extends oneDNN with a graph interface that reduces deep learning integration costs and maximizes compute efficiency across a variety of AI hardware, including AI accelerators. Get started on your AI Developer Journey @ software.intel.com/ai.
Use Variable Rate Shading (VRS) to Improve the User Experience in Real-Time G... – Intel® Software
Variable-rate shading (VRS) is a new feature of Microsoft DirectX* 12 and is supported on the 11th generation of Intel® graphics hardware. Get an overview and learn best practices, recommendations, and how to modify traditional 3D effects to take advantage of VRS.
The field of machine programming — the automation of the development of software — is making notable research advances. This is, in part, due to the emergence of a wide range of novel techniques in machine learning. In today’s technological landscape, software is integrated into almost everything we do, but maintaining software is a time-consuming and error-prone process. When fully realized, machine programming will enable everyone to express their creativity and develop their own software without writing a single line of code. Intel realizes the pioneering promise of machine programming, which is why it created the Machine Programming Research (MPR) team in Intel Labs. The MPR team’s goal is to create a society where everyone can create software, but machines will handle the “programming” part.
In this deck from ATPESC 2019, James Moawad and Greg Nash from Intel present: FPGAs and Machine Learning.
"Neural networks are inspired by biological systems, in particular the human brain. Through the combination of powerful computing resources and novel architectures for neurons, neural networks have achieved state-of-the-art results in many domains, such as computer vision and machine translation. FPGAs are a natural choice for implementing neural networks, as they can handle different algorithms in computing, logic, and memory resources in the same device. They offer faster performance compared to competitive implementations, as the user can hard-code operations into the hardware. Software developers can use the OpenCL device C-level programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design."
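The point about hard-coding operations refers to the multiply-accumulate pipelines at the heart of a network. As a reference for the computation an FPGA or OpenCL kernel implementation accelerates, here is the forward pass of a tiny two-layer network in plain Python (the weights are arbitrary illustrative values):

```python
import math

def forward(x, w1, b1, w2, b2):
    """Forward pass of a 2-layer fully connected network:
    hidden = relu(W1 x + b1), output = sigmoid(W2 h + b2)."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    logits = [sum(wi * hi for wi, hi in zip(row, hidden)) + b
              for row, b in zip(w2, b2)]
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]

# 3 inputs -> 2 hidden units -> 1 output
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0]]
b2 = [0.0]
out = forward([1.0, 2.0, 3.0], w1, b1, w2, b2)
print(out)
```

An FPGA design would fix these dot products into dedicated hardware resources, which is what enables the performance advantage the speakers describe.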
Watch the video: https://wp.me/p3RLHQ-lnc
Learn more: https://extremecomputingtraining.anl.gov/archive/atpesc-2019/agenda-2019/
and
https://www.intel.com/content/www/us/en/products/programmable/fpga.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Software AI Accelerators: The Next Frontier | Software for AI Optimization Su... – Intel® Software
Software AI Accelerators deliver orders of magnitude performance gain for AI across deep learning, classical machine learning, and graph analytics and are key to enabling AI Everywhere. Get started on your AI Developer Journey @ software.intel.com/ai.
Intel® Xeon® Processor E5 Family: Making the Business Case – Intel IT Center
This presentation highlights cloud computing advantages of the Intel® Xeon® processor E5 family and helps you make the business case for investing. Includes access to an ROI calculator.
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit... – HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
Tackle more data science challenges than ever before without the need for discrete acceleration with the 3rd Gen Intel® Xeon® Scalable processors. Learn about the built-in AI acceleration and performance optimizations for popular AI libraries, tools and models.
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci – Intel® Software
Preprocess, visualize, and build AI faster at scale on Intel architecture. Develop end-to-end AI pipelines for inference, including data ingestion, preprocessing, and model inference with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
Intel® Xeon® Processor E5-2600 v3 Product Family Application Showcase - Tec... – Intel IT Center
This Intel® Xeon® Processor E5-2600 v3 Product Family Application Showcase focuses on Technical Computing software companies who have seen performance increases with Intel products.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/acceleration-of-deep-learning-using-openvino-3d-seismic-case-study-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Manas Pathak, Global AI Lead for Oil and Gas at Intel, presents the “Acceleration of Deep Learning Using OpenVINO: 3D Seismic Case Study” tutorial at the September 2020 Embedded Vision Summit.
The use of deep learning for automatic seismic data interpretation is gaining the attention of many researchers across the oil and gas industry. The integration of high-performance computing (HPC) AI workflows in seismic data interpretation brings the challenge of moving and processing large amounts of data from HPC to AI computing solutions and vice-versa.
In this presentation, Pathak illustrates this challenge via a case study using a public deep learning model for salt identification applied on a 3D seismic survey from the F3 Dutch block in the North Sea. He presents a workflow to address this challenge and perform accelerated AI on seismic data. The Intel Distribution of OpenVINO toolkit was used to increase the inference performance of a pre-trained model on an Intel CPU. OpenVINO allows CPU users to get significant improvement in AI inference performance for high memory capacity deep learning models used on large datasets without any significant loss in accuracy.
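The details of Pathak's workflow are in the talk itself; one generic ingredient of patch-based seismic inference is tiling a large 3D volume into patches for the model. A stdlib-only sketch of that tiling step (patch and stride sizes are illustrative, and the actual model invocation via OpenVINO is omitted):

```python
def iter_patches(shape, patch, stride):
    """Yield start indices of 3D patches covering a volume of `shape`,
    stepping by `stride` and clamping the last patch to the boundary."""
    def axis_starts(size, p, s):
        starts = list(range(0, max(size - p, 0) + 1, s))
        if starts[-1] + p < size:
            starts.append(size - p)  # clamp final patch to the edge
        return starts
    for z in axis_starts(shape[0], patch[0], stride[0]):
        for y in axis_starts(shape[1], patch[1], stride[1]):
            for x in axis_starts(shape[2], patch[2], stride[2]):
                yield (z, y, x)

patches = list(iter_patches(shape=(8, 8, 8), patch=(4, 4, 4), stride=(4, 4, 4)))
print(len(patches))  # 2 * 2 * 2 = 8 patches
```

Each patch would then be fed to the compiled inference model, and the per-patch predictions stitched back into a full-volume interpretation.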
Apache CarbonData & Spark meetup
"QATCodec: Past, Present and Future" is from Intel.
Apache Spark™ is a unified analytics engine for large-scale data processing.
CarbonData is a high-performance data solution that supports various data analytics scenarios, including BI analysis, ad-hoc SQL query, fast filter lookup on detail records, streaming analytics, and so on. CarbonData has been deployed in many enterprise production environments; in one of the largest scenarios it supports queries on a single table with 3 PB of data (more than 5 trillion records) with response times of less than 3 seconds.
Accelerate Machine Learning Software on Intel Architecture Intel® Software
This session presents performance data for deep learning training for image recognition that achieves a greater than 24x speedup with a single Intel® Xeon Phi™ processor 7250 when compared to Caffe*. In addition, we present performance data showing that training time is further reduced by 40x with a 128-node Intel® Xeon Phi™ processor cluster over Intel® Omni-Path Architecture (Intel® OPA).
Spring Hill (NNP-I 1000): Intel's Data Center Inference Chip – inside-BigData.com
Today at Hot Chips 2019, Intel revealed new details of upcoming high-performance AI accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O.
"To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources," said Naveen Rao, Intel vice president and GM, Artificial Intelligence Products Group. "Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”
Learn more: https://www.intel.ai/accelerating-for-ai/?elq_cid=1192980
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019.
Please find the recording here: https://youtu.be/60o3eyG5OLM
AI for All: Biology is eating the world & AI is eating Biology – Intel® Software
Advances in cell biology and the creation of an immense amount of data are converging with advances in machine learning to analyze this data. Biology is experiencing its AI moment, driving the massive computation involved in understanding biological mechanisms and developing interventions. Learn about how cutting-edge technologies such as Software Guard Extensions (SGX) in the latest Intel Xeon processors and Open Federated Learning (OpenFL), an open framework for federated learning developed by Intel, are helping advance AI in gene therapy, drug design, disease identification, and more.
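OpenFL's plan/aggregator/collaborator machinery is out of scope here, but the core idea of federated learning can be shown with the classic FedAvg aggregation step, sketched in plain Python (the two-hospital numbers are invented for illustration, and this is not OpenFL's API):

```python
def federated_average(client_weights, client_sizes):
    """One FedAvg aggregation round: the server combines client model
    weights as a mean weighted by each client's dataset size, without
    ever seeing the clients' raw data."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hospitals train locally, then share only weight updates.
merged = federated_average(
    client_weights=[[0.2, 0.4], [0.6, 0.8]],
    client_sizes=[100, 300],
)
print(merged)
```

Keeping the raw data on-site, optionally inside an SGX enclave, is what makes this approach attractive for sensitive medical workloads.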
Python Data Science and Machine Learning at Scale with Intel and Anaconda – Intel® Software
Python is the number one language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, Scikit-Learn, Modin, Pandas, and XGBoost directly from the Anaconda repository through simple installation and minimal code changes.
AI for good: Scaling AI in science, healthcare, and more – Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
AWS & Intel Webinar Series - Accelerating AI Research – Intel® Software
Scale your research workloads faster with Intel on AWS. Learn how the performance and productivity of Intel Hardware and Software help bridge the gap between ideation and results in Data Science. Get started on your AI Developer Journey @ software.intel.com/ai.
ANYFACE*: Create Film Industry-Quality Facial Rendering & Animation Using Mai... – Intel® Software
ANYFACE* brings film industry-quality facial rendering and animation to mainstream PC platforms using novel approaches to create face details and control microsurfaces. The solution enables users to create high-fidelity game character facial models using photogrammetry.
Bring the Future of Entertainment to Your Living Room: MPEG-I Immersive Video... – Intel® Software
Explore the proposed Metadata for Immersive Video (MIV) standard specification. MIV enables real-world content captured by cameras to be viewed by users with Six Degrees of Freedom (6DoF) movement, similar to a VR experience with synthetic content.
In this presentation, we describe a heuristic for modifying the structure of sparse deep convolutional networks during training. The heuristic allows us to train sparse networks directly to reach accuracies on par with accuracies obtained through compressing/pruning of big dense models. We show that exploring the network structure during training is essential to reach the best accuracies, even when the optimal network structure is known a priori.
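The specific heuristic is the subject of the talk and is not reproduced here; for orientation, a common family of such structure-exploration heuristics (prune low-magnitude weights, regrow connections elsewhere, keep overall sparsity fixed) looks roughly like this in plain Python:

```python
import random

def prune_and_regrow(weights, mask, prune_fraction, rng):
    """One structure-update step for a sparse layer: drop the
    smallest-magnitude active weights, then re-enable the same number
    of randomly chosen inactive positions (sparsity stays constant)."""
    active = [i for i, m in enumerate(mask) if m]
    inactive = [i for i, m in enumerate(mask) if not m]
    n_prune = int(len(active) * prune_fraction)
    # Prune: smallest |w| among active connections.
    for i in sorted(active, key=lambda i: abs(weights[i]))[:n_prune]:
        mask[i] = 0
        weights[i] = 0.0
    # Regrow: random inactive positions, starting from zero weight.
    for i in rng.sample(inactive, n_prune):
        mask[i] = 1
    return weights, mask

rng = random.Random(0)
weights = [0.9, -0.05, 0.4, 0.0, 0.0, -0.7, 0.01, 0.0]
mask    = [1,    1,    1,   0,   0,   1,    1,   0]
weights, mask = prune_and_regrow(weights, mask, prune_fraction=0.4, rng=rng)
print(sum(mask))  # sparsity is unchanged: still 5 active connections
```

Repeating such steps during training lets the network search over structures while never instantiating the dense model, which is the efficiency argument the abstract makes.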
Intel® AI: Non-Parametric Priors for Generative Adversarial Networks – Intel® Software
This presentation proposes a novel prior, derived using basic theorems from probability theory and off-the-shelf optimizers, to improve the fidelity of image generation using GANs by interpolating along any Euclidean straight line, without any additional training or architecture modifications.
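The prior itself requires the derivation given in the talk; the interpolation operation it improves is simply evaluating the generator at points along the straight line between two latent vectors, for example:

```python
def lerp(z0, z1, steps):
    """Points along the Euclidean straight line between two latent
    vectors z0 and z1, endpoints included."""
    return [
        [a + (b - a) * t / (steps - 1) for a, b in zip(z0, z1)]
        for t in range(steps)
    ]

path = lerp([0.0, 0.0], [1.0, 2.0], steps=5)
print(path[2])  # midpoint of the line
```

Under a standard Gaussian prior, midpoints of such lines fall in low-density regions, which is the mismatch a non-parametric prior aims to correct.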
Pmemkv is an open source key-value store for persistent memory, based on the Persistent Memory Development Kit (PMDK). Written in C and C++, it provides optimized bindings for Java*, JavaScript*, and Ruby*, and includes multiple storage engines for different use cases.
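Pmemkv's engines persist data in persistent memory; purely as a rough illustration of the key-value API shape (the method names below are modeled on typical KV stores and are not guaranteed to match pmemkv's exact bindings), here is an in-memory stand-in:

```python
class VolatileKV:
    """In-memory stand-in for a key-value engine. Illustrates the
    put/get/remove/count-style API only; unlike pmemkv, nothing here
    survives process exit."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def remove(self, key):
        return self._data.pop(key, None) is not None

    def count_all(self):
        return len(self._data)

db = VolatileKV()
db.put("key1", "value1")
db.put("key2", "value2")
print(db.count_all(), db.get("key1"))
```

The point of pmemkv is that the same style of operations runs directly against persistent memory, so the store's contents survive restarts without a separate serialization step.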
Big Data Uses with Distributed Asynchronous Object Storage – Intel® Software
Learn about the architecture and features of Distributed Asynchronous Object Storage (DAOS). This open source object store is based on the Persistent Memory Development Kit (PMDK) for massively distributed non-volatile memory applications.
Debugging Tools & Techniques for Persistent Memory Programming – Intel® Software
Learn about pmempool, a Persistent Memory Development Kit tool that helps you prevent, diagnose, and recover from data corruption. The session also covers other debugging tools for persistent memory programming.
Persistent Memory Development Kit (PMDK): State of the Project – Intel® Software
Get an introduction to a PMDK based on the Non-Volatile Memory (NVM) Programming Model from SNIA*. Review the goals, successes, and challenges that still remain.
Your Digital Assistant.
Making a complex approach simple: a straightforward process saves time, and there is no more waiting to connect with the people who matter to you. Safety first is not a cliché: information is securely protected in cloud storage to prevent any third party from accessing your data.
Would you rather make your visitors feel burdened by making them wait, or choose VizMan for a stress-free experience? VizMan is an automated visitor management system that works for any industry, including factories, societies, government institutes, and warehouses. It is a new-age, contactless way of logging information about visitors, employees, packages, and vehicles. As a digital logbook, VizMan eliminates the unnecessary use of paper and space, with no bundles of registers left to collect dust in a corner of a room. It records visitors' essential details, helps schedule meetings between visitors and employees, and assists in supervising employee attendance. With VizMan, visitors don't need to wait for hours in long queues. VizMan treats visitors with the value they deserve, because we know time is important to you.
Feasible Features
One Subscription, Four Modules – Admin, Employee, Receptionist, and Gatekeeper ensures confidentiality and prevents data from being manipulated
User Friendly – can be easily used on Android, iOS, and Web Interface
Multiple Accessibility – Log in through any device from any place at any time
One app for all industries – a Visitor Management System that works for any organisation.
Stress-free Sign-up
Visitor is registered and checked-in by the Receptionist
Host gets a notification, where they opt to Approve the meeting
Host notifies the Receptionist of the end of the meeting
Visitor is checked-out by the Receptionist
Host enters notes and remarks of the meeting
Customizable Components
Scheduling Meetings – Host can invite visitors for meetings and also approve, reject and reschedule meetings
Single/Bulk invites – Invitations can be sent individually to a visitor or collectively to many visitors
VIP Visitors – Additional security of data for VIP visitors to avoid misuse of information
Courier Management – Keeps a check on deliveries like commodities being delivered in and out of establishments
Alerts & Notifications – Get notified on SMS, email, and application
Parking Management – Manage availability of parking space
Individual log-in – Every user has their own log-in id
Visitor/Meeting Analytics – Evaluate notes and remarks of the meeting stored in the system
Visitor Management System is a secure and user-friendly database manager that records, filters, and tracks the visitors to your organization.
"Secure Your Premises with VizMan (VMS) – Get It Now"
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce? – XfilesPro
Worried about document security while sharing documents in Salesforce? Fret no more! Here are the top-notch security standards XfilesPro upholds to ensure strong security for your Salesforce documents when sharing with internal or external users.
To learn more, read the blog: https://www.xfilespro.com/how-does-xfilespro-make-document-sharing-secure-and-seamless-in-salesforce/
Strategies for Successful Data Migration Tools.pptx – varshanayak241
Data migration is a complex but essential task for organizations aiming to modernize their IT infrastructure and leverage new technologies. By understanding common challenges and implementing these strategies, businesses can achieve a successful migration with minimal disruption. Data migration tools like Ask On Data play a pivotal role in this journey, offering features that streamline the process, ensure data integrity, and maintain security. With the right approach and tools, organizations can turn the challenge of data migration into an opportunity for growth and innovation.
First Steps with Globus Compute Multi-User Endpoints – Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... – Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my extensions did reach 63K downloads (powering possibly tens of thousands of websites).
Prosigns: Transforming Business with Tailored Technology Solutions – Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam – takuyayamamoto1800
In these slides, we show a simulation example and explain how to compile the solvers.
The Helmholtz equation can be solved with helmholtzFoam, and the Helmholtz equation with uniformly dispersed bubbles can be simulated with helmholtzBubbleFoam.
Multiply Your Crypto Portfolio with the Innovative Features of Advanced Crypt... – Hivelance Technology
Cryptocurrency trading bots are computer programs designed to automate buying, selling, and managing cryptocurrency transactions. These bots utilize advanced algorithms and machine learning techniques to analyze market data, identify trading opportunities, and execute trades on behalf of their users. By automating the decision-making process, crypto trading bots can react to market changes faster than human traders.
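Strategies vary widely between bots; one of the simplest rule-based signals such a bot might compute is a moving-average crossover, sketched here in plain Python (the window sizes and prices are illustrative; this is neither trading advice nor Hivelance's algorithm):

```python
def sma(prices, window):
    """Simple moving average over the trailing `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """'buy' when the short-term average crosses above the long-term
    average, 'sell' on the opposite cross, otherwise 'hold'."""
    if len(prices) <= long:
        return "hold"
    prev_short, prev_long = sma(prices[:-1], short), sma(prices[:-1], long)
    cur_short, cur_long = sma(prices, short), sma(prices, long)
    if prev_short <= prev_long and cur_short > cur_long:
        return "buy"
    if prev_short >= prev_long and cur_short < cur_long:
        return "sell"
    return "hold"

# A rising tail makes the short average overtake the long one.
prices = [100, 99, 98, 97, 96, 99, 103]
print(crossover_signal(prices))
```

A production bot wraps a signal like this in exchange connectivity, order management, and risk controls, which is where most of the engineering effort goes.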
Hivelance, a leading provider of cryptocurrency trading bot development services, stands out as the premier choice for crypto traders and developers. Hivelance boasts a team of seasoned cryptocurrency experts and software engineers who deeply understand the crypto market and the latest trends in automated trading, and it leverages the latest technologies and tools in the industry, including advanced AI and machine learning algorithms, to create highly efficient and adaptable crypto trading bots.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP web services.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review proces
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Understanding Globus Data Transfers with NetSageGlobus
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
3. Bring Your AI Vision to Life Using Intel's Comprehensive Portfolio
Hardware: multi-purpose to purpose-built AI compute from device to cloud
Solutions: partner ecosystem to facilitate AI in finance, health, retail, industrial & more
Data: Intel analytics ecosystem to get your data ready
Tools: software to accelerate development & deployment of real solutions
Future: driving AI forward through R&D, investments & policy
#IntelAIDC2019 | #AIonIntel | #IntelAI
4. Data-Centric Infrastructure: Move Faster, Store More, Process Everything
Move faster: Intel® Silicon Photonics, Intel® Ethernet, Intel® Omni-Path Fabric
Process everything: CPU, GPU (integrated & discrete), FPGA, AI accelerators
Powering the Future of Compute & Communications
5. Hardware: Multi-purpose to purpose-built AI compute from cloud to device
(Chart: the AI compute spectrum runs from mainstream inference for most workloads up to the most intensive deep learning training & inference.)
All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.
6. Hardware: Multi-purpose to purpose-built AI compute from device to cloud
Data center: large-scale data centers such as public cloud or comms service providers, gov't & academia, large enterprise IT
Edge: small-scale data centers, small business IT infrastructure, down to a few on-premise server racks & workstations
Endpoint: user-touch endpoint devices with lower power requirements, such as laptops, tablets, smart home devices, drones
Latency requirements span from "varies to <1ms" at the endpoint through <5ms and <10-40ms toward the edge to ~100ms at the data center.
7. Hardware: Multi-purpose to purpose-built AI compute from device to cloud — one size does not fit all
Endpoint:
- IoT sensors (security, home, retail, industrial...): dedicated media & vision inference
- Desktop & mobility (display, video, AR/VR, gestures, speech): vision & inference, speech (GNA¹)
- Self-driving vehicles: autonomous driving (special purpose)
Edge:
- Servers, appliances & gateways: latency-bound inference; basic inference, media & vision; most use cases
Data center:
- Servers & appliances: flexible & memory bandwidth-bound use cases; most use cases; most intensive use cases (NNP-L)
Form factors across these tiers include special-purpose SoCs, M.2 cards, and the NNP-L.
¹GNA = Gaussian Neural Accelerator
All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice. Images are examples of intended applications but not an exhaustive list.
8. Intel® Xeon® Scalable Processor Family: Now build the AI you want on the CPU you know
Your foundation for AI:
- Get maximum utilization: run data center & AI workloads side-by-side
- Break memory barriers: apply AI to large data sets & models
- Train models at scale: efficient scaling to many nodes
- Access optimized tools: continuous performance gains for TensorFlow, MXNet, & more
- Run in the cloud: AWS, Microsoft, Alibaba, Tencent, Google, Baidu, & more
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance. Source: Intel measured as of November 2016. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804
10. Up to 65% Performance Boost with Intel® AVX-512 on the Intel® Xeon® Platinum 8180 Processor
Convolution layer performance on the Intel® Xeon® Platinum 8180 processor, measured in milliseconds and shown relative to a 1.0 baseline (higher is better):
- Caffe GoogLeNet v1: 1.00 with Intel® AVX-512 off → 1.37 with Intel® AVX-512 on
- Caffe AlexNet: 1.00 with Intel® AVX-512 off → 1.65 with Intel® AVX-512 on
These results quantify the value Intel® AVX-512 adds to convolution layer performance: all results were measured on the Intel® Xeon® Platinum 8180 processor running AI topologies on the Caffe framework with and without Intel® AVX-512 enabled. Batch sizes — AlexNet: 256; GoogLeNet v1: 96. Configuration details on slide 24. Source: Intel, measured as of June 2017.
Performance estimates were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown." Implementation of these updates may make these results inapplicable to your device or system.
Generational performance improvements: enhanced compute performance with Intel® AVX-512 on Intel® Xeon® Scalable processors.
13. Increasing AI Performance on Intel® Xeon® Processors
Intel® Optimizations for Caffe ResNet-50 inference throughput performance¹:
- BASE: 2S Intel® Xeon® Platinum 8180 processor (28 cores/socket), 1st Generation Intel® Xeon® Scalable processor, at SKX launch, July 2017
- Up to 5.7x vs. BASE: the same 1st Generation processor with subsequent software optimizations
- Up to 14x vs. BASE: 2S Intel® Xeon® Platinum 8280 processor (28 cores/socket), 2nd Generation Intel® Xeon® Scalable processor
- Up to 30x vs. BASE: 2S Intel® Xeon® Platinum 9282 processor (56 cores/socket), 2nd Generation Intel® Xeon® Scalable processor
Intel® DL Boost theoretical throughput per core over 1st Generation Intel® Xeon® Scalable processors:
- 1st Gen Xeon-SP, FP32: baseline
- 1st Gen Xeon-SP, Int8: up to 1.3x — faster throughput, but inefficient: uses 3 instructions per operation (VPMADDUBSW, VPMADDWD, VPADDD)
- 2nd Gen Xeon-SP, Int8 with Intel® DL Boost: up to 3x — DL Boost combines the 3 instructions into 1 (VPDPBUSD)
¹Based on Intel internal testing: 1x, 5.7x, 14x and 30x performance improvement based on Intel® Optimization for Caffe ResNet-50 inference throughput performance on Intel® Xeon® Scalable processors; see configuration details on slide 22. Performance results are based on testing as of 7/11/2017 (1x), 11/8/2018 (5.7x), 2/20/2019 (14x) and 2/26/2019 (30x) and may not reflect all publicly available security updates. No product can be absolutely secure.
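To make the instruction-count difference concrete, here is a small pure-Python sketch (an illustrative scalar emulation written for this document, not Intel code) of what each path computes per 32-bit lane: the legacy sequence multiplies unsigned×signed bytes with saturating pairwise adds (VPMADDUBSW), widens the int16 pair to int32 (VPMADDWD, used with a multiplier of 1), then accumulates (VPADDD), while VPDPBUSD fuses the whole u8×s8 dot-product-accumulate into one instruction.

```python
def saturate_i16(x):
    # VPMADDUBSW saturates each 16-bit intermediate sum
    return max(-32768, min(32767, x))

def legacy_3_instruction_dot(acc, u8, s8):
    """Legacy int8 path: VPMADDUBSW + VPMADDWD + VPADDD, one 32-bit lane.
    u8: 4 unsigned bytes, s8: 4 signed bytes, acc: int32 accumulator."""
    # VPMADDUBSW: multiply u8*s8 and add adjacent pairs into int16 (saturating)
    w0 = saturate_i16(u8[0] * s8[0] + u8[1] * s8[1])
    w1 = saturate_i16(u8[2] * s8[2] + u8[3] * s8[3])
    # VPMADDWD: widen the int16 pair to int32 (multiply by 1 and add the pair)
    dword = w0 + w1
    # VPADDD: add into the 32-bit accumulator
    return acc + dword

def vnni_1_instruction_dot(acc, u8, s8):
    """Intel DL Boost path: VPDPBUSD performs the whole u8*s8 dot product
    and 32-bit accumulation in one instruction (no int16 saturation step)."""
    return acc + sum(u * s for u, s in zip(u8, s8))
```

Both paths agree when the intermediates fit in 16 bits, e.g. `legacy_3_instruction_dot(10, [1, 2, 3, 4], [5, 6, 7, 8])` and `vnni_1_instruction_dot(10, [1, 2, 3, 4], [5, 6, 7, 8])` both return 80. The legacy path can also lose precision: with large inputs (say two 255×127 products in one pair) the int16 intermediate saturates, which is another reason the fused instruction is preferable.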
14. Intel® Nervana™ Neural Network Processors (NNP)‡
- NNP-L — dedicated DL training: fastest time-to-train, with high-bandwidth AI server connections for the most persistent, intense usage
- NNP-I — dedicated DL inference: highly efficient multi-model inferencing for cloud, data center & intense appliances
‡The Intel® Nervana™ Neural Network Processor is a future product that is not broadly available today.
15. Intel® FPGA Product Portfolio
17. Intel® Movidius™ Vision Processing Unit (VPU)
Power-Efficient Image Processing, Computer Vision & Deep Learning for Devices
- Surveillance: detection & classification, identification, multi-nodal systems, multi-modal sensing, video & image capture
- Service robots: navigation, 3D volumetric mapping, multi-modal sensing
- Wearables: detection & tracking, recognition, video/image/session capture
- Drones: sense & avoid, GPS-denied hovering, pixel labeling, video & image capture
- Smart home: detection & tracking, perimeter & presence monitoring, recognition & classification, multi-nodal systems, multi-modal sensing, video & image capture
- AR-VR HMD: 6DOF pose, position & mapping, gaze & eye tracking, gesture tracking & recognition, see-through camera
19. Intel Integrated Processor Graphics
Built-in Deep Learning Inference Acceleration
- Ubiquity/scalability: shipped in more than 1 billion Intel SoCs; broad choice of performance/power offerings across Intel® Atom™, Intel® Core™ and Intel® Xeon® processors
- Media leadership: Intel® Quick Sync Video — fixed-function media blocks that improve power and performance; Intel® Media SDK — an API that provides access to hardware-accelerated codecs
- Powerful & flexible architecture: rich data type support for 32-bit FP, 16-bit FP, 32-bit integer and 16-bit integer with SIMD multiply-accumulate instructions
- Memory architecture: shared on-die memory between CPU and GPU to enable lower latency and power
- Software support: macOS (Core ML and MPS), Windows (WinML), OpenVINO™ toolkit (Windows, Linux), clDNN
20. Intel® Gaussian Neural Accelerator (GNA)
Streaming Co-Processor for Low-Power Audio Inference & More
- Ample throughput: for speech, language, and other sensing inference
- Low power: <100 mW power consumption for always-on applications
- Flexibility: Gaussian mixture model (GMM) and neural network inference support
Try it today — Intel® Speech Enabling Developer Kit: https://software.intel.com/en-us/iot/speech-enabling-dev-kit
Learn more: https://sigport.org/sites/default/files/docs/PosterFinal.pdf
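As a concrete picture of the GMM side of that workload, here is a minimal pure-Python sketch (illustrative only; this is not GNA firmware or any Intel API) of the scoring step an always-on speech detector performs: evaluating the log-likelihood of an acoustic feature vector under a diagonal-covariance Gaussian mixture.

```python
import math

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM.
    weights: mixture weights (should sum to 1);
    means, variances: per-component lists, one entry per feature dimension."""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        # log N(x | mu, diag(var)) for one mixture component
        log_p = 0.0
        for xi, mi, vi in zip(x, mu, var):
            log_p += -0.5 * (math.log(2 * math.pi * vi) + (xi - mi) ** 2 / vi)
        total += w * math.exp(log_p)
    return math.log(total)
```

A keyword spotter evaluates scores like this for every incoming audio frame, which is why offloading the multiply-accumulate-heavy inner loop to a <100 mW coprocessor pays off.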
21. Goal: Efficient Data-Centric Architecture
Access distribution by data access frequency: hot data (accessed more often) lives in the hot tier (DRAM); cooler data (accessed less often) moves to the warm tier (SSD — Intel® 3D NAND SSD) and the cold tier (HDD/tape). Optimize performance given cost and power budget.
Memory/storage hierarchy (approximate capacity and latency):
- CPU caches (core: L1, L2, LLC): pico- to nano-seconds
- Memory sub-system: 10s of GB, <100 nanoseconds
- Move data closer to compute: 100s of GB, <1 microsecond
- Maintain persistency: 1s of TB, ~10 microseconds
- SSD storage: 10s of TB, <100 microseconds
- Network storage: 10s of TB, <100 milliseconds
22. The Best of Both Worlds with Intel® Optane™ DC Persistent Memory
- Memory attributes: performance comparable to DRAM at low latencies¹
- Storage attributes: data persistence with higher capacity than DRAM²
¹"Performance comparable to DRAM" — Intel persistent memory is expected to perform at latencies near DDR4 DRAM. "Low latencies" — data transferred across the memory bus incurs latencies orders of magnitude lower than transferring data across PCIe or I/O buses to NAND/hard disk. ²Intel persistent memory offers 3 capacities — 128GB, 256GB, 512GB; individual DDR4 DRAM DIMMs max out at 256GB. Performance results are based on testing as of February 22, 2019 and may not reflect all publicly available security updates. See slide 24 for details. No product or component can be absolutely secure.
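One common way applications consume persistent memory with these "memory plus storage" attributes is as memory-mapped files with direct load/store access. The following sketch uses an ordinary temp file via Python's `mmap` as a stand-in for a file on a persistent-memory-backed filesystem (the file name and sizes are arbitrary choices for the example):

```python
import mmap
import os
import struct
import tempfile

# An ordinary temp file stands in for a file on a pmem-backed filesystem.
path = os.path.join(tempfile.mkdtemp(), "pmem_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # pre-size the region before mapping

# Map the region and write through it with load/store-style access.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        struct.pack_into("<q", region, 0, 42)  # store a 64-bit value at offset 0
        region.flush()  # on real pmem, this is where caches are flushed to media

# Re-open the file: the value survives because the backing store is persistent.
with open(path, "rb") as f:
    value, = struct.unpack("<q", f.read(8))
print(value)  # 42
```

The design point the slide makes is exactly this combination: the write path has memory-like load/store semantics, while the data keeps storage-like persistence across restarts.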
23. Connectivity: High-speed connectivity for massively parallel & distributed AI
- Intel® Silicon Photonics: connects memory and compute, integrating connectivity technologies onto a single die for affordable, scalable solutions
- SmartNIC (Cascade Glacier), coming soon: enables optimized performance for Intel® Xeon® processor-based systems
- Intel® Omni-Path Architecture: provides a low-latency interconnect to scale to hundreds of thousands of nodes without losing performance or reliability
All products, computer systems, dates, and figures are preliminary based on current expectations, and are subject to change without notice.
24. Intel® Omni-Path Architecture: Evolutionary Approach, Revolutionary Features, End-to-End Solution
- HFI adapters: single port; x8 adapter (58 Gb/s) and x16 adapter (100 Gb/s)
- Edge switches: 1U form factor; 24-port and 48-port edge switches
- Director switches: QSFP-based; 288-port director switch (7U chassis) and 1,152-port director switch (20U chassis), both with 48-port leaves
- Cables: passive copper and active optical, from third-party vendors
- Silicon for OEM custom designs: switch silicon up to 48 ports (1200 GB/s total bandwidth); HFI silicon up to 2 ports (50 GB/s total bandwidth); "-F" processors with integrated HFI
- Software: open-source host software and fabric manager
25. The Intel® Artificial Intelligence stack, from data center to edge to device
Solutions (for solution architects): platforms & AI Solutions Catalog (public & internal) spanning finance, healthcare, energy, industrial, transport, retail, home & more
Toolkits (for app developers):
- Deep learning deployment — OpenVINO™†: Open Visual Inference & Neural Network Optimization toolkit for inference deployment on CPU, processor graphics, FPGA & VPU using TF, Caffe* & MXNet*; Intel® Movidius™ SDK: optimized inference deployment for all Intel® Movidius™ VPUs using TensorFlow* & Caffe*
- Deep learning — Intel® Deep Learning Studio‡: open-source tool to compress the deep learning development cycle
Libraries (for data scientists):
- Deep learning frameworks — now optimized for CPU: TensorFlow*, MXNet*, Caffe*, BigDL/Spark*; optimizations in progress: Caffe2*, PyTorch*, PaddlePaddle*
- Machine learning libraries — Python: scikit-learn, pandas, NumPy; R: Cart, Random Forest, e1071; distributed: MLlib (on Spark), Mahout
Foundation (for library developers):
- Analytics, machine & deep learning primitives — Python: Intel distribution optimized for machine learning; DAAL: Intel® Data Analytics Acceleration Library (for machine learning); MKL-DNN, clDNN: open-source deep neural network functions for CPU & processor graphics
- Deep learning graph compiler — Intel® nGraph™ Compiler (alpha): open-sourced compiler for deep learning model computations optimized for multiple devices (CPU, GPU, NNP) using multiple frameworks (TF, MXNet, ONNX)
Hardware (for IT system architects): CPUs, GPUs, FPGAs & deep learning accelerators (NNP L-1000) for training & inference
†Formerly the Intel® Computer Vision SDK. *Other names and brands may be claimed as the property of others.
ai.intel.com
31. Intel® Xeon® Processors: Now Optimized for Deep Learning
Deliver significant AI performance with hardware & software optimizations (optimized frameworks plus optimized Intel® MKL libraries) on the Intel® Xeon® Scalable family:
- Inference throughput: up to 241x¹ higher Intel-optimized Caffe GoogLeNet v1 with Intel® MKL inference throughput on the Intel® Xeon® Platinum 8180 processor compared to the Intel® Xeon® processor E5-2699 v3 with BVLC-Caffe
- Training throughput: up to 277x¹ higher Intel-optimized Caffe AlexNet with Intel® MKL training throughput on the Intel® Xeon® Platinum 8180 processor compared to the Intel® Xeon® processor E5-2699 v3 with BVLC-Caffe
Inference and training throughput use FP32 instructions.
¹The benchmark results may need to be revised as additional testing is conducted. The results depend on the specific platform configurations and workloads utilized in the testing, and may not be applicable to any particular user's components, computer system or workloads. Source: Intel measured as of June 2018. Configurations: see slide 4.
32. Intel Software — Extract Performance: optimization tools & SDKs from edge to data center to cloud
- Media (fast, dense, high-quality transcoding): build highly optimized media infrastructure, solutions, & applications
- AI & IoT (manufacturing, retail, drones, robots, smart cities, autonomous driving, gaming...): create solutions using computer vision — OpenVINO™ toolkit, deep learning, graphics, libraries, media, OpenCL™, & more
- AI, HPC, enterprise (technical & enterprise compute): improve performance, scalability, & reliability for applications and frameworks — computing and ML/DL, with the Intel® Distribution of Python and Intel® DAAL
- System & embedded apps: take advantage of deep system-wide insight & analysis
34. AI Software Optimization: Intel® Parallel Studio XE
- Science & research — NERSC (National Energy Research Scientific Computing Center): up to 35X faster application performance (read case study)
- Artificial intelligence — Google Cloud Platform: performance speedup of up to 23X with Intel-optimized scikit-learn vs. stock scikit-learn (read blog)
- Life science — LAMMPS code, Sandia National Laboratories: simulations ran up to 7.6X faster with 9X energy efficiency** (read technology brief)
For more success stories, review the Intel® Parallel Studio XE case studies.
**Intel® Xeon Phi™ Processor Software Ecosystem Momentum Guide
Performance results are based on tests from 2016-2017 and may not reflect all publicly available security updates. See configuration disclosures and individual case study links for details. No product can be absolutely secure.
35. Artificial Intelligence
Energy
EDA
Science & Research
Manufacturing
Government
Computer Software
IT
Healthcare
Digital Media
Telecommunications
35
Intel®ParallelStudioXEforAI:HighPerformance,
ScalableSoftwareacrossMultipleIndustries
4X 8X 1.35X
Kyoto University
the Walker Molecular
Dynamics lab
3X
1.4X 4X
10X
11X
25X
2.5X 1.25X 1.3X
5X 2X
20X
2.5X
Performance results are based tests from ~2015-2017 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. For more
complete information about performance and benchmark results, visit www.intel.com/benchmark. See configurations in Intel® Parallel Studio XE Case Studies deck, & individual case studies links at this site
More Success Stories
▪ Intel® Parallel Studio XE
Case Studies deck
▪ Case studies site
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets & other optimizations. Intel does not guarantee the availability, functionality,
or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer
to the applicable product User & Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804
37. Python* Landscape
Challenge #1
Domain experts are not
professional software programmers
Adoption of Python
continues to grow among
domain experts &
developers for its
productivity benefits
Most Popular Coding Languages of 2018
Challenge #2
Python performance limits migration
to production systems
Intel’s Python Tools
› Accelerate Python performance
› Enable easy access
› Empower the community
38.
1Available only in Intel® Parallel Studio Composer Edition.
Faster Performance | Greater Productivity | Ecosystem Compatibility
Supports Python 2.7 & 3.6, Conda & PIP
Operating System: Windows*, Linux*, MacOS1*
Intel® Architecture Platforms
Performance Libraries, Parallelism,
Multithreading, Language Extensions
› Accelerated NumPy/SciPy/scikit-learn
with Intel® MKL1 & Intel® DAAL2
› Data analytics, machine learning & deep
learning with scikit-learn, pyDAAL,
TensorFlow* & Caffe*
› Scale with Numba* & Cython*
› Includes optimized mpi4py, works with
Dask* & PySpark*
› Optimized for latest Intel® architecture
› Prebuilt & optimized packages for
numerical computing, machine/deep
learning, HPC, & data analytics
› Drop-in replacement for existing Python -
no code changes required
› Jupyter* notebooks, Matplotlib included
› Free download & free for all uses
including commercial deployment
› Supports Python 2.7 & 3.6, optimizations
integrated in Anaconda* Distribution
› Distribution & optimized packages available
via Conda, PIP, APT-GET, YUM, & DockerHub;
numerical performance optimizations
integrated in Anaconda Distribution
› Optimizations upstreamed to main Python
trunk
› Priority Support with Intel® Parallel Studio XE
1Intel® Math Kernel Library
2Intel® Data Analytics Acceleration Library
Prebuilt & Accelerated Packages
Accelerate Python* with Intel® Distribution for Python*: High Performance Python* for Scientific Computing, Data Analytics, Machine & Deep Learning
Learn More: software.intel.com/distribution-for-python
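The performance motivation is easy to demonstrate: the same reduction written as a pure-Python loop and as a single vectorized NumPy call (which the accelerated NumPy above routes to optimized native kernels) typically differs by an order of magnitude or more. A minimal sketch, assuming only stock NumPy; exact timings vary by machine:

```python
import time
import numpy as np

n = 100_000
xs = list(range(n))
arr = np.arange(n, dtype=np.float64)

# Pure-Python loop: every element is boxed and dispatched dynamically.
t0 = time.perf_counter()
total_py = sum(x * x for x in xs)
t_py = time.perf_counter() - t0

# Vectorized NumPy: one call into a compiled (and, in the Intel®
# Distribution for Python, MKL-accelerated) kernel.
t0 = time.perf_counter()
total_np = float(np.dot(arr, arr))
t_np = time.perf_counter() - t0

print(f"python loop: {t_py:.5f}s  numpy dot: {t_np:.5f}s")
```

Both compute the same sum of squares; the vectorized form is what the optimized NumPy/SciPy/scikit-learn packages listed above accelerate further.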
40. Fast, Scalable Code with Intel® Math Kernel Library (Intel® MKL)
› Speeds computations for scientific, engineering, financial and
machine learning applications by providing highly optimized,
threaded, and vectorized math functions
› Provides key functionality for dense and sparse linear algebra
(BLAS, LAPACK, PARDISO), FFTs, vector math, summary
statistics, deep learning, splines and more
› Dispatches optimized code for each processor automatically
without the need to branch code
› Optimized for single core vectorization and cache utilization
› Automatic parallelism for multi-core and many-core
› Scales from core to clusters
› Available at no cost & royalty free
› Great performance with minimal effort!
1 Available only in Intel® Parallel Studio Composer Edition.
Intel® MKL Library Offers…
Dense & Sparse Linear Algebra
Fast Fourier Transforms
Vector Math
Vector RNGs
Fast Poisson Solver
& More!
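These functional areas map directly onto familiar NumPy calls; with an MKL-backed NumPy (as shipped in the Intel® Distribution for Python) each of the calls below dispatches to the corresponding MKL domain (BLAS/LAPACK, FFT, RNG). A minimal sketch using stock NumPy APIs only:

```python
import numpy as np

rng = np.random.default_rng(0)  # vector RNG

# Dense linear algebra (BLAS/LAPACK): solve A x = b.
a = rng.standard_normal((200, 200))
b = rng.standard_normal(200)
x = np.linalg.solve(a, b)
residual = np.linalg.norm(a @ x - b)

# Fast Fourier transform: a round trip should recover the signal.
signal = rng.standard_normal(1024)
recovered = np.fft.ifft(np.fft.fft(signal)).real
fft_err = np.max(np.abs(recovered - signal))

print(f"solve residual: {residual:.2e}, FFT round-trip error: {fft_err:.2e}")
```

No code changes are needed to benefit: the same script runs faster when NumPy is linked against MKL.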
41. Speed up Analytics & Machine Learning with Intel® Data Analytics Acceleration Library (Intel® DAAL)
› Highly tuned functions for classical machine learning & analytics
performance from datacenter to edge running on Intel®
processor-based devices
› Simultaneously ingests data & computes results for highest
throughput performance
› Supports batch, streaming & distributed usage models to meet a
range of application needs
› Includes Python*, C++, Java* APIs, & connectors to popular data
sources including Spark* & Hadoop*
Pre-processing | Transformation | Analysis | Modeling | Decision Making
Decompression,
Filtering,
Normalization
Aggregation,
Dimension Reduction
Summary Statistics
Clustering, etc.
Machine Learning (Training)
Parameter Estimation
Simulation
Forecasting
Decision
Trees, etc.
Validation
Hypothesis Testing
Model Errors
What’s New in the 2019 Release
New Algorithms
› Logistic Regression, the most widely used classification algorithm
› Extended Gradient Boosting Functionality for inexact split calculations &
user-defined callback canceling for greater flexibility
› User-defined Data Modification Procedure supports a wide range of
feature extraction & transformation techniques
Learn More: software.intel.com/daal
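Logistic regression, the headline 2019 addition, reduces to repeated dense matrix-vector products of exactly the kind DAAL tunes. A plain-NumPy training sketch on synthetic data (illustrative only; daal4py, DAAL's Python API, provides the optimized equivalent):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic binary classification data from a known linear rule.
n, d = 1000, 5
true_w = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = (X @ true_w + 0.1 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the logistic loss.
w = np.zeros(d)
lr = 0.5
for _ in range(300):
    p = sigmoid(X @ w)          # predicted probabilities
    grad = X.T @ (p - y) / n    # gradient of the mean logistic loss
    w -= lr * grad

accuracy = np.mean((X @ w > 0) == (y > 0.5))
print(f"training accuracy: {accuracy:.3f}")
```

DAAL's batch, streaming, and distributed modes apply the same computation pattern at datacenter scale.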
42. High Performance Deep Learning for Apache Spark* on CPU Infrastructure
Spark Core Feature Parity | Lower TCO, improved ease of use | Efficient Scale-Out
No need to deploy costly accelerators, duplicate data,
or suffer through scaling headaches!
Designed & Optimized for Intel® Xeon® Processors
Powered by Intel® MKL-DNN
Spark stack: DataFrame | ML Pipelines | SQL | SparkR | Streaming | MLlib | GraphX | BigDL
43. BigDL use cases:
Consumer Sentiment Analysis | Image Similarity Search | Image Transfer Learning | Image Generation | 3D Image Support | Fraud Detection | Anomaly Detection | Recommendation (NCF, Wide & Deep) | Object Detection | TensorFlow support | Low-latency serving
Industries: Health | Finance | Retail | Manufacturing | Infrastructure
44. Result
Client
JD.Com, 2nd largest
online retailer in China,
~ 25 M users.
Challenge
Building deep learning applications
such as image similarity search
without moving data.
Solution
Switched from GPU to CPU cluster.
Using Apache Spark* with BigDL,
running on Intel® Xeon® processors
Intel® Xeon® CPUs processing ~380M images
4X Gain
Case Study: Image Recognition
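Image similarity search of the kind JD.com deployed typically compares feature embeddings extracted by a CNN, ranking the catalog by cosine similarity to the query. A minimal sketch with hypothetical random embeddings (in production the vectors would come from a BigDL/Spark pipeline over the image corpus):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 256-d embeddings for a small catalog, L2-normalized.
catalog = rng.standard_normal((1000, 256))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

# Query: a slightly perturbed copy of item 42, so item 42 should rank first.
query = catalog[42] + 0.05 * rng.standard_normal(256)
query /= np.linalg.norm(query)

scores = catalog @ query               # cosine similarity (unit vectors)
top5 = np.argsort(scores)[::-1][:5]    # indices of the 5 best matches
print("top-5 matches:", top5)
```

The ranking step is a single dense matrix-vector product, which is why CPU clusters with optimized BLAS handle it well at scale.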
45. The integrated surveillance system connected to cameras at stadiums, which transmitted video
data to operational HQ in each city.
Intel® Distribution of OpenVINO™ toolkit allowed AxxonSoft to distribute the neural network video analytics across all available Intel hardware for zone entry detection, abandoned object detection, and facial recognition.
Security for Stadiums at World Cup 2018
Result
9,000+ surveillance cameras used to protect 2 million+ fans
See case study for details.
46. Result
60% increase in coverage rate and 20% improvement in accuracy rate, better than the traditional rule-based approach
Intel does not control or audit third-party benchmark data or the web sites referenced in this document.
You should visit the referenced web site and confirm whether referenced data are accurate. *Other names and brands may be claimed as the property of others.
“Performance of Intel® Xeon® processors and the sustained optimization of Apache Spark were key [to deploy] a single platform that consolidates and analyzes all types of data, from any channel, within a highly secure environment.”
https://ai.intel.com/nervana/wp-content/uploads/sites/53/2018/06/Intel-White-Paper-Union-Pay_2_hir-res_Keep-the-Size-of-Figure-6.pdf
https://www.intel.com/content/www/us/en/financial-services-it/union-pay-case-study.html
Client
China UnionPay*, which
specializes in banking
services and payment
systems. It is the 3rd largest
payment network in the world.
Challenge
Detect fraudulent credit card
transactions with more coverage and
accuracy.
Solution
Using Cloudera Enterprise (Hadoop Cluster),
Apache Spark* with BigDL, running on Intel®
Xeon® and 5th Gen Intel® Core™ Processors for
credit card fraud detection. Historical data is
stored on Apache Hive*. Data preprocessing
done with Apache Spark SQL*.
47. Result
Working closely with Intel’s Analytics Zoo team, Midea built a highly optimized defect detection solution, choosing Intel® Xeon® Scalable 6130/6148 processors over GPU-based servers because they met Midea’s latency requirements and integrated more easily into the existing infrastructure.
https://software.intel.com/en-us/articles/industrial-inspection-platform-in-midea-and-kuka-using-distributed-tensorflow-on-analytics
“Analytics Zoo from Intel provides a great tool for developing
the end-to-end AI solutions, building pipelines across cloud
and edge computing, and optimizing the hardware resources.”
Zheng Hu, Director of Computer
Vision Research Institute, Midea
Public
Client
Midea Group is a Chinese
electrical appliance
manufacturer with 21
manufacturing plants and 260
logistics centers across 200
countries
Challenge
Midea needed to detect defects such as scratched surfaces, missing bolts, and misaligned labeling on surfaces (glass, polished metal, painted); human inspection could not meet target quality metrics or detection rate requirements.
Solution
An advanced defect inspection system built on
top of Analytics Zoo, which provides a unified
analytics + AI platform that seamlessly unites
Spark, BigDL and TensorFlow* programs into
an integrated pipeline. The system was based
on Intel® Xeon Scalable 6130/6148 servers
and Core i7 edge devices.
48. Result
The platform provides multiple functions from onboard Wi-Fi to computer vision applications
such as human/vehicle detection at crossroads, onboard empty seat detection and intruder
detection.
OpenVINO™ provides a scalable, high performance common platform across a variety of
hardware for greater efficiencies.
In-train vision platform
Enables pedestrian & vehicle
identification at crossroads +
on-train empty seat detection
50. OpenVINO™ Software Toolkit: Visual Inferencing & Neural Network Optimization
DEPLOY COMPUTER VISION & DEEP LEARNING CAPABILITIES TO THE EDGE
High Performance, High Efficiency for the Edge
Write Once + Scale to Diverse Accelerators
Broad Framework Support
Other names and brands may be claimed as the property of others
VPU = Vision Processing Unit (Movidius)
51. What’s Inside the OpenVINO™ toolkit
OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos
Intel® Architecture-Based Platforms Support
OS Support: CentOS* 7.4 (64-bit), Ubuntu* 16.04.3 LTS (64-bit), Microsoft Windows* 10 (64-bit), Yocto Project* version Poky Jethro v2.0.3 (64-bit)
Intel® Deep Learning Deployment Toolkit Traditional Computer Vision Tools & Libraries
Model Optimizer
Convert & Optimize
Inference Engine
IR | Optimized Inference | OpenCV* | OpenVX*
Photography
Vision
Optimized Libraries
IR = Intermediate
Representation file
For Intel® CPU & CPU with integrated graphics
Increase Media/Video/Graphics Performance
Intel® Media SDK
Open Source version
OpenCL™
Drivers & Runtimes
For CPU with integrated graphics
Optimize Intel® FPGA
FPGA RunTime Environment
(from Intel® FPGA SDK for OpenCL™)
Bitstreams
FPGA – Linux* only
20+ Pre-trained Models
Code Samples
Computer Vision Algorithms
Samples
52.
[Chart: relative performance improvement, up to ~20x, for GoogLeNet v1, VGG16*, and SqueezeNet* 1.1 at batch sizes 1 and 32, comparing Std. Caffe on CPU, OpenCV on CPU, OpenVINO on CPU, OpenVINO on GPU, and OpenVINO on FPGA.]
Get an even Bigger Performance Boost with Intel® FPGA
1Depending on workload, quality/resolution for FP16 may be marginally impacted. A performance/quality tradeoff from FP32 to FP16 can affect accuracy; customers are encouraged to experiment to find what works best
for their situation. Performance results are based on testing as of June 13, 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. For
more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Configuration: Testing by Intel as of June 13, 2018. Intel® Core™ i7-6700K CPU @ 2.90GHz fixed, GPU GT2 @
1.00GHz fixed Internal ONLY testing, Test v3.15.21 – Ubuntu* 16.04, OpenVINO 2018 RC4, Intel® Arria® 10 FPGA 1150GX. Tests were based on various parameters such as model used (these are public), batch size, and
other factors. Different models can be accelerated with different Intel hardware solutions, yet use the same Intel software tools.
Chart axes: Public Models (Batch Size) vs. Relative Performance Improvement, with Standard Caffe* as the baseline; up to 19.9x1 with OpenVINO on CPU + Intel® FPGA; OpenVINO on CPU + Intel® Processor Graphics (GPU) at FP16. Comparison of Frames per Second (FPS).
Increase Deep Learning Workload Performance on Public Models using OpenVINO™ toolkit & Intel® Architecture
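The FP32-to-FP16 tradeoff flagged in the footnote can be seen directly by round-tripping weights through half precision. A minimal NumPy sketch with hypothetical random weights (not toolkit code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FP32 model weights.
w32 = rng.standard_normal(10_000).astype(np.float32)

# Round-trip through FP16, as a half-precision deployment target would store them.
w16 = w32.astype(np.float16).astype(np.float32)

# FP16 keeps ~11 significand bits, so relative error stays near 2**-11.
mask = np.abs(w32) > 1e-3  # skip near-zero weights, where relative error is ill-defined
rel_err = np.max(np.abs(w16 - w32)[mask] / np.abs(w32)[mask])
print(f"max relative FP16 rounding error: {rel_err:.2e}")
```

For most vision models this sub-0.1% weight perturbation barely moves accuracy, but as the footnote advises, the effect is workload-dependent and worth measuring.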
53. oneAPI
Single Programming Model
to Deliver Cross-Architecture Performance
All information provided in this deck is subject to change without notice.
Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
54. Programming Challenge
Diverse set of data-centric hardware
No common programming language or APIs
Inconsistent tool support across platforms
Each platform requires unique
software investment
SVMS: Scalar (CPU) | Vector (GPU) | Matrix (AI) | Spatial (FPGA)
Optimization Notice
55. The future is a diverse mix of scalar,
vector, matrix, & spatial architectures
deployed in CPU, GPU, AI, FPGA & other
accelerators
Diverse Workloads Require Diverse Architectures
SVMS: Scalar (CPU) | Vector (GPU) | Matrix (AI) | Spatial (FPGA)
56.
Project oneAPI delivers a unified
programming model to simplify
development across diverse
architectures
Common developer experience across
Scalar, Vector, Matrix & Spatial
architectures (CPU, GPU, AI and FPGA)
Uncompromised native high-level
language performance
Based on industry standards & open
specifications
Intel’s oneAPI Core Concept
Optimized Applications
Optimized Middleware / Frameworks
oneAPI Language & Libraries + Tools
Scalar (CPU) | Vector (GPU) | Matrix (AI) | Spatial (FPGA)
57. Some capabilities may differ per architecture.
oneAPI for Cross-Architecture Performance
Optimized Applications
Optimized Middleware & Frameworks
oneAPI Product
Direct Programming
Data Parallel C++
API-Based Programming
Libraries
Analysis &
Debug Tools
Scalar (CPU) | Vector (GPU) | Matrix (AI) | Spatial (FPGA)
58. Language to deliver uncompromised parallel programming productivity and performance across CPUs
and accelerators
Based on C++ with language enhancements being driven through community project
Open, cross-industry alternative to single architecture proprietary language
Data Parallel C++: a Standards-based, Cross-architecture Language
There will still be a need to tune for each architecture.
59. Visit TechDecoded.intel.io — a video series where developers learn to put into
practice key optimization strategies with Intel Development tools.
Focused conversations where
tech. visionaries share key
concepts on front-line topics,
what you need to know and why
it matters.
Put into practice: short videos and articles that deliver the how-to’s of specific programming tasks using Intel tools.
Watch Big Picture videos | Dig deeper with Essential Webinars | Get started with Quick Hits
Webinars covering strategies,
practices and tools that help
you optimize applications and
solutions performance.
Topics: Visual Computing | Code Modernization | Systems & IoT | Data Science | Data Center & Cloud Computing
Get the Most from Your Code Today with Intel Tech.Decoded
68. Key Vision Solutions Optimized by Intel® Distribution of OpenVINO™ toolkit
Intel teamed with Philips to show that servers powered by Intel® Xeon®
Scalable processors & Intel® Distribution of OpenVINO™ toolkit can efficiently
perform deep learning inference on patients’ X-rays & computed tomography
(CT) scans, without the need for accelerators. Achieved breakthrough
performance for AI inferencing:
▪ 188x increase in throughput (images/sec) on Bone-age prediction model.1
▪ 38x increase in throughput (images/sec) on Lung segmentation model. 1
“Intel® Xeon® Scalable processors and OpenVINO toolkit appears to be the right solution for medical imaging AI
workloads. Our customers can use their existing hardware to its maximum potential, without having to complicate their
infrastructure, while still aiming to achieve quality output resolution at exceptional speeds."
— Vijayananda J., chief architect and fellow, Data Science and AI, Philips HealthSuite Insights, India
White Paper
1See white paper for performance details.
Philips
69.
The Intel® Distribution of OpenVINO™ toolkit helped GE deliver
optimized inferencing to its deep learning image-classification solution.
By bringing AI to its clinical diagnostic scanning, GE no longer needed
an expensive 3rd party accelerator board, achieving:
▪ 5.9x inferencing performance above the target1
▪ 14x inferencing speed over the baseline solution1
▪ Improved image quality, diagnostic capabilities, and clinical workflows
“With the OpenVINO™ toolkit, we are now able to optimize inferencing across Intel® silicon, exceeding our throughput goals by almost 6x,” said David Chevalier, Principal Engineer for GE Healthcare.
“We want to not only keep deployment costs down for our customers, but also offer a flexible, high-performance solution for a new era of
smarter medical imaging. Our partnership with Intel allows us to bring the power of AI to clinical diagnostic scanning and other healthcare
workflows in a cost-effective manner.”
GE Healthcare*
Intel-GE Healthcare, Intel® Distribution of OpenVINO™ Optimizes Deep Learning Performance for Healthcare Imaging
Key Vision Solutions Optimized by Intel® Distribution of OpenVINO™ toolkit
1See white paper for performance details.