In three AI development workflows, Intel processor-powered workstations delivered strong performance without using their GPUs, making them a good choice for this part of the AI process.
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
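The dramatic memory reduction from 16-bit floating point in the third workflow can be illustrated with a minimal NumPy sketch (the array shape below is an arbitrary stand-in, not a figure from the study):

```python
import numpy as np

# Arbitrary stand-in for a model's weights or activations.
weights_fp32 = np.ones((1024, 1024), dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)  # 16-bit floating point

print(weights_fp32.nbytes)  # 4194304 bytes
print(weights_fp16.nbytes)  # 2097152 bytes: half the memory footprint
```

Halving each element from 4 bytes to 2 bytes cuts the memory footprint in half before any other optimization is applied, which is consistent with the reduced memory usage the study observed.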
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci Intel® Software
Preprocess, visualize, and build AI faster at scale on Intel architecture. Develop end-to-end AI inference pipelines, including data ingestion, preprocessing, and model inferencing with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
Accelerate Machine Learning Software on Intel Architecture Intel® Software
This session presents performance data for deep learning training for image recognition, achieving a greater-than-24x speedup on a single Intel® Xeon Phi™ processor 7250 compared to Caffe*. In addition, we present performance data showing that training time is further reduced, with a 40x speedup on a 128-node Intel® Xeon Phi™ processor cluster over Intel® Omni-Path Architecture (Intel® OPA).
Python* Scalability in Production Environments Intel® Software
This document discusses scaling Python performance in production environments. It introduces the Intel Distribution for Python, which provides optimized versions of NumPy, SciPy, and Scikit-Learn using Intel MKL to accelerate linear algebra and machine learning algorithms. It also supports parallelism through MPI, TBB for multithreading, and integration with big data frameworks. Profiling tools like Intel VTune Amplifier help optimize mixed-language Python applications for Intel architectures. The goal is to make Python usable for high performance computing and big data workloads while maintaining its ease of use.
This document discusses accelerating artificial intelligence and contains the following key points:
1. The amount of data generated from various technologies is growing exponentially and will require vast increases in computing power to process and analyze.
2. Intel is developing new hardware like Intel Xeon and FPGA chips as well as frameworks and libraries to provide the computing capabilities needed for advanced AI workloads.
3. Optimizations in hardware, software, frameworks and algorithms can significantly boost AI performance on Intel platforms, allowing complex deep learning models to be trained in hours instead of days.
Tackle more data science challenges than ever before without the need for discrete acceleration with the 3rd Gen Intel® Xeon® Scalable processors. Learn about the built-in AI acceleration and performance optimizations for popular AI libraries, tools and models.
This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019.
Please find the recording here: https://youtu.be/60o3eyG5OLM
In this talk, Tong will start with the current landscape and typical use cases of Artificial Intelligence applications in the Telco domain. Then, she will introduce Intel’s strategy and products for Network AI, including our focus areas, our hardware portfolio, software stacks, roadmaps and some case studies.
Speaker: Tong Zhang, Principal Engineer and Chief Architect for AI and Analytics of the Network Platforms Group, Intel
oneAPI: Industry Initiative & Intel Product Tyrone Systems
With the growth of AI, machine learning, and data-centric applications, the industry needs a programming model that allows developers to take advantage of rapid innovation in processor architectures. TensorFlow supports the oneAPI industry initiative and its standards-based open specification.
oneAPI complements TensorFlow’s modular design and provides increased choice of hardware vendor and processor architecture, and faster support of next-generation accelerators. TensorFlow uses oneAPI today on Xeon processors and we look forward to using oneAPI to run on future Intel architectures.
Improve AI inference performance with HPE ProLiant DL380 Gen11 servers, power...Principled Technologies
In ResNet-50 image-recognition testing, these servers handled significantly more samples per second than previous-generation HPE ProLiant servers while achieving lower latency
Conclusion
Companies using AI inference to solve business problems have a range of choices for running these computationally demanding applications. We explored the potential of one solution, the HPE ProLiant DL380 Gen11 server featuring 4th Generation Intel Xeon Gold processors. We compared this server to its previous-generation counterpart on ResNet-50 tests using FP32 precision and found it delivered 2.86 times the inference performance while reducing latency by 30.1 percent. We also tested the HPE ProLiant DL380 Gen11 server at lower precision levels, which place greater demand on CPU resources, and found its performance to be strong with both Int8 and bfloat16 precision levels. Compared to potentially pricey pay-as-you-go cloud solutions and high-end GPU-based server solutions, the HPE ProLiant DL380 Gen11 we tested can be a smart option for businesses harnessing the power of AI imaging applications.
Tuning For Deep Learning Inference with Intel® Processor Graphics | SIGGRAPH ...Intel® Software
This document discusses optimizing deep learning inference on Intel processor graphics using the OpenVINO™ toolkit. Some key points include:
- Running inference on client devices provides advantages over cloud like privacy, bandwidth savings, and responsiveness.
- OpenVINO™ provides tools to optimize models for Intel hardware and achieve 5-10x speedups on Intel GPUs compared to CPU baselines.
- A case study demonstrates optimizing a deep image matting model, reducing inference time from 2.35 seconds to 291 milliseconds on Intel GPU using OpenVINO™.
- Emerging technologies like federated learning are discussed which could improve privacy for on-device inference.
This issue’s feature article, Tuning Autonomous Driving Using Intel® System Studio, illustrates how the tools in Intel System Studio give embedded systems and connected device developers an integrated development environment to build, debug, and tune performance and power usage. Continuing the theme of tuning edge applications, Building Fast Data Compression Code for Cloud and Edge Applications shows how to use the Intel® Integrated Performance Primitives to speed data compression.
Python Data Science and Machine Learning at Scale with Intel and Anaconda Intel® Software
Python is the number-one language for data scientists, and Anaconda is the most popular Python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, Scikit-learn, Modin, pandas, and XGBoost directly from the Anaconda repository through simple installation and minimal code changes.
Software Development Tools for Intel® IoT PlatformsIntel® Software
This talk familiarizes participants with the benefits of using the Intel® software development tools and libraries for developing end-to-end IoT solutions.
Diane Bryant of Intel's Data Center & Connected Systems Group shares the latest news on the Intel Xeon E5 v2 family of processors and technologies like Intel Network Builders that enable the re-architecture of the data center.
Helixa uses serverless machine learning architectures to power an audience intelligence platform. It ingests large datasets and uses machine learning models to provide insights. Helixa's machine learning system is built on AWS serverless services like Lambda, Glue, Athena and S3. It features a data lake for storage, a feature store for preprocessed data, and uses techniques like map-reduce to parallelize tasks. Helixa aims to build scalable and cost-effective machine learning pipelines without having to manage servers.
Unleashing Data Intelligence with Intel and Apache Spark with Michael GreeneDatabricks
Organizations are developing deep learning applications to derive new insights, identify new opportunities and uncover new efficiencies. However, deep learning application development often means tapping into multiple frameworks, libraries, and clusters—a complex, time-consuming, and costly effort. This keynote will discuss what the newly released BigDL (open source distributed deep learning framework for Apache Spark and Intel® Xeon® clusters) can offer to developers and what solutions Intel has enabled for customers and partners. In addition, plans for expanding BigDL ecosystem will also be highlighted.
1. The document introduces the Intel Xeon Scalable platform, which provides the foundation for data center innovation with a 1.65x average performance boost over previous generations.
2. It highlights key advantages of the platform including scalable performance, agility in rapid service delivery, and hardware-enhanced security with near-zero performance overhead.
3. Various workload-optimized solutions are discussed that leverage the platform's performance to accelerate insights from analytics, deploy cloud infrastructure more quickly, and transform networks.
Intel® Xeon® Processor E5-2600 v3 Product Family Application Showcase – Big D...Intel IT Center
This Intel® Xeon® Processor E5-2600 v3 Product Family Application Showcase focuses on Big Data/Analytics software companies who have seen performance increases with Intel products.
Faster deep learning solutions from training to inference - Michele Tameni - ...Codemotion
The Intel Deep Learning SDK enables the use of optimized open source deep-learning frameworks, including Caffe and TensorFlow, through a step-by-step wizard or iPython interactive notebooks. It includes easy and fast installation of all dependent libraries and advanced tools for easy data pre-processing and model training, optimization, and deployment, providing an end-to-end solution. In addition, it supports scale-out across multiple computers for training, as well as compression methods for deploying the models on various platforms, addressing memory and speed constraints.
This document discusses Intel's hardware and software portfolio for artificial intelligence. It highlights Intel's move from multi-purpose to purpose-built AI compute solutions from the cloud to edge devices. It also discusses Intel's data-centric infrastructure including CPUs, accelerators, networking fabric and memory technologies. Finally, it provides examples of Intel optimizations that have increased AI performance on Intel Xeon scalable processors.
This document discusses trends in high performance computing (HPC) and big data analytics. It notes that while HPC and big data have different resource needs and programming models traditionally, they are converging as big data workloads require more real-time processing and HPC workloads incorporate more data-driven analytics. The document outlines challenges in both HPC and big data such as system bottlenecks, energy efficiency, and barriers to wider usage. It advocates for more integrated solutions that combine storage, networking, processing and memory to address these challenges.
Intel Core i5 processor-powered HP EliteBooks: A better experience for enterp...Principled Technologies
Workers have a range of tasks to complete and use a number of different applications. Laptops for these users, while providing advantages in mobility, are not always equal in terms of performance, experience, or battery life. We found that Intel Core i5 processor-powered HP EliteBooks provided a number of advantages in performance and application responsiveness over an AMD processor-based HP EliteBook, while also delivering longer battery life. When your organization needs notebooks for a broad range of users performing different yet vital tasks, our testing shows that an Intel processor-powered HP EliteBook could offer a better experience than an AMD processor-based HP EliteBook notebook.
Intel Microprocessors - a Top down Approach Editor IJCATR
Intel is the world's largest manufacturer of computer chips. Although it has been challenged in recent years by newcomers AMD and Cyrix, Intel still predominates the market for PC microprocessors; nearly all PCs are based on Intel's x86 architecture. IBM (International Business Machines) is by far the world's largest information technology company in terms of gross revenue ($88 billion in 2000) and by most other measures, a position it has held for about the past 50 years. IBM products include hardware and software for a line of business servers, storage products, custom-designed microchips, and application software. Increasingly, IBM derives revenue from a range of consulting and outsourcing services. In this paper, we compare different computer system technologies, processors, and chips.
AI for good: Scaling AI in science, healthcare, and more. Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
Optimize creative and design workflows and enjoy a better user experience wit...Principled Technologies
In a series of tests, a Dell Precision 5680 handled several heavy workloads better while remaining cooler than a 16‑inch Apple MacBook Pro
Conclusion
Whether you’re editing video, rendering 3D graphics, analyzing data, or collaborating with coworkers on a PowerPoint presentation, the performance of your device can impact how productive you are. When you’re catching up on emails on the couch or trying to put the finishing touches on a video project before boarding a flight, the temperature of the device in your lap can impact your comfort. Comparing a Dell Precision 5680 to a MacBook Pro 16", we found the Precision 5680 offered better performance running several demanding workloads, remained up to 12.8°F cooler under a sustained Cinebench workload, and offered comparable audio quality. Based on our tests, users who value performance and comfort should consider the Dell Precision 5680 workstation.
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ...Principled Technologies
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10-minute drenching with no ill effects, and were easier to view in direct sunlight than Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - Infographic Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode
More Related Content
Similar to Workstations powered by Intel can play a vital role in CPU-intensive AI developer tasks
In this talk, Tong will start with the current landscape and typical use cases of Artificial Intelligence applications in the Telco domain. Then, she will introduce Intel’s strategy and products for Network AI, including our focus areas, our hardware portfolio, software stacks, roadmaps and some case studies.
Speaker: Tong Zhang, Principal Engineer and Chief Architect for AI and Analytics of the Network Platforms Group, Intel
oneAPI: Industry Initiative & Intel ProductTyrone Systems
With the growth of AI, machine learning, and data-centric applications, the industry needs a programming model that allows developers to take advantage of rapid innovation in processor architectures. TensorFlow supports the oneAPI industry initiative and its standards-based open specification.
oneAPI complements TensorFlow’s modular design and provides increased choice of hardware vendor and processor architecture, and faster support of next-generation accelerators. TensorFlow uses oneAPI today on Xeon processors and we look forward to using oneAPI to run on future Intel architectures.
Improve AI inference performance with HPE ProLiant DL380 Gen11 servers, power...Principled Technologies
In ResNet-50 image-recognition testing, these servers handled significantly more samples per second than previous-generation HPE ProLiant servers while achieving lower latency
Conclusion
Companies using AI inference to solve business problems have a range of choices for running these computationally demanding applications. We explored the potential of one solution, the HPE ProLiant DL380 Gen11 server featuring 4th Generation Intel Xeon Gold processors. We compared this server to its previous-generation counterpart on ResNet-50 tests using FP32 precision and found it delivered 2.86 times the inference performance while reducing latency by 30.1 percent. We also tested the HPE ProLiant DL380 Gen11 server at lower precision levels, which place greater demand on CPU resources, and found its performance to be strong with both Int8 and bfloat16 precision levels. Compared to potentially pricey pay-as-you-go cloud solutions and high-end GPU-based server solutions, the HPE ProLiant DL380 Gen11 we tested can be a smart option for businesses harnessing the power of AI imaging applications.
Tuning For Deep Learning Inference with Intel® Processor Graphics | SIGGRAPH ...Intel® Software
This document discusses optimizing deep learning inference on Intel processor graphics using the OpenVINOTM toolkit. Some key points include:
- Running inference on client devices provides advantages over cloud like privacy, bandwidth savings, and responsiveness.
- OpenVINOTM provides tools to optimize models for Intel hardware and achieve 5-10x speedups on Intel GPUs compared to CPU baselines.
- A case study demonstrates optimizing a deep image matting model, reducing inference time from 2.35 seconds to 291 milliseconds on Intel GPU using OpenVINOTM.
- Emerging technologies like federated learning are discussed which could improve privacy for on-device inference.
This issue’s feature article, Tuning Autonomous Driving Using Intel® System Studio, illustrates how the tools in Intel System Studio give embedded systems and connected device developers an integrated development environment to build, debug, and tune performance and power usage. Continuing the theme of tuning edge applications, Building Fast Data Compression Code for Cloud and Edge Applications shows how to use the Intel® Integrated Performance Primitives
to speed data compression.
Python Data Science and Machine Learning at Scale with Intel and AnacondaIntel® Software
Python is the number 1 language for data scientists, and Anaconda is the most popular python platform. Intel and Anaconda have partnered to bring scalability and near-native performance to Python with simple installations. Learn how data scientists can now access oneAPI-optimized Python packages such as NumPy, Scikit-Learn, Modin, Pandas, and XGBoost directly from the Anaconda repository through simple installation and minimal code changes.
Software Development Tools for Intel® IoT PlatformsIntel® Software
This talk familiarizes participants with the benefits of using the Intel® software development tools and libraries for developing end-to-end IoT solutions.
Intel's Data Center & Connected Systems Group and Diane Bryant shares the latest news on the latest Intel Xeon E5v2 family of processors and technologies like Intel Network Builders to enable the re-architecture of the Data Center.
Helixa uses serverless machine learning architectures to power an audience intelligence platform. It ingests large datasets and uses machine learning models to provide insights. Helixa's machine learning system is built on AWS serverless services like Lambda, Glue, Athena and S3. It features a data lake for storage, a feature store for preprocessed data, and uses techniques like map-reduce to parallelize tasks. Helixa aims to build scalable and cost-effective machine learning pipelines without having to manage servers.
Unleashing Data Intelligence with Intel and Apache Spark with Michael GreeneDatabricks
Organizations are developing deep learning applications to derive new insights, identify new opportunities and uncover new efficiencies. However, deep learning application development often means tapping into multiple frameworks, libraries, and clusters—a complex, time-consuming, and costly effort. This keynote will discuss what the newly released BigDL (open source distributed deep learning framework for Apache Spark and Intel® Xeon® clusters) can offer to developers and what solutions Intel has enabled for customers and partners. In addition, plans for expanding BigDL ecosystem will also be highlighted.
1. The document introduces the Intel Xeon Scalable platform, which provides the foundation for data center innovation with a 1.65x average performance boost over previous generations.
2. It highlights key advantages of the platform including scalable performance, agility in rapid service delivery, and hardware-enhanced security with near-zero performance overhead.
3. Various workload-optimized solutions are discussed that leverage the platform's performance to accelerate insights from analytics, deploy cloud infrastructure more quickly, and transform networks.
Intel® Xeon® Processor E5-2600 v3 Product Family Application Showcase – Big D...Intel IT Center
This Intel® Xeon® Processor E5-2600 v3 Product Family Application Showcase focuses on Big Data/Analytics software companies who have seen preformance increases with Intel products.
Faster deep learning solutions from training to inference - Michele Tameni - ...Codemotion
Intel Deep Learning SDK enables using of optimized open source deep-learning frameworks, including Caffe and TensorFlow through a step-by-step wizard or iPython interactive notebooks. It includes easy and fast installation of all depended libraries and advanced tools for easy data pre-processing and model training, optimization and deployment, providing an end-to-end solution to the problem. In addition, it supports scale-out on multiple computers for training, as well as using compression methods for deployment of the models on various platforms, addressing memory and speed constraints.
This document discusses Intel's hardware and software portfolio for artificial intelligence. It highlights Intel's move from multi-purpose to purpose-built AI compute solutions from the cloud to edge devices. It also discusses Intel's data-centric infrastructure including CPUs, accelerators, networking fabric and memory technologies. Finally, it provides examples of Intel optimizations that have increased AI performance on Intel Xeon scalable processors.
This document discusses trends in high performance computing (HPC) and big data analytics. It notes that while HPC and big data have different resource needs and programming models traditionally, they are converging as big data workloads require more real-time processing and HPC workloads incorporate more data-driven analytics. The document outlines challenges in both HPC and big data such as system bottlenecks, energy efficiency, and barriers to wider usage. It advocates for more integrated solutions that combine storage, networking, processing and memory to address these challenges.
Intel Core i5 processor-powered HP EliteBooks: A better experience for enterp...Principled Technologies
Workers have a range of tasks to complete and use a number of different applications. Laptops for these users, while providing advantages in mobility, are not always equal in terms of performance, experience, or battery life. We found that Intel Core i5 processor-powered HP EliteBooks provided a number of advantages in performance and application responsiveness over an AMD processor-based HP EliteBook, while also delivering longer battery life. When your organization needs notebooks for a broad range of users performing different yet vital tasks, our testing shows that an Intel processor-powered HP EliteBook could offer a better experience than an AMD processor-based HP EliteBook notebook.
Intel Microprocessors- a Top down ApproachEditor IJCATR
IBM is the world's largest manufacturer of computer chips. Although it has been challenged in recent years by
newcomers AMD and Cyrix, Intel still Predominate the market for PC microprocessors. Nearly all PCs are based on Intel's x86
architecture. IBM (International Business Machines)IBM (International Business Machines) is by far the world's largest information
technology company in terms of Gross ($88 billion in 2000) and by most other measures, a position it has held for about the past
50 years. IBM products include hardware and software for a line of business servers, storage products, custom-designed microchips,
and application software. Increasingly, IBM derives revenue from a range of consulting and outsourcing services. In this paper we
will compare different technologies of computer system, its processor and chips
AI for good: Scaling AI in science, healthcare, and more.Intel® Software
How do we scale AI to its full potential to enrich the lives of everyone on earth? Learn about AI hardware and software acceleration and how Intel AI technologies are being used to solve critical problems in high energy physics, cancer research, financial inclusion, and more. Get started on your AI Developer Journey @ software.intel.com/ai
Optimize creative and design workflows and enjoy a better user experience wit...Principled Technologies
In a series of tests, a Dell Precision 5680 handled several heavy workloads better while remaining cooler than a 16‑inch Apple MacBook Pro
Conclusion
Whether you’re editing video, rendering 3D graphics, analyzing data, or collaborating with coworkers on a PowerPoint presentation, the performance of your device can impact how productive you are. When you’re catching up on emails on the couch or trying to put the finishing touches on a video project before boarding a flight, the temperature of the device in your lap can impact your comfort. Comparing a Dell Precision 5680 to a MacBook Pro 16", we found the Precision 5680 offered better performance running several demanding workloads, remained up to 12.8°F cooler under a sustained Cinebench workload, and offered comparable audio quality. Based on our tests, users who value performance and comfort should consider the Dell Precision 5680 workstation.
Similar to Workstations powered by Intel can play a vital role in CPU-intensive AI developer tasks (20)
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ...Principled Technologies
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case-fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10-minute drenching with no ill effects, and were easier to view in direct sunlight than the Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - Infographic - Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs.... - Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration of how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Enable security features with no impact to OLTP performance with Dell PowerEd... - Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
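The report doesn't cover how to turn these features on. As a rough sketch for enabling SME on a Debian-family Linux host (SEV-ES additionally requires hypervisor-side VM configuration), the upstream kernel's documented `mem_encrypt` boot parameter is the switch; verify the exact steps against your distribution's documentation:

```shell
# Check that the CPU advertises Secure Memory Encryption
grep -m1 -o 'sme' /proc/cpuinfo

# Enable SME via a kernel boot parameter, then rebuild the GRUB config
# (mem_encrypt=on is the documented upstream kernel switch; append it
# to GRUB_CMDLINE_LINUX in /etc/default/grub)
sudo update-grub && sudo reboot

# After reboot, confirm activation in the kernel log
sudo dmesg | grep -i 'memory encryption'
```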
Improving energy efficiency in the data center: Endure higher temperatures wi... - Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenarios simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenarios and component failures in the latter two, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... - Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... - Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back... - Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data-intensive applications and workloads while reducing data center power costs.
Improve performance and gain room to grow by easily migrating to a modern Ope... - Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivity - Principled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg... - Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... - Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operations adapter provided useful infrastructure insights.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man... - Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than the comparable Supermicro tools did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ... - Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS - Principled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PB—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower Workstation - Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance Mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455X processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for Solidworks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can do so without sacrificing a quiet workspace.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk - Fwdays
In this talk, we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
Northern Engraving | Nameplate Manufacturing Process - 2024 - Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Essentials of Automations: Exploring Attributes & Automation Parameters - Safe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
The Microsoft 365 Migration Tutorial For Beginner.pptx - operationspcvita
This presentation will help you understand the power of Microsoft 365 and covers every productivity app included in Office 365. It also outlines common Office 365 migration scenarios and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
"Choosing proper type of scaling", Olena Syrota - Fwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
What is an RPA CoE? Session 1 – CoE Vision - DianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... - Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors - DianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
- Creating a compelling user experience for any software, without the limitations of APIs
- Accelerating the app creation process, saving time and effort
- Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Programming Foundation Models with DSPy - Meetup Slides - Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Workstations powered by Intel can play a vital role in CPU-intensive AI developer tasks
1. Workstations powered by Intel can play a vital role in CPU-intensive AI developer tasks

In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process

As the adoption of artificial intelligence (AI) has exploded in recent years, much attention has focused on graphics processing units (GPUs) and cloud-based platforms that support many AI model training and inferencing functions. However, CPU-only workstations can be a cost-effective alternative for many parts of the AI development workflow, and Intel has developed new hardware and software optimized for these kinds of tasks. In this report, we discuss the proof-of-concept testing Principled Technologies (PT) conducted, in which we executed three AI development workflows using only the CPU cores in tower and mobile workstations.

The three workflows:
- Characterizing documents, then adding them to a database and indexing them
- Analyzing a portrait, by asking a local LLM to describe it
- Standardizing images, and creating new images suitable for testing or training

Workstations powered by Intel can play a vital role in CPU-intensive AI developer tasks | May 2024
This project was commissioned by Intel.
A Principled Technologies report: Hands-on testing. Real-world results.
2. Hypothesis: Workstations with Intel processors are a strong option for certain AI tasks

While applications and services that run in the cloud or on expensive purpose-built GPUs may be useful for certain parts of an AI workflow, they can also pose obstacles in the areas of cost, privacy, and security. By shifting some common development or prototyping AI tasks to on-site workstations, organizations can mitigate these concerns.

PT conducted proof-of-concept testing to determine whether workstations with Intel processors provide an appropriate environment for carrying out data-intensive AI tasks using only the CPU cores. We tested both tower and mobile workstations.

We created three representative AI development workflows using different data sources. One workflow involved characterizing documents, adding them to a vector database running on the system, and indexing this content. A second workflow combined the disparate data sources in a multi-modal large language model (LLM) to determine the characteristics of a painting by Leonardo da Vinci. A third workflow standardized images to a common scale and precision, and used an ML k-means approach to find features in the images.
We executed the workflows on two sets of systems, all with Intel processors:
three similarly configured mobile workstations from different vendors and
three similarly configured tower workstations from the same vendors. We
measured the time necessary—and in some cases the system memory
required—to perform tasks that manipulate data. We also experimented
with using the same data at different precision levels to determine how they
affected performance. Our goal was not to compare the performance of
the various workstations, but to demonstrate that these two categories of
devices are well-suited to these tasks, regardless of the vendor you choose.
About the hardware and software
environments we tested
For our proof-of-concept study, we created three custom AI development
workflows and measured the time to complete them. Although all the
systems we tested contained GPUs, we installed and used the CPU-only
versions of the Python libraries and did not install the drivers needed to
access GPU compute functions. This ensured that the workflows did not use
those GPUs and all of the compute power came from the Intel CPU cores.
Test systems
We viewed the tower workstations in our testing as potentially solid choices for data scientists working on AI workflows involving larger ML and AI models and more data. We tested the following tower workstations:
• Dell™ Precision™ 7960 tower workstation with the Intel® Xeon® w7-3455 processor, 128 GB of RAM, and a 1TB PCIe NVMe® solid-state drive (SSD)
• HP Z8 Fury G5 tower workstation with the Intel Xeon w7-3455 processor, 128 GB of RAM, and a 1TB PCIe NVMe SSD
• Lenovo® ThinkStation® P7 tower workstation with the Intel Xeon w9-3495X processor, 128 GB of RAM, and a 1TB PCIe NVMe SSD
About the Intel processors in the tower workstations we tested
All three of the tower workstations we tested feature processors from the Intel® Xeon® W-3400 processor family. Both the Dell Precision 7960 and the HP Z8 Fury G5 feature the 24-core Intel Xeon w7-3455 processor, and the Lenovo ThinkStation P7 features the 56-core Intel® Xeon® w9-3495X processor. According to Intel, platforms featuring these processors deliver “the ultimate workstation solution for professional creators, delivering outstanding performance, security, and reliability along with expanded platform capabilities for VFX, 3D rendering, complex 3D CAD, and AI development & edge deployments.”1
We viewed the mobile workstations as well-suited to development environments where data scientists explore
and attempt to improve smaller AI models. Many data scientists and AI developers train very small data sets
on their local workstations to reduce the cost of exploration. A cost-effective approach to development is to
experiment on local clients first and then scale to multiple servers or the cloud. We tested the following mobile
workstation systems:
• Dell Precision 7780 mobile workstation with the 13th Gen Intel Core™ i7-13850HX processor, 64 GB of RAM, and a 1TB PCIe NVMe SSD
• HP ZBook Fury 16 G10 mobile workstation with the 13th Gen Intel Core i7-13850HX processor, 32 GB of RAM, and a 512GB PCIe NVMe SSD
• Lenovo ThinkPad® P16 G2 mobile workstation with the 13th Gen Intel Core i9-13980HX processor, 64 GB of RAM, and a 1TB PCIe NVMe SSD
About the Intel processors in the mobile workstations we tested
As we noted earlier, the Dell Precision 7780 and the HP ZBook Fury 16 G10 we tested both feature the 13th Gen Intel Core i7-13850HX processor. This processor has 20 cores, 5.30 GHz maximum turbo frequency, and 30 MB Intel Smart Cache. The Lenovo ThinkPad P16 G2 we tested features the 13th Gen Intel Core i9-13980HX processor with 24 cores, 5.60 GHz maximum turbo frequency, and 36 MB Intel Smart Cache.
Workflow environment and analytic tools
We used the Ubuntu 22.04 Linux operating system. For our scripting language, we used Python, which is well
suited for data ingestion, manipulation, and exploration. Higher-level functions that support AI are available as
add-on packages to Python. Many of these are efficient because they use optimized libraries to perform their
calculations on CPUs alone or with the assistance of GPUs. We used the CPU-only versions.
We performed all of the tasks using Python scripts. Another option we could have used is Python in notebooks,
a browser-based visual interface to sequences of Python commands. Notebooks allow a data scientist to perform
tasks repeatedly as well as explore many “what-if” ideas. Users can save notebooks and reuse the code in
them. Visualizing data is an important part of data science. We used Python plotting packages to display results
inside a notebook.
We also used some AI functions from Python that a data scientist might use to find better ways to represent data for AI (e.g., to test the efficiency of an embedding model, or determine which data fields to use to index documents). We used local AI models, though a data scientist might also wish to use server cluster- or cloud-based ones via standard API calls. Again, we used Python to carry out these steps.
Finally, we used the Intel Python distribution and some of the optimized versions of Python packages it provides
for ML and AI. These are part of Intel AI Tools (see box below).
About Intel AI Tools
AI Tools from Intel, formerly known as the Intel® AI Analytics Toolkit, aim to maximize performance at all stages of the AI pipeline, from preprocessing through machine learning, and support efficient model development through interoperability.2 According to Intel, AI Tools “give data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architecture.”
AI Tools allow users to “train on Intel® CPUs and GPUs and integrate fast inference into your AI development workflow with Intel®-optimized, deep learning frameworks for TensorFlow* and PyTorch*, pretrained models, and model optimization tools; achieve drop-in acceleration for data preprocessing and machine learning workflows with compute-intensive Python packages, Modin*, scikit-learn*, and XGBoost; and gain direct access to analytics and AI optimizations from Intel to ensure that your software works together seamlessly.”3
Learn more at https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html.
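As a small illustration of what these optimized packages look like in practice, the scikit-learn-intelex component of AI Tools can reroute standard scikit-learn calls to Intel's oneDAL kernels with a two-line patch. This is a minimal sketch, not our exact test harness; it assumes the package may or may not be installed and falls back to stock scikit-learn when it is absent.

```python
# Minimal sketch: opting in to Intel-optimized scikit-learn.
# The try/except keeps the script runnable on any machine; only the
# patched path uses the oneDAL-backed implementations.
try:
    from sklearnex import patch_sklearn

    patch_sklearn()        # later sklearn imports use optimized kernels
    accelerated = True
except ImportError:
    accelerated = False    # stock scikit-learn behaves identically
```

Because the patch is a drop-in, the rest of a data scientist's scikit-learn code needs no changes.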
Our findings
Workflow 1: Characterizing documents, adding them to a vector database running on the
system, and indexing them
For this workflow, we began with a single clean data source of unstructured text documents. We used a local embedding model to automatically characterize the documents, add them to a local Redis vector database, and index them. This is typically part of the process for setting up an AI-assisted chatbot. Once the corpus of documents is in a vector database, chatbots can search them efficiently. If you do not need the general knowledge embedded in a large language model (LLM) such as GPT, you can retrieve answers from this corpus of data and use the LLM's ability to recognize language to interpret the query and come up with a good answer.
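The retrieval step behind such a chatbot can be sketched with plain cosine similarity over stored embeddings; a vector database such as Redis performs the same ranking with an optimized index. The corpus, document IDs, and two-dimensional embeddings below are toy stand-ins, not our test data.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, top_k=2):
    # corpus: list of (doc_id, embedding) pairs, as a vector DB stores them
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

corpus = [("doc_a", [1.0, 0.0]), ("doc_b", [0.0, 1.0]), ("doc_c", [1.0, 1.0])]
```

Here retrieve([1.0, 0.0], corpus) returns the two documents whose embeddings sit closest to the query; the LLM then reads those documents to compose an answer.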
Our workflow’s scripts analyzed the document and
categorized the various parts of it. It pulled out tables
as well as the summary and the text, turned it into
a format that is easy to search, and put into a Redis
database that has vector database capabilities.
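The split and chunk phases of this pipeline can be sketched in a few lines. This is a simplified stand-in for the splitter we used; the chunk_size and overlap values are illustrative, and the embedding and Redis-upload steps are omitted.

```python
def chunk_text(text, chunk_size=200, overlap=40):
    # Split a document into overlapping character chunks so that a
    # sentence cut at one boundary still appears whole in a neighbor.
    chunks = []
    step = chunk_size - overlap
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Each chunk is then embedded and uploaded; in our runs, the DB upload phase dominated the total time.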
Figure 1 shows the average time the three tower
workstations needed to complete the tasks in
Workflow 1. Table 1 breaks down the time across the
five phases of the workflow. As they show, loading
and database uploading were the most time-intensive
phases of the workflow.
Workflow 1: Average time in seconds to complete task across three tower workstations
Using Intel Python and packages, which use Intel optimized libraries
[Figure 1 chart: total of 49.89 seconds at 32-bit FP precision, broken into Load, Split, Chunk, Load embed model, and DB upload phases]
Figure 1: The average time to execute Workflow 1 across the three tower workstations using Intel AI tools. Note that the Split and Chunk
phases took so little time that they do not appear in the chart. Source: Principled Technologies.
Table 1: The amount of time, in seconds, each phase of Workflow 1 took using Intel Python and libraries (average of three tower
workstations). Asterisk indicates less than 10 milliseconds. Source: Principled Technologies.
Task | Load | Split | Chunk | Load embed model | DB upload | Total
Time to complete (seconds) | 12.99 | * | 0.02 | 1.42 | 35.44 | 49.89
Completing Workflow 1 on the mobile workstations took roughly twice as long as on the tower workstations
(see Figure 2 and Table 2). Again, loading and database uploading were the most time-intensive phases of the
workflow. We consider these times acceptable given the tasks, indicating that these workstations are appropriate
choices for this workflow.
For simplicity, we have truncated the numbers in the tables and charts. Consequently, numbers may not sum to the total. Untruncated
numbers appear in the science behind the report. For Workflow 2, we report the maximum time it took to complete any phase as well
as the maximum time to complete all phases. The times we report come from different images, which vary widely in complexity. As a
result, in two cases, the sum of the maximum times for the tasks exceeded the maximum time to process an image.
Workflow 1: Average time in seconds to complete task across three mobile workstations
Using Intel Python and packages, which use Intel optimized libraries
[Figure 2 chart: total of 100.14 seconds at 32-bit FP precision, broken into Load, Split, Chunk, Load embed model, and DB upload phases]
Figure 2: The average time to execute Workflow 1 across the three mobile workstations using Intel AI tools. Note that the Split and Chunk
phases took so little time that they do not appear in the chart. Source: Principled Technologies.
Table 2: The amount of time, in seconds, each phase of Workflow 1 took using Intel Python and libraries (average of three mobile
workstations). Asterisk indicates less than 10 milliseconds. Source: Principled Technologies.
Task | Load | Split | Chunk | Load embed model | DB upload | Total
Time to complete (seconds) | 11.03 | * | 0.02 | 1.18 | 87.90 | 100.14
Workflow 2: Combining disparate data sources
In this workflow, we used the disparate data sources in a multi-modal large language model (LLM) to determine the characteristics of a painting by Leonardo da Vinci.
We used a multiprocessing model where we sent
images from the main process to the other cores,
which performed the image segmentation and
returned the results to the main process. This allowed
us to keep the cores busy; if an image required more
time because it had complicated features, it tied
up only one core. The other cores processed their
images independently.
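The fan-out pattern described above maps naturally onto Python's multiprocessing.Pool. The segment function here is a trivial stand-in for the real image-segmentation step, and the worker count is an example value; on Windows or macOS, the pool call should sit under an if __name__ == "__main__" guard.

```python
from multiprocessing import Pool

def segment(image):
    # Stand-in for per-image segmentation work (hypothetical workload):
    # a complex image slows down only the one core that holds it.
    return sum(image) / len(image)

def process_images(images, workers=4):
    # The main process distributes images to worker cores and collects
    # results in input order, keeping all cores busy independently.
    with Pool(processes=workers) as pool:
        return pool.map(segment, images)
```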
Table 3 shows the average time the three tower
workstations needed to complete Workflow 2.
Processing made up the bulk of the time, with very
brief pre- and post-processing phases.
Table 4 shows the average time the three mobile
workstations needed to complete Workflow 2. As
we saw with Workflow 1, the mobile workstations
as a group took roughly twice as long as the tower
workstations did to execute the tasks. Again, we
believe these times are acceptable for these tasks, and
these workstations are a solid option for this workflow.
Table 3: The amount of time, in seconds, each phase of Workflow 2 took using Intel Python and libraries (average of three tower
workstations). Source: Principled Technologies.
Task | Pre-processing | Processing | Post-processing | Total
Time to complete (seconds) | 0.62 | 136.55 | 0.30 | 137.48
Table 4: The amount of time, in seconds, each phase of Workflow 2 took using Intel Python and libraries (average of three mobile
workstations). Source: Principled Technologies.
Task | Pre-processing | Processing | Post-processing | Total
Time to complete (seconds) | 1.30 | 265.71 | 0.51 | 267.53
Workflow 3: Standardizing images to a common scale and precision
This workflow involved taking image data and processing it into a form suitable for either training a neural net to find features or running it against a trained neural net to categorize its content. We used an open-source medical image set consisting of images from lung CT scans. Our workflow involved the tasks prior to the compute-intensive AI analysis, which would be more appropriate to execute using clusters of systems with GPUs. Our workflow prepared the data for training, inference, or segmentation. The systems identified images that were of low quality, e.g., had missing data. We standardized the images to a common scale and precision (dynamic range). Next, we created new images from the existing ones that would be useful for testing or training on image data, e.g., by rotating, cropping, flipping, rescaling, and changing the color of parts. Finally, we compressed and recoded the images in preparation for further AI analysis.
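The standardization and augmentation steps can be sketched with NumPy alone. Nearest-neighbour resampling and simple flips and a rotation stand in for the richer transforms we used; the 64x64 target size and [0, 1] dynamic range are illustrative choices, not our exact parameters.

```python
import numpy as np

def standardize(img, out_shape=(64, 64), dtype=np.float32):
    # Rescale to a common size (nearest neighbour) and normalize the
    # dynamic range to [0, 1] at the requested precision.
    img = np.asarray(img, dtype=np.float64)
    rows = np.arange(out_shape[0]) * img.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * img.shape[1] // out_shape[1]
    resized = img[np.ix_(rows, cols)]
    lo, hi = resized.min(), resized.max()
    scaled = (resized - lo) / (hi - lo) if hi > lo else np.zeros_like(resized)
    return scaled.astype(dtype)

def augment(img):
    # Create new images useful for testing or training: two flips
    # and a 90-degree rotation of the standardized image.
    return [np.fliplr(img), np.flipud(img), np.rot90(img)]
```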
Tower workstations
The processors in the tower workstations support both 16-bit floating point (FP) precision and 32-bit FP
precision. As Figure 3 shows, using 16-bit precision, the workstations completed the workflow in markedly less
time, a savings of 26.7 percent.
Workflow 3: Average time in seconds to complete task across three tower workstations
[Figure 3 chart: 52.11 seconds at 32-bit FP precision vs. 38.18 seconds at 16-bit FP precision, broken into ModelCheck, ModelOpt, Inputs, and Outputs phases]
Figure 3: A comparison of the average time to execute Workflow 3 across the three tower workstations, using Intel AI tools, at 16-bit and
32-bit FP precision. Source: Principled Technologies.
Table 5 breaks down the time across the four phases of the workflow. As it shows, Outputs was the most time-
intensive phase of the workflow.
Table 5: The amount of time, in seconds, each phase of Workflow 3 took using Intel Python and libraries (average of three tower
workstations). Source: Principled Technologies.
Task | ModelCheck | ModelOpt | Inputs | Outputs | Total
Time to complete using 32-bit FP precision (seconds) | 0.27 | 4.33 | 0.35 | 47.15 | 52.11
Time to complete using 16-bit FP precision (seconds) | 0.30 | 4.25 | 0.35 | 33.27 | 38.18
In addition to measuring the time to complete this workflow, we also monitored memory usage during the tasks.
As Figure 4 shows, using 16-bit precision not only allowed the workstations to execute the workflow in less time,
but also reduced memory usage dramatically.
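The memory effect is easy to reason about from first principles: a 16-bit float occupies two bytes instead of four, so the same batch of images needs half the space before any workflow-specific savings. A quick NumPy sketch, with an arbitrary example batch shape:

```python
import numpy as np

# A batch of 64 single-channel 512x512 images at each precision.
imgs32 = np.zeros((64, 512, 512), dtype=np.float32)
imgs16 = imgs32.astype(np.float16)

mb32 = imgs32.nbytes / 2**20   # 64.0 MiB
mb16 = imgs16.nbytes / 2**20   # 32.0 MiB
```

Halving per-element storage also reduces memory bandwidth pressure, which is consistent with the shorter runtimes we measured at 16-bit precision.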
Workflow 3: Average memory usage in GB across three tower workstations
[Figure 4 chart: 95.26 GB at 32-bit FP precision vs. 53.02 GB at 16-bit FP precision, broken into ModelCheck, ModelOpt, Inputs, and Outputs phases]
Figure 4: A comparison of the average memory usage in GB during Workflow 3 across the three tower workstations, using Intel AI tools, at
32-bit and 16-bit FP precision. Source: Principled Technologies.
Table 6 breaks down the memory usage across the four phases of the workflow. As it shows, ModelOpt, Inputs,
and Outputs were the most memory-intensive phases of the workflow.
Table 6: The amount of memory each phase of Workflow 3 consumed using Intel Python and libraries (average of three tower workstations).
Source: Principled Technologies.
Task | ModelCheck | ModelOpt | Inputs | Outputs | Total
Average memory usage when using 32-bit FP precision (GB) | 1.38 | 30.74 | 30.85 | 32.28 | 95.26
Average memory usage when using 16-bit FP precision (GB) | 1.38 | 16.29 | 16.42 | 18.92 | 53.02
Mobile workstations
The mobile workstations we tested support only 32-bit FP precision. Note: We were unable to complete this
workflow on one of the three mobile workstations we tested because the workflow required just over 32 GB
of RAM to run efficiently and this workstation had only 32 GB of RAM. Consequently, these results are the
average of the other two workstations. As Figure 5 shows, using 32-bit precision, the two mobile workstations
we tested completed the workflow in an average of 223.80 seconds, more than four times as long as the tower
workstations needed.
Workflow 3: Average time in seconds to complete task across two mobile workstations
[Figure 5 chart: total of 223.80 seconds at 32-bit FP precision, broken into ModelCheck, ModelOpt, Inputs, and Outputs phases]
Figure 5: The average time to execute Workflow 3 across the two mobile workstations, using Intel AI tools, at 32-bit FP precision.
Source: Principled Technologies.
Table 7 breaks down the time across the four phases of the workflow. As it shows, Outputs was the most time-
intensive phase of the workflow. As was the case with Workflows 1 and 2, we judge these times to be acceptable
for completing these tasks, and believe these workstations are good choices for this workflow.
Table 7: The amount of time, in seconds, each phase of Workflow 3 took using Intel Python and libraries (average of two mobile
workstations). Source: Principled Technologies.
Task | ModelCheck | ModelOpt | Inputs | Outputs | Total
Time to complete (seconds) | 0.35 | 5.70 | 0.37 | 217.36 | 223.80
Conclusion
We executed three AI development workflows on tower workstations
and mobile workstations from three vendors, with each workflow utilizing
only the Intel CPU cores, and found that these platforms were suitable
for carrying out various AI tasks. For two of the workflows, we learned
that completing the tasks on the tower workstations took roughly half
as much time as on the mobile workstations. This supports the idea
that the tower workstations would be appropriate for a development
environment for more complex models with a greater volume of data
and that the mobile workstations would be well-suited for data scientists
fine-tuning simpler models. In the third workflow, we explored tower
workstation performance with different precision levels and learned
that using 16-bit floating point precision allowed the workstations to
execute the workflow in less time and also reduced memory usage
dramatically. For all three AI workflows we executed, we consider the
time the workstations needed to complete the tasks to be acceptable,
and believe that these workstations can be appropriate, cost-effective
choices for these kinds of activities.
1. Intel, “Intel® Xeon® W Processors,” accessed April 10, 2024, https://www.intel.com/content/www/us/en/products/details/processors/xeon/w.html.
2. Intel, “AI Tools,” accessed April 8, 2024, https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html.
3. Intel, “AI Tools.”
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
For additional information, review the science behind this report.
Principled Technologies®
Facts matter.®
This project was commissioned by Intel.
Read the science behind this report at https://facts.pt/W6LrlMS