Replacing an aging desktop fleet is an important investment in your business. Faster and more reliable systems that offer the latest technology provide your employees an opportunity to be more productive with less waiting while they work. New desktops also benefit your IT staff, with significantly lower power costs and support for out-of-band management to reduce costly deskside visits.
Compared to aging desktops, new Dell OptiPlex desktops can allow employees to be more productive with faster and more reliable hardware while providing significantly lower power costs. Improved management technology with the new desktops can support the efforts of your IT staff and can reduce costly deskside visits. Upgrading your aging desktops with the Dell OptiPlex 9030 All-in-One or the Dell OptiPlex 9020 Micro desktops brings important improvements to your business through both hardware and software.
User data and desktop settings can be shared between Windows XP, Windows Vista, and Windows 7, so no migration is needed when users start working on a new PC running any of these versions of Windows.
StruxureWare is Schneider Electric's DCIM software suite that integrates various data center management applications. It provides visibility and control of infrastructure assets from the building level down to the server. The software suite monitors and manages key metrics like power, cooling capacity, and IT asset usage. It helps optimize data center performance and efficiency through features like real-time monitoring, capacity planning, and energy analytics. Schneider Electric is a leading DCIM provider due to its comprehensive product portfolio, expertise, and ability to deliver an end-to-end solution for data center management.
DCIM tools help data center managers improve planning, lower costs, and speed up information delivery by providing visibility into data center infrastructure and automation of manual processes. They solve problems around maintaining availability and responding quickly to business needs while lowering costs during consolidation and cloud computing initiatives. DCIM tools provide concise summaries of capacity, assets, and performance to help answer questions around planning, operations, and analytics. Their use is important for reducing operational expenses by 10-30% annually while improving productivity.
A data center infrastructure management (DCIM) system collects and manages information about a data center's assets, resource use, and operational status. This information is analyzed and distributed to help optimize the data center's performance and meet business goals. Implementing DCIM solutions such as instrumentation, monitoring, and analytics can improve efficiency, reduce costs, and enable proactive management of the physical infrastructure and IT systems. Emerson Network Power provides a comprehensive portfolio of DCIM hardware and software products to help organizations gain visibility and control over their data center resources.
In 2014, more than 75 vendors self-identified as DCIM providers, including many with individual point solutions or proprietary monitoring hardware. When Gartner began defining DCIM as an integrated solutions suite, the list quickly shrank to fewer than two dozen. So what are the “must-haves” necessary to be considered a serious player in the DCIM arena?
View this presentation to find out.
DCIM: An Integral Part of the Software Defined Data Centre (Concurrent Thinking)
Do you know what DCIM is? Discover with Concurrent Thinking how to improve your data centre efficiency, how to overcome challenges in data centres and what the future of DCIM is.
Find out more here: http://www.concurrent-thinking.com/
This document summarizes the key steps in calculating power and cost savings from virtualizing servers. It discusses identifying needs and developing a baseline by measuring current hardware usage. It then covers calculating power savings based on virtual infrastructure configurations and average wattages. Finally, it outlines how to determine total cost of ownership savings from reductions in hardware, facilities, support and other expenses. The overall goal is to demonstrate savings across multiple stakeholder groups from customers to C-level executives to operations managers to end users.
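The savings calculation described above can be sketched in a few lines. This is a simplified illustration, not the document's actual model; the server counts, average wattages, and electricity rate below are hypothetical assumptions.

```python
# Sketch of a virtualization power-savings calculation:
# baseline physical-server cost vs. consolidated virtual-host cost.
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(server_count: int, avg_watts: float, cost_per_kwh: float) -> float:
    """Annual electricity cost for a group of servers running 24x7."""
    kwh = server_count * avg_watts * HOURS_PER_YEAR / 1000
    return kwh * cost_per_kwh

# Hypothetical baseline: 100 physical servers averaging 400 W each.
before = annual_power_cost(100, 400, cost_per_kwh=0.10)
# Hypothetical after state: 10 virtualization hosts at 600 W carry the same workloads.
after = annual_power_cost(10, 600, cost_per_kwh=0.10)
print(f"Annual power savings: ${before - after:,.0f}")
```

The same baseline/after structure extends naturally to the other TCO line items the document mentions (hardware, facilities, support), each compared before and after consolidation.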
A brief presentation that explains the origin of Data Center Infrastructure Management (DCIM) software and its role in data center operations. Using examples of a DCIM software suite's functional capabilities, the presentation highlights the business benefits and value it brings to an enterprise. The presentation concludes with a case study of a DCIM software deployment at a leading financial services enterprise in India.
Strategic Uses of Virtual Desktop Technologies in Small Business (Erik Murphy)
The document provides information about virtual desktop infrastructure (VDI) including the what, why, when, how and case studies. It discusses how VDI allows users to access desktops, applications and data from any device by running them virtually from a centralized datacenter. Case studies of the Richards Group and Cheshire Medical Center are presented, outlining how they implemented VDI to improve desktop management and support remote work. Considerations for adopting VDI like licensing costs, performance and compatibility are also covered. The presentation concludes with an overview of Dell's cloud client computing solutions and services for deploying VDI.
ASHRAE Prediction Is Better Than Cure – CFD Simulation For Data Center Operation (Robert Schmidt)
Advancing technology has brought us more powerful computer hardware for our data centers. In addition, it brings the opportunity to instrument and monitor the Datacom halls to better understand where the IT load is and the resulting environment, at least in terms of air temperature at an array of selected locations. Instrumentation and monitoring is, without doubt, a step forward that helps the operator in their quest to understand and control energy use of their undoubtedly complicated asset. But is measurement enough?
This document discusses the utility and limitations of PUE (Power Usage Effectiveness) as a data center efficiency metric. While PUE is a useful high-level metric, it does not provide enough detail to optimize efficiency. PUE only measures the ratio of total facility power to IT equipment power, but does not account for factors like server utilization, resilience, or diversity of the IT load. The document argues that more detailed energy monitoring data is needed at the server, rack, and application level over time to properly evaluate efficiency and enable tangible efficiency actions.
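The PUE ratio the document refers to is simple to compute. The sketch below uses hypothetical power readings purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power.
    A value of 1.0 would mean all facility power reaches the IT equipment."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 1,800 kW total facility draw, 1,000 kW IT load.
print(round(pue(1800.0, 1000.0), 2))  # → 1.8
```

As the document argues, this single ratio hides server utilization and load diversity: a facility could improve its PUE while running mostly idle servers, which is why finer-grained monitoring at the server, rack, and application level is needed.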
Microsoft System Center virtual environment comparison: Dell PowerEdge server... (Principled Technologies)
When repetitive and admin-intensive management tasks are quicker and easier to complete, that’s a win for your datacenter and IT administrators. We found that using Dell Lifecycle Controller Integration for SCVMM took 74.6 percent less time to complete four key server management use cases on a single server, compared to performing the same use cases with HP OneView for System Center. When managing many servers, data extrapolated to 100 servers from testing on a second server shows the Dell solution would take 95.8 percent fewer steps and 96.5 percent less time than the HP solution. Dell Lifecycle Controller Integration for Microsoft System Center Virtual Machine Manager also enabled IT admins to perform all of the server management operations through a single console—and staged firmware updates without necessitating a server power-down. Our test results showed that DLCI for SCVMM with Microsoft System Center Virtual Machine Manager is easier to use and less time-intensive, making it a more efficient and effective combination for your virtualized datacenter.
The document provides guidance on developing an effective disaster recovery (DR) plan. It recommends designating a DR Coordinator and team to lead recovery efforts. It also stresses analyzing current DR technologies, understanding recovery objectives, ensuring the ability to meet objectives, training employees on response, and regularly testing the plan. The overall message is that a thorough DR plan with executive support, clear roles and recovery goals is key to resuming operations after a disaster.
ThyssenKrupp Presta AG manufactures automotive components for many global car makers. To boost productivity and IT security, it deployed Intel Core i5 and i7 vPro processors across desktops and notebooks. This allows the IT team to remotely manage devices and perform tasks like software updates without disrupting work. Using Intel vPro technology improved the company's uptime, reduced costs, and enhanced employee productivity.
This presentation shows that Data Center Infrastructure Management (DCIM) software is to a data center manager what ERP software is to a VP of Manufacturing. This is the second presentation in a three-part series from GreenField Software on the subject of DCIM for High Availability.
DCIM software charts out the relationship maps for assets by identifying the various dependencies among them. Threshold-based alerts on critical parameters, combined with impact analysis of move-add-change operations, mitigate the risks of data center failures.
GreenField Software’s mission is to help data centers control capital expenditures, reduce operating expenses, and mitigate the risks of data center failures. Besides DCIM software, GFS offers Data Center Advisory Services in the areas of best practices, capacity planning, energy efficiency, and business continuity of data centers.
Data Center Infrastructure Management Demystified (Sunbird DCIM)
DCIM software is becoming core to datacenter operations by providing centralized monitoring and management of infrastructure assets. Where manual processes were previously used, DCIM solutions integrate data on inventory, capacity, power, cooling and workflows. This allows issues to be proactively addressed, utilization to be optimized, and productivity to increase. The document outlines typical DCIM components, problems it addresses, benefits around time savings and competitive advantage, and steps to get started with DCIM.
The document discusses creating a disaster recovery (DR) plan with 10 steps: 1) Build a team, 2) Analyze existing DR technology, 3) Do a business impact analysis, 4) Prioritize operations, 5) Set recovery goals. It then details 6) Identifying technology gaps, 7) Designing a recovery environment, 8) Creating recovery manuals and protocols, 9) Documenting important information, and 10) Implementing, testing and revising the plan. The key aspects of a DR plan include conducting a business impact analysis to understand downtime costs, prioritizing critical systems, and setting recovery time and data loss objectives that available DR technologies can meet.
The document discusses optimizing facility efficiency in federal mission-critical environments. It recommends taking a long-term approach to planning by understanding organizational goals and bridging IT and facilities. Key steps include assessing existing facilities, selecting efficient equipment, right-sizing capacity, and establishing monitoring, maintenance, and benchmarking programs to ensure optimization over time. Regular maintenance is emphasized as critical for sustained efficiency gains and reliability.
The document discusses strategies for making data centers more energy efficient without compromising performance or reliability. It outlines 10 best practices including using more efficient processors and power supplies, server virtualization, improved cooling practices, and monitoring systems. Implementing these holistic strategies from the IT equipment level upwards can significantly reduce energy usage through cascading effects while freeing up capacity.
Key Note Session, IDUG DB2 Seminar, 16th April, London - Julian Stuhler, Trito... (Surekha Parekh)
This document discusses technology themes for DB2 in 2014 and beyond, including cost reduction, high availability, in-memory computing, skills availability, database commoditization, and big data. It outlines current capabilities and future directions for DB2 on both z/OS and LUW platforms, emphasizing ongoing focus on reducing costs while improving availability, performance and analytics capabilities through techniques like in-memory computing and integration with big data technologies. The future of DB2 skills and the changing IT landscape are also addressed.
Implementing the Poughkeepsie Green Data Center (Elisabeth Stahl)
The document summarizes IBM's implementation of an energy efficient data center transformation project at their Poughkeepsie facility. Key steps included conducting thermal analysis to identify issues, reorganizing the data center into hot and cold aisles, adding rear door heat exchangers and water cooling, implementing energy management techniques like virtualization, and instrumenting the data center to monitor energy usage and results. The transformation improved power usage effectiveness while increasing computing capacity in the same footprint and providing flexibility for future growth.
The document discusses information technology (IT) and its role in the smart grid. It describes how IT systems process large amounts of data and enable real-time communication and sharing of information. The development of the internet and ubiquitous connectivity has led to new conveniences but also creates challenges around storing vast amounts of "Big Data." The document outlines strategies that data centers and IT professionals can use to improve energy efficiency, such as storage consolidation, virtualization, optimized hardware, and energy-efficient software development practices. Implementing these strategies can significantly reduce data center power consumption and energy costs.
This document analyzes the business value of deploying VMware View for centralized virtual desktops (CVD). It finds that organizations using VMware View realized significant cost savings and return on investment compared to traditional desktop management. Organizations saved over $610 per user annually on lower device/IT costs and improved productivity. Those leveraging additional VMware View features saved an additional $122 per user. The document also discusses VMware View editions, features, use cases, benefits, challenges, and methodology for quantifying savings.
The document discusses how data center infrastructure management (DCIM) software can help with operations, planning, and analytics for data centers. It provides examples of common issues that can occur without DCIM tools, such as accidentally overloading circuits or racks. The cheat sheet also lists questions that DCIM tools can answer, such as identifying hot spots or excess capacity. DCIM software allows monitoring of equipment power usage, generating audit trails, and calculating power usage effectiveness. It enables more efficient provisioning, load balancing, and capacity planning to optimize data center resources.
This document provides an overview of cloud computing and datacenter software infrastructure. It discusses how cloud computing works using the Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) models. It also describes the basic elements of a datacenter including computing architecture, energy usage, and dealing with failures. Key components of datacenter software infrastructure are explained, including the MapReduce framework for distributed computing across large clusters.
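The MapReduce model mentioned above can be illustrated with the classic word-count example. This is a deliberately simplified, single-process sketch; real frameworks distribute the map, shuffle, and reduce phases across large clusters:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum the counts per word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["data center data", "center power"]
print(reduce_phase(map_phase(docs)))  # {'data': 2, 'center': 2, 'power': 1}
```

The key idea is that both phases operate on independent key-value pairs, which is what lets a cluster scheduler parallelize them across many machines and rerun failed tasks, tying back to the document's point about dealing with failures at datacenter scale.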
Databases power your business, so the performance of the hardware behind your databases is crucial. In our datacenter, the Dell EMC VxRail P470F with VMware vSAN handled more orders per minute, delivered higher IOPS, responded more quickly, and utilized resources more efficiently than the HPE Hyper Converged 380 with HPE StoreVirtual VSA. Plus, scaling up and tuning performance for our specific workload was considerably easier with the Dell EMC VxRail solution than the HPE solution. These differences translate to real-world benefits. With the Dell EMC VxRail solution, you can achieve shorter wait times for customers and staff, which could increase sales and innovation at your company; reduce work for your IT staff, so they can spend more time on other critical tasks; and save space, potentially helping you delay capital expenditures. To find out more about the Dell EMC VxRail P470F, visit DellEMC.com/VxRail.
Organizations are looking to cloud computing solutions to improve the flexibility of their IT. But in today’s marketplace you still need to control operating expense, improve performance and maintain reliability. You need a cloud infrastructure solution that can help you manage costs and risk.
As information technology evolves, your data center faces a range of pressures from growing complexity. Energy usage and costs are rising; keeping your systems reliable is becoming more difficult; managing the whole system is time-consuming and expensive. You need your IT environment to be simple to manage, responsive to changing needs and cost effective.
This document discusses green computing or green IT, which aims to maximize energy efficiency during a product's lifetime through approaches like virtualization, power management, and proper recycling. It describes how virtualization allows combining multiple physical systems into virtual machines on a single powerful system to reduce power usage. Using LCD/LED displays and terminal servers connected to thin clients also decreases energy costs. Implementing power management at the administrative level allows automatically turning off hardware like monitors during inactivity to save energy. Data centers and their high energy usage are mentioned as areas green IT approaches could be applied through virtualization and power usage effectiveness methods.
A brief presentation that explains the origin of Data Center Infrastructure Management (DCIM) Software and its role in Data Center operations. With example of a DCIM Software's functional capabilities, the presentation highlights the business benefits and value it brings to an enterprise. The presentation concludes with Case Study of a DCIM Software deployment in a leading financial services enterprise in India.
Strategic Uses of Virtual Desktop Technologies in Small BusinessErik Murphy
The document provides information about virtual desktop infrastructure (VDI) including the what, why, when, how and case studies. It discusses how VDI allows users to access desktops, applications and data from any device by running them virtually from a centralized datacenter. Case studies of the Richards Group and Cheshire Medical Center are presented, outlining how they implemented VDI to improve desktop management and support remote work. Considerations for adopting VDI like licensing costs, performance and compatibility are also covered. The presentation concludes with an overview of Dell's cloud client computing solutions and services for deploying VDI.
ASHRAE Prediction Is Better Than Cure – CFD Simulation For Data Center OperationRobert Schmidt
Advancingtechnology has brought us more powerful computer hardware for our data centers.In addition,it bringsthe opportunity to instrument and monitorthe Datacom hallsto better understand where the ITload is and the resulting environment,at least in terms of air temperature at an array of selected locations. Instrumentation and monitoringis,without doubt,a step forward that helps the operator in their quest to understand and control energy use of their undoubtedly complicated asset. Butis measurement enough?
This document discusses the utility and limitations of PUE (Power Usage Effectiveness) as a data center efficiency metric. While PUE is a useful high-level metric, it does not provide enough detail to optimize efficiency. PUE only measures the ratio of total facility power to IT equipment power, but does not account for factors like server utilization, resilience, or diversity of the IT load. The document argues that more detailed energy monitoring data is needed at the server, rack, and application level over time to properly evaluate efficiency and enable tangible efficiency actions.
Microsoft System Center virtual environment comparison: Dell PowerEdge server...Principled Technologies
When repetitive and admin-intensive management tasks are quicker and easier to complete, that’s a win for your datacenter and IT administrators. We found that using Dell Lifecycle Controller Integration for SCVMM took 74.6 percent less time to complete four key server management use cases on a single server, compared to performing the same use cases with HP OneView for System Center. When managing many servers, data extrapolated to 100 servers from testing on a second server shows the Dell solution would take 95.8 percent fewer steps and 96.5 percent less time than the HP solution. Dell Lifecycle Controller Integration for Microsoft System Center Virtual Machine Manager also enabled IT admins to perform all of the server management operations through a single console—and staged firmware updates without necessitating a server power-down. Easier to use and less time-intensive, the results of our testing showed that the DLCI for SCVMM and Microsoft System Center Virtual Machine Manager can be a more efficient and effective combination for your virtualized datacenter.
The document provides guidance on developing an effective disaster recovery (DR) plan. It recommends designating a DR Coordinator and team to lead recovery efforts. It also stresses analyzing current DR technologies, understanding recovery objectives, ensuring the ability to meet objectives, training employees on response, and regularly testing the plan. The overall message is that a thorough DR plan with executive support, clear roles and recovery goals is key to resuming operations after a disaster.
ThyssenKrupp Presta AG manufactures automotive components for many global car makers. To boost productivity and IT security, it deployed Intel Core i5 and i7 vPro processors across desktops and notebooks. This allows the IT team to remotely manage devices and perform tasks like software updates without disrupting work. Using Intel vPro technology improved the company's uptime, reduced costs, and enhanced employee productivity.
This presentation shows that Data Center Infrastructure Management (DCIM) Software to a Data Center Manager is what ERP software is to a VP - Manufacturing. This is the 2nd presentation from a series of 3-part series from GreenField Software on the subject: DCIM for High Availability.
DCIM Software charts out the relationship maps for assets by identifying various dependencies among them. Threshold-based alerts on critical parameters, combined with impact analysis of Move-Add-Change, mitigates risks of DC failures.
GreenField Software’s Mission is to help Data Centers control capital expenditures reduce operating expenses and mitigate the risks of Data Center failures. Besides DCIM Software, GFS offers Data Center Advisory Services in the areas of best practices, capacity planning, energy efficiency and business continuity of data centers.
Data Center Infrastructure Management Demystified Sunbird DCIM
DCIM software is becoming core to datacenter operations by providing centralized monitoring and management of infrastructure assets. Where manual processes were previously used, DCIM solutions integrate data on inventory, capacity, power, cooling and workflows. This allows issues to be proactively addressed, utilization to be optimized, and productivity to increase. The document outlines typical DCIM components, problems it addresses, benefits around time savings and competitive advantage, and steps to get started with DCIM.
The document discusses creating a disaster recovery (DR) plan with 10 steps: 1) Build a team, 2) Analyze existing DR technology, 3) Do a business impact analysis, 4) Prioritize operations, 5) Set recovery goals. It then details 6) Identifying technology gaps, 7) Designing a recovery environment, 8) Creating recovery manuals and protocols, 9) Documenting important information, and 10) Implementing, testing and revising the plan. The key aspects of a DR plan include conducting a business impact analysis to understand downtime costs, prioritizing critical systems, and setting recovery time and data loss objectives that available DR technologies can meet.
The document discusses optimizing facility efficiency in federal mission-critical environments. It recommends taking a long-term approach to planning by understanding organizational goals and bridging IT and facilities. Key steps include assessing existing facilities, selecting efficient equipment, right-sizing capacity, and establishing monitoring, maintenance, and benchmarking programs to ensure optimization over time. Regular maintenance is emphasized as critical for sustained efficiency gains and reliability.
The document discusses strategies for making data centers more energy efficient without compromising performance or reliability. It outlines 10 best practices including using more efficient processors and power supplies, server virtualization, improved cooling practices, and monitoring systems. Implementing these holistic strategies from the IT equipment level upwards can significantly reduce energy usage through cascading effects while freeing up capacity.
Key Note Session IDUG DB2 Seminar, 16th April London - Julian Stuhler .Trito...Surekha Parekh
This document discusses technology themes for DB2 in 2014 and beyond, including cost reduction, high availability, in-memory computing, skills availability, database commoditization, and big data. It outlines current capabilities and future directions for DB2 on both z/OS and LUW platforms, emphasizing ongoing focus on reducing costs while improving availability, performance and analytics capabilities through techniques like in-memory computing and integration with big data technologies. The future of DB2 skills and the changing IT landscape are also addressed.
Implementing the Poughkeepsie Green Data CenterElisabeth Stahl
The document summarizes IBM's implementation of an energy efficient data center transformation project at their Poughkeepsie facility. Key steps included conducting thermal analysis to identify issues, reorganizing the data center into hot and cold aisles, adding rear door heat exchangers and water cooling, implementing energy management techniques like virtualization, and instrumenting the data center to monitor energy usage and results. The transformation improved power usage effectiveness while increasing computing capacity in the same footprint and providing flexibility for future growth.
The document discusses information technology (IT) and its role in the smart grid. It describes how IT systems process large amounts of data and enable real-time communication and sharing of information. The development of the internet and ubiquitous connectivity has led to new conveniences but also creates challenges around storing vast amounts of "Big Data." The document outlines strategies that data centers and IT professionals can use to improve energy efficiency, such as storage consolidation, virtualization, optimized hardware, and energy-efficient software development practices. Implementing these strategies can significantly reduce data center power consumption and energy costs.
This document analyzes the business value of deploying VMware View for centralized virtual desktops (CVD). It finds that organizations using VMware View realized significant cost savings and return on investment compared to traditional desktop management. Organizations saved over $610 per user annually on lower device/IT costs and improved productivity. Those leveraging additional VMware View features saved an additional $122 per user. The document also discusses VMware View editions, features, use cases, benefits, challenges, and methodology for quantifying savings.
The document discusses how data center infrastructure management (DCIM) software can help with operations, planning, and analytics for data centers. It provides examples of common issues that can occur without DCIM tools, such as accidentally overloading circuits or racks. The cheat sheet also lists questions that DCIM tools can answer, such as identifying hot spots or excess capacity. DCIM software allows monitoring of equipment power usage, generating audit trails, and calculating power usage effectiveness. It enables more efficient provisioning, load balancing, and capacity planning to optimize data center resources.
This document provides an overview of cloud computing and datacenter software infrastructure. It discusses how cloud computing works using the Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) models. It also describes the basic elements of a datacenter including computing architecture, energy usage, and dealing with failures. Key components of datacenter software infrastructure are explained, including the MapReduce framework for distributed computing across large clusters.
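The MapReduce idea mentioned above (map a function over input shards, then reduce grouped intermediate pairs) can be sketched as a single-process word count; real frameworks distribute both phases across a large cluster:

```python
from collections import defaultdict
from typing import Iterable, Iterator, Tuple

def map_phase(documents: Iterable[str]) -> Iterator[Tuple[str, int]]:
    # Map: emit a (word, 1) pair for every word in every document shard.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs: Iterable[Tuple[str, int]]) -> dict:
    # Shuffle/reduce: group pairs by key, then sum each group's values.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cloud", "the data center", "the cloud scales"]
counts = reduce_phase(map_phase(docs))
print(counts["the"])    # 3
print(counts["cloud"])  # 2
```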
Databases power your business, so the performance of the hardware behind your databases is crucial. In our datacenter, the Dell EMC VxRail P470F with VMware vSAN handled more orders per minute, delivered higher IOPS, responded more quickly, and utilized resources more efficiently than the HPE Hyper Converged 380 with HPE StoreVirtual VSA. Plus, scaling up and tuning performance for our specific workload was considerably easier with the Dell EMC VxRail solution than the HPE solution. These differences translate to real-world benefits. With the Dell EMC VxRail solution, you can achieve shorter wait times for customers and staff, which could increase sales and innovation at your company; reduce work for your IT staff, so they can spend more time on other critical tasks; and save space, potentially helping you delay capital expenditures. To find out more about the Dell EMC VxRail P470F, visit DellEMC.com/VxRail.
Organizations are looking to cloud computing solutions to improve the flexibility of their IT. But in today’s marketplace you still need to control operating expense, improve performance and maintain reliability. You need a cloud infrastructure solution that can help you manage costs and risk.
As information technology evolves, your data center faces a range of pressures from growing complexity. Energy usage and costs are rising; keeping your systems reliable is becoming more difficult; managing the whole system is time-consuming and expensive. You need your IT environment to be simple to manage, responsive to changing needs and cost effective.
This document discusses green computing or green IT, which aims to maximize energy efficiency during a product's lifetime through approaches like virtualization, power management, and proper recycling. It describes how virtualization allows combining multiple physical systems into virtual machines on a single powerful system to reduce power usage. Using LCD/LED displays and terminal servers connected to thin clients also decreases energy costs. Implementing power management at the administrative level allows automatically turning off hardware like monitors during inactivity to save energy. Data centers and their high energy usage are mentioned as areas green IT approaches could be applied through virtualization and power usage effectiveness methods.
The document discusses green computing, which aims to reduce the environmental impact of computers and data centers. It outlines various approaches like virtualization, power management, recycling, and telecommuting. These can improve energy efficiency and reduce costs. The document also discusses implementing green computing through server consolidation, replacing CRT monitors, and keeping equipment longer to reduce waste. Future trends may include more efficient and recyclable computer components to further minimize environmental impact.
This document analyzes cost savings from using thin clients and CRT monitors over traditional desktops and LCD/TFT monitors for an office IT infrastructure. It estimates that thin clients can reduce IT infrastructure costs by 60% through lower power usage and replacement costs compared to desktop computers. Using CRT monitors rather than LCD/TFT monitors can also reduce costs by 60% while still lasting 5 years if second-hand monitors are used. An embedded computer is recommended for digital signage due to its low cost, maintenance needs, and ability to meet software licensing requirements.
ICT Space, Security and Energy Optimisation - Suste-Tech
The document discusses a new technology from FG Technology that allows personal computers to be remotely operated and placed up to 20 meters away from the desk. This solves issues with PCs taking up desk space, generating heat and noise. It estimates energy savings of up to 2,000 kWh per PC per year by allowing full shutdown of PCs when not in use and removing PC heat from air conditioned office spaces. Case studies show returns on investment of under 12 months for organizations that implement the technology.
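Translating the quoted energy saving into money is a matter of multiplying by an electricity rate; in this sketch the $0.12/kWh rate and the fleet size are assumptions for illustration, not figures from the document:

```python
kwh_saved_per_pc = 2000   # annual saving per PC, from the summary
rate_per_kwh = 0.12       # assumed electricity rate in USD (varies by region)
fleet_size = 500          # hypothetical number of PCs

annual_savings = kwh_saved_per_pc * rate_per_kwh * fleet_size
print(round(annual_savings))  # 120000
```

With per-PC savings of roughly $240 per year at this assumed rate, a sub-12-month payback implies deployment costs below that figure per seat.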
The document discusses how Intel vPro technology can provide benefits to managed service providers (MSPs) like reduced hardware and software repair times, fewer deskside visits, and power savings for customers. It provides real-world examples from MSPs of reductions in repair times of 33-92% for hardware and 50-83% for software using vPro. MSPs can see margins around 5% higher on average for clients using vPro systems. The document encourages MSPs to start activating and utilizing vPro capabilities on Intel-based PCs and notebooks through their Kaseya management console.
Green computers - Niraj Kumar (Bihar)
Over the last few years, interest in “green computing” has motivated research into energy-saving techniques for enterprise systems, from network proxies and virtual machine migration to the return of thin clients.
When selecting computers, there are many considerations beyond just energy, such as computational resources and price.
Choosing the right tool for the job - Ten reasons why workstations trump your PC - Servium
The document discusses 10 reasons why workstations are better than PCs for many users. Workstations are designed for high performance and heavy workloads, with customizable components. They offer comparable costs to PCs but provide significantly better performance, multi-tasking ability, reliability, storage capacity, memory capacity, cooling, and return on investment. Workstations are better suited for tasks like complex data analysis, modeling, and collaboration. They can help improve productivity and save time and money compared to underpowered PCs struggling with demanding workloads.
Upgrading laptops and desktops to the new HP EliteBook Folio 9470m and HP Eli... - Principled Technologies
Upgrading from older laptops and desktops to newer ones can save employees time on a number of tasks, which can really add up over the course of the workday, the week, and even the year.
We found that the HP EliteBook Folio 9470m laptop delivered 30.3 percent longer battery life than a legacy system and also performed most common tasks more quickly, which would save workers time. Similarly, the HP EliteDesk 800 G1 Desktop Mini improved upon an older system in most common worker tasks, including converting a file to an MPEG in less than one-third the time. This means that for workers who do similar tasks, upgrading to an HP EliteBook Folio 9470m laptop or HP EliteDesk 800 G1 Desktop Mini could reduce wait time and increase productivity.
Energy Efficient Power Management in Virtualized Data Center - IRJET Journal
This document discusses energy efficient power management techniques in virtualized data centers. It begins with an introduction to cloud computing and virtualization. Virtualization allows multiple virtual machines to run on a single physical server, improving efficiency. The main goal is to minimize power consumption in data centers through techniques like VM migration and resource scheduling. This can improve performance and build efficient data centers that use less power. The document reviews different algorithms for resource selection and VM migration, and calculates total power consumption. It discusses open issues around reducing memory power usage. Implementation details cover using CloudSim, a simulation toolkit, to test energy efficient algorithms like VM allocation and migration approaches.
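Work of this kind often estimates a host's power draw with a linear utilization-based model; a sketch under that common assumption (the idle and peak wattages here are hypothetical):

```python
def host_power_watts(utilization: float,
                     idle_w: float = 100.0,
                     peak_w: float = 250.0) -> float:
    """Linear power model: idle draw plus utilization's share of the dynamic range."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return idle_w + (peak_w - idle_w) * utilization

# Consolidating two 25%-utilized hosts onto one 50%-utilized host
# saves one machine's idle power under this model.
separate = 2 * host_power_watts(0.25)   # 275.0 W
consolidated = host_power_watts(0.50)   # 175.0 W
print(separate - consolidated)          # 100.0
```

This is the intuition behind VM-migration-based consolidation: idle power dominates, so packing work onto fewer hosts and powering the rest down saves energy.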
Learn about IBM PureFlex System: The Future of Data Center Management. The ‘Expert Integrated System’ delivers a combination of hardware, software and built-in expertise that makes implementing and applying the power of computing simpler, easier, faster and more effective than ever before. For more information, visit http://ibm.co/J7Zb1v.
Learn about IBM PureFlex System: The Future of Datacenter Management. The system has the unique management capabilities of the IBM Flex System Manager. IBM’s goals in designing and building these systems were to return agility, efficiency, simplicity and control to data center operations. To learn more, visit http://ibm.co/J7Zb1v.
Modular blade server architectures address many challenges facing modern data centers by consolidating computing components into smaller, modular form factors that share resources to lower costs and complexity. Blades can satisfy computing needs for servers, desktops, networking and storage. They provide world-class solutions by delivering high performance, reliability, efficiency and scalability without disruption. Proper planning is required, but blade servers are highly efficient platforms for consolidating distributed servers into a common data center through their small size and ability to maximize resource utilization through virtualization.
Fujitsu-Environmental-Benefits-of-Desktop-Virtual-Computing-VCS - Lee Stewart
Virtual desktop solutions allow organizations to centralize desktop computing resources in data centers, replacing individual desktop computers with thin clients. This shift leads to significant energy savings from reduced power consumption. A typical desktop uses 230 kWh annually costing $58.50 while a thin client uses only 29 kWh costing $7.31. Across 5,000 users, virtual desktops could save over $255,000 and reduce CO2 emissions by 900 tonnes annually. Virtual desktops also enable longer lifecycles, simplified disposal, and support for flexible work environments like activity-based working and telecommuting.
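The per-user figures above can be checked with simple arithmetic, using the annual energy costs quoted in the summary:

```python
desktop_cost = 58.50      # annual energy cost per desktop (from the summary)
thin_client_cost = 7.31   # annual energy cost per thin client (from the summary)
users = 5000

annual_savings = users * (desktop_cost - thin_client_cost)
print(f"${annual_savings:,.2f}")  # $255,950.00, consistent with "over $255,000"
```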
Learn how the cloud enables fast implementation of new workloads and secure applications, and is available with a modular and economically efficient design. For more information, visit http://ibm.co/Lx6hfc.
The document discusses consolidating IBM Lotus Domino workloads from multiple x86 servers onto fewer IBM Power Systems servers. This can provide significant cost savings through reduced hardware, energy, floor space, software licensing and staffing costs. Specifically, consolidating 41 HP ProLiant servers running 164 cores at 17% utilization onto a single Power 750 server utilizing 57% of its cores can save over $995k in total cost of ownership over three years. Upgrading to Lotus Domino 8.5 can also reduce storage requirements by up to 50%. The document recommends a Power 750 Express server as a solution, highlighting its POWER7 processor performance, scalability, virtualization and energy efficiency capabilities.
Device management can be a challenging and time-consuming affair for your administrators to handle. But with the right tools, admins could finish their management tasks faster and dedicate their valuable time to other mission-critical work.
We found that the tools Command Suite offers can speed up and simplify device management compared to management through just Intel AMT on an HP device. The ease of configuring Intel AMT combined with task sequences, out-of-band management capabilities, one-to-many administration, and other features make Command Suite an attractive management solution for fleets equipped with Intel AMT.
This document discusses how to design data centers for maximum energy efficiency. It identifies the main culprits of inefficiency as power equipment, cooling equipment, lighting, oversized equipment, and poor configuration. It recommends 8 characteristics of highly efficient data centers, including using scalable power and cooling, row-based cooling, high-efficiency UPS, high voltage distribution, variable speed drives, capacity management tools, and room layout tools. The document outlines 7 key elements of efficient data center design that incorporate these characteristics and can save up to 40% on energy costs.
Why are you paying for wasted energy in IT?
Energy costs continue to climb, and yet up to a third of the money companies spend on power could be wasted due to inefficient IT infrastructure. Take a serious look at your IT energy use.
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ... - Principled Technologies
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10-minute drenching with no ill effects, and were easier to view in direct sunlight than Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - Infographic - Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode.
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs.... - Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... - Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd... - Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi... - Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... - Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... - Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation (VCF)
For organizations running clusters of moderately configured, older Dell PowerEdge servers with a previous version of VCF, upgrading to better-configured modern servers can provide a significant performance boost and more.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back... - Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data-intensive applications and workloads while reducing data center power costs.
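The power-efficiency comparison described here reduces to a performance-per-watt ratio; a minimal sketch with hypothetical throughput and power numbers (not the study's measurements):

```python
def perf_per_watt(requests_per_sec: float, avg_power_w: float) -> float:
    """Throughput delivered per watt consumed; higher is better."""
    return requests_per_sec / avg_power_w

# Hypothetical clusters: B handles 2.1x the requests on 20% less power than A.
a = perf_per_watt(100_000, 1000)  # 100.0 requests/sec per watt
b = perf_per_watt(210_000, 800)   # 262.5 requests/sec per watt
print(b / a)  # 2.625, B's efficiency advantage under these assumed numbers
```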
Improve performance and gain room to grow by easily migrating to a modern Ope... - Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Boost PC performance: How more available memory can improve productivity - Principled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg... - Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... - Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9 ms. For reference, the PowerEdge R760 cluster clocked in at 3.8 ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operations adapter provided useful infrastructure insights.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man... - Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than Supermicro servers did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ... - Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a full NVMe backed configuration, but Vendor A doesn’t—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWSPrincipled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PBs—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
A tale of scale & speed: How the US Navy is enabling software delivery from l...
Change your desktops, change your business
MARCH 2015 (Revised)
A PRINCIPLED TECHNOLOGIES REPORT
Commissioned by Intel Corp.
Your business faces a difficult decision with an aging desktop fleet. You ultimately have two choices: replace the systems now, or push the capital expense to the next fiscal year. How do you know when it's the right time to invest in a boost for your business?

To help you make the best decision, we compared two new desktops powered by Intel processors, an All-in-One PC and a Mini Desktop, to a five-year-old legacy desktop tower much like the ones your business might have in place. Through thorough hands-on testing and research, we looked at the benefits your business could gain by replacing a fleet of legacy desktop towers with new All-in-One PCs or Mini Desktops.

Your employees have the opportunity to be more productive with faster and more reliable new desktops. New desktops are also more efficient, so your business can realize significantly lower power costs. By leveraging both improved and new technology, IT staff and users benefit from simplified, out-of-band management while gaining advantages in connectivity and workspace savings.
A Principled Technologies test report: Change your desktops, change your business
ENABLE EMPLOYEES TO DO MORE WITH FASTER SYSTEMS
Promote productivity with up to 183% better system performance
Waiting on a slow PC can be painful. A faster, more responsive desktop means your employees can finish the same tasks in less time, ultimately providing your employees an opportunity to be more productive. We used three industry-standard benchmarks to show the performance boost your employees can get with a new Intel processor-based All-in-One PC or Mini Desktop. The new Intel processor-powered desktops outperformed the legacy desktop tower on all of the performance benchmarks.

For detailed specifications of the test systems, see Appendix A. For step-by-step details on how we performed our benchmark testing, see Appendix D. For more on the benchmarks, see Appendix E.
As Figure 1 shows, we found the All-in-One PC and Mini Desktop provided significantly better system performance than the legacy desktop tower: 183 percent and 135 percent better, respectively. Note: We calculated the percentage wins for the All-in-One PC and Mini Desktop on each benchmark, and then calculated system performance by taking the geometric mean of those percentage wins versus the legacy desktop tower. For more details on how we calculated the system performance, see Appendix B.
Figure 1: System performance improvement for the All-in-One PC and Mini Desktop vs. the legacy desktop tower. Higher numbers are better.
So what does better system performance mean for your business? If your average employee spends 10 minutes a day waiting to do the types of tasks included in our benchmarks, 183 percent better system performance means your average employee could save over 6 minutes per day, or nearly 25 hours per year, when moving from the legacy desktop tower to the All-in-One PC. As Figure 2 shows, replacing desktops represents a significant opportunity for increased productivity as the size of the desktop fleet increases. For example, a business replacing 10,000 legacy desktop towers with 10,000 new All-in-One PCs could see significant savings over four years: 6.475 minutes saved daily per employee would add up to over 990,000 total hours saved that employees could use for other tasks.1,2
Figure 2: Time saved in hours over four years when replacing a fleet of legacy desktop towers with All-in-One PCs.
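The fleet-level figure above can be reproduced with a few lines of arithmetic. This is a minimal sketch using the report's stated inputs (10 minutes of daily waiting on the legacy tower, a 183.7 percent performance improvement, a 46-week work year, and a four-year horizon); the variable names are ours.

```python
# Fleet-level productivity savings from the report's stated assumptions.
legacy_wait_min = 10.0        # minutes of waiting per employee per day (legacy tower)
perf_improvement = 1.837      # 183.7% better system performance (All-in-One PC)
workdays_per_year = 46 * 5    # 46-week work year (footnote 2)
years = 4
fleet_size = 10_000

# The same task mix takes proportionally less time on the faster system.
new_wait_min = legacy_wait_min / (1 + perf_improvement)   # ~3.525 minutes
saved_per_day = legacy_wait_min - new_wait_min            # ~6.475 minutes

total_hours_saved = saved_per_day * workdays_per_year * years * fleet_size / 60
print(round(new_wait_min, 3))    # ~3.525 minutes per day
print(round(saved_per_day, 3))   # ~6.475 minutes per day
print(round(total_hours_saved))  # over 990,000 hours for the fleet
```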
PAY LESS FOR POWER WITH MORE EFFICIENT SYSTEMS
Save at the outlet with up to 60% fewer watts at idle and up to 59% fewer watts under load
Why waste energy and dollars with inefficient older desktops? We found that both the All-in-One PC and Mini Desktop consumed significantly less power than the legacy desktop tower, and notably, consumed less power under load than the legacy desktop tower consumed at idle. The All-in-One PC consumed 55 percent fewer average watts while idle and 53 percent fewer average watts under load than did the legacy desktop tower. The Mini Desktop consumed 60 percent fewer average watts while idle and 59 percent fewer average watts under load than did the legacy desktop tower. Figure 3 shows the power usage of all three systems while idle and under load.

1 Based on a conservative estimate that a hypothetical mix of productivity tasks would take an average of 10 minutes per employee per day with the legacy desktop tower. With 183.7 percent better system performance, the same mix of tasks would take an average of 3.525 minutes per employee per day with the All-in-One PC, saving 6.475 minutes per employee per day.
2 Based on a 46-week work year per employee, which includes holidays, vacation, and sick leave (www.bls.gov/opub/btn/volume-2/paid-leave-in-private-industry-over-the-past-20-years.htm).
Figure 3: The average number of watts each system consumed while idle and under load. Lower numbers are better.
As Figure 4 shows, replacing older, inefficient desktops can significantly affect the bottom line. For example, a business replacing 10,000 legacy desktop towers with 10,000 new Mini Desktops could realize significant savings over four years: $8.88 saved annually per employee would add up to $355,200 in total power cost savings.3
Figure 4: Power consumption savings in dollars over four years when replacing a fleet of legacy desktop towers with Mini Desktops.
3 Based on an eight-hour SYSmark load for average power consumption per employee per day, a 46-week work year per employee, and average US commercial power costs of $0.1075 per kilowatt-hour from the U.S. Energy Information Administration (www.eia.gov/electricity/monthly/pdf/epm.pdf).
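Figure 4's fleet total follows directly from the per-system figure. This short sketch uses the report's inputs ($8.88 saved annually per system, 10,000 systems, four years, and footnote 3's rate and duty cycle); the implied average wattage reduction is our back-calculation, not a measured value.

```python
# Fleet power savings and the implied average power reduction (our back-calculation).
annual_savings_per_system = 8.88   # USD per year, Mini Desktop vs. legacy tower (report)
fleet_size = 10_000
years = 4
rate_per_kwh = 0.1075              # USD, average US commercial rate (footnote 3)
hours_per_day = 8                  # eight-hour SYSmark load (footnote 3)
workdays_per_year = 46 * 5         # 46-week work year

fleet_savings = annual_savings_per_system * fleet_size * years
print(round(fleet_savings, 2))     # ~355,200 USD, matching Figure 4

# Back out the average power reduction implied by the dollar figure:
kwh_saved_per_year = annual_savings_per_system / rate_per_kwh        # ~82.6 kWh
avg_watts_saved = kwh_saved_per_year / (workdays_per_year * hours_per_day) * 1000
print(round(avg_watts_saved, 1))   # roughly 45 W lower average draw under load
```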
5. A Principled Technologies test report 5Change your desktops, change your business
MAKE IT STAFF MORE EFFECTIVE AND KEEP BUSINESS RUNNING
Lower repair costs and reduce employee downtime with out-of-band support
Replacing desktops can also enable IT to lower costs, reduce employee downtime, and support more systems. The All-in-One PC and Mini Desktop in our study included the latest Intel vPro™ technology and the latest version of Intel Active Management Technology (Intel AMT).4 The legacy desktop tower in our study shipped with an older release of Intel AMT (5.2), which meant that it did not support hardware-based Keyboard-Video-Mouse (KVM) Remote Control.5
With a fleet of new All-in-One PCs or Mini Desktops, your IT staff can access the graphical user interface (GUI) and control desktops remotely, regardless of power state. For non-operational desktops, IT staff can use this out-of-band access to remotely resolve some problems that, with a fleet of legacy desktop towers, would have required an expensive deskside visit. The travel time for deskside visits with legacy desktop towers ultimately increases the cost for IT to make repairs and increases employee downtime and lost productivity. In our testing, it took only 15 seconds to launch a KVM session on either the All-in-One PC or Mini Desktop, which translates to significantly less overhead for your IT staff.
The ability to use KVM Remote Control to support non-operational systems provides the following advantages for your IT:

Fewer truck rolls to remote sites – IT can spend less time behind the wheel. For example, MSPs have reported a 90 percent reduction in deskside visits per month to a typical customer when they use KVM Remote Control to support Intel vPro systems.6

Reduce time to fix non-operational systems – IT can diagnose and fix non-operational systems more quickly using KVM Remote Control, which can mean more time for other tasks.

Fix hardware problems in one visit instead of two – IT can diagnose problems with KVM Remote Control and deploy a technician with parts, rather than making one visit for diagnosis and a second visit for the repair.

Reduce downtime for your employees – Faster problem resolution ultimately means less lost productivity and less downtime for your employees.
Figures 5 through 8 illustrate potential savings for your business with a single support call for a non-operational system. In this scenario, an employee at a remote site calls the help desk to report a problem with her desktop, which is non-operational with a critical failure; IT would need 10 minutes to resolve the problem with a KVM Remote Control session or a deskside visit.

4 For more information on Intel AMT, visit www.intel.com/content/www/us/en/architecture-and-technology/intel-active-management-technology.html?wapkw=amt.
5 software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm?turl=WordDocuments%2Fkvmandintelamt.htm
6 Based on field testing: msp.intel.com/assets/CircleComputerResourcesCS.pdf
Figure 5 illustrates the potential savings for employee downtime with the All-in-One PC or Mini Desktop. With the legacy desktop tower in this scenario, the employee would have to wait until the following business day for a deskside visit, representing eight business hours of downtime after the initial call. With the All-in-One PC and Mini Desktop, the employee would have to wait only 15 seconds for IT to initiate the KVM Remote Control session.7 In this scenario, replacing desktops would enable IT to reduce employee downtime by almost eight business hours for a single repair.
Figure 5: Lost productivity due to employee downtime with a non-operational system, in hours. Lower numbers are better.
Replacing desktops would enable IT to reduce employee downtime significantly when supporting non-operational systems like the one in our scenario. Instead of waiting eight business hours for the repair, employees would be able to get back to work much sooner with a faster resolution of the critical software problem. Figure 6 shows the value of the reduced employee downtime with the All-in-One PC or Mini Desktop. The All-in-One PC or Mini Desktop would reduce the cost of employee downtime by $215.08 for the 10-minute software repair, a savings of nearly 98 percent.8
7 We assumed an equal amount of time for IT to generate the support ticket with KVM Remote Control and the deskside visit, and did not include that time in our analysis. We also assumed that the employee was unproductive while IT staff completed the repair.
8 We estimated an average end-user cost of $0.4483 per minute, based on an average annual total compensation of $55,954.00, which includes an average 2013 US salary of $43,041.39 (www.ssa.gov/OACT/COLA/AWI.html) plus 30 percent in benefits.
Figure 6: Cost of lost productivity due to employee downtime with a non-operational system, in US dollars. Lower numbers are better.
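The downtime costs in Figure 6 can be re-derived from the scenario's assumptions. This is a sketch using the report's inputs (eight business hours of waiting plus a 10-minute repair on the legacy tower, a 15-second KVM launch plus the same repair on the new desktops, and footnote 8's $0.4483 per minute); small rounding differences from the report's $215.08 are expected.

```python
# Employee downtime cost for one non-operational-system support call.
cost_per_min = 0.4483            # end-user cost per minute (footnote 8)

# Legacy tower: the employee waits until the next business day (8 business hours),
# then is down for the 10-minute deskside repair.
legacy_downtime_min = 8 * 60 + 10
# New desktops: 15 seconds to launch the KVM session, then the 10-minute repair.
new_downtime_min = 15 / 60 + 10

legacy_cost = legacy_downtime_min * cost_per_min
new_cost = new_downtime_min * cost_per_min
savings = legacy_cost - new_cost
print(round(legacy_cost, 2), round(new_cost, 2), round(savings, 2))  # ~215 USD saved
print(round(100 * savings / legacy_cost, 1))                         # nearly 98 percent
```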
Figure 7 illustrates the potential savings for IT labor with the All-in-One PC or Mini Desktop. With the legacy desktop tower in this scenario, IT would see significantly more overhead, with one hour of total travel time to and from the remote site for the deskside visit. With the All-in-One PC or Mini Desktop, IT would have only 15 seconds of overhead, equal to the time it would take to initiate the KVM Remote Control session. Replacing desktops would enable IT to reduce the time required to complete the repair by almost one hour in this scenario.
Figure 7: IT labor to repair a non-operational system, in hours. Lower numbers are better.
Reducing the amount of time IT staff need to repair non-operational systems can enable IT to support more systems and complete other tasks. Figure 8 shows the value of the time IT could save by replacing legacy desktop towers with new desktops. The 10-minute software repair would cost $39.65 less with the All-in-One PC or Mini Desktop due to the elimination of travel time, a cost savings of 85 percent.9
Figure 8: Cost of IT labor to repair a non-operational system, in US dollars. Lower numbers are better.
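The IT-labor side of the same scenario works out the same way. A sketch using footnote 9's $0.6635 per minute, one hour of travel plus the 10-minute repair for the legacy tower, and the 15-second KVM launch plus the repair for the new desktops; minor rounding differences from the report's $39.65 are expected.

```python
# IT labor cost for one non-operational-system repair.
it_cost_per_min = 0.6635          # IT cost per minute (footnote 9)

legacy_labor_min = 60 + 10        # one hour of travel plus the 10-minute repair
new_labor_min = 15 / 60 + 10      # 15-second KVM launch plus the 10-minute repair

legacy_cost = legacy_labor_min * it_cost_per_min
new_cost = new_labor_min * it_cost_per_min
savings = legacy_cost - new_cost
print(round(legacy_cost, 2), round(new_cost, 2), round(savings, 2))  # ~39.65 USD saved
print(round(100 * savings / legacy_cost))                            # ~85 percent
```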
Reduce repair costs
An aging desktop fleet ultimately means that your IT staff and employees are using systems that are less reliable. Compared to new desktops, aging systems require more IT time for support and repairs, and users can see more downtime.

We looked at several studies that compared repair rates for older and newer systems. They showed different failure rates in specific years, but they all suggested that costs trend upward as systems age, indicating that businesses could see lower repair rates and costs for the All-in-One PC and Mini Desktop compared to our legacy desktop tower. For example, an Intel TCO analysis shows desktop failure rates of just 3 percent in year three of ownership, 10 percent in year four, and 18 percent in year five.
Older systems like the legacy desktop tower can add more repair costs when failures happen outside the original three-to-five-year OEM warranty period. Once older systems are out of warranty, businesses can assume repairs internally or purchase out-of-warranty coverage from the OEM or a third party, both at a cost. Internal repairs add costs for parts, for desktop support staff to diagnose and repair problems, and for parts inventory management. Dell estimates that a service repair not covered by warranty can cost between $150 and $699.10 Much of the cost for out-of-warranty repairs comes from parts: a replacement hard drive for the legacy desktop tower could cost $90 or more,11 and costs to replace multiple parts on a system can exceed $600.12

9 We estimated an IT cost of $0.6635 per minute, based on an average annual total compensation estimate from Salary.com of $82,809 for a Help Desk Support senior-level position.
The alternative of extending warranty coverage from the vendor would add $119 per year per system for the legacy desktop tower in this study.13 Extended warranties, however, are ultimately an imperfect solution. Some vendors limit extended coverage to within the first three to five years from the date of purchase, and might not provide coverage for a legacy system the age of the one we studied.14 Extended warranties also do not cover all repair costs, leaving businesses to pay for some costs out of pocket. For example, parts coverage in the extended warranty for the legacy desktop tower in our study “is limited to one major part per product per 12-month period commencing from the Care Pack start date.”15
LEVERAGE THE NEWEST TECHNOLOGY
Gain the benefits of touch
Touch-interactive technology is becoming as popular in the office as it is at home. An Intel study shows respondents overwhelmingly preferred laptops with touch,16 and shipments of touchscreen monitors are seeing growth.17
Microsoft® Windows® 8.1 does not require touch technology, but a touch-enabled version of the operating system can allow users to access information, collaborate, and make presentations through quick touch gestures. On the All-in-One PC with the included touchscreen display, users can engage with content such as Web pages, images, videos, PDFs, and email in touch-enabled apps such as Microsoft Office 365™, and then switch to the keyboard and mouse for typing and editing content, choosing the more comfortable interface for each task.
10 www.dell.com/content/topics/segtopic.aspx/service_ext_warranty?c=us&l=en&cs=19
11 Based on the price of an individual drive of the model in the legacy desktop tower: www.newegg.com/Product/Product.aspx?Item=9SIA67S2G40001
12 www.zdnet.com/article/pc-laptops-and-accidental-damage-best-and-worst-warranties-2014/
13 Cost for HP 1y PW 4h 9x5 LE-Desktop CPU HW Support - LE-Desktop CPU Only onsite response, 8am-5pm - standard business days excluding HP holidays: h30094.www3.hp.com/product/sku/2551404/mfg_partno/U4858PE
14 www.dell.com/learn/my/en/mydhs1/campaigns/warranty-extension-sg
15 h30094.www3.hp.com/product/sku/2551404/mfg_partno/U4858PE
16 www.neowin.net/news/intel-80-percent-of-pc-users-prefer-touch-screens and www.intelfreepress.com/news/do-people-want-touch-on-laptop-screens/197/
17 www.businesswire.com/news/home/20141215006340/en/Worldwide-PC-Monitor-Market-Aided-Touch-Screen#.VJPVqCsgBg
For the All-in-One PC that we tested, the touch-enabled display added $186 to the starting price over a similarly configured model without touch.18 If the touchscreen on the All-in-One PC can deliver a productivity benefit of just one minute per day, the optional touch capability could pay back its added cost in under 20 months.19,20
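The payback claim can be checked directly from footnotes 18 and 19. A minimal sketch; both input figures are the report's, and we simply divide the premium by the monthly value of the saved minute.

```python
# Payback period for the touch-display premium on the All-in-One PC.
touch_premium = 186.00     # added cost of the touch-enabled display (footnote 18)
value_per_month = 9.72     # value of one minute saved per day, per month (footnote 19)

payback_months = touch_premium / value_per_month
print(round(payback_months, 1))   # ~19.1 months, i.e. "under 20 months"
```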
Connect via USB 3.0, DisplayPort, HDMI, and Wireless AC
Replacing desktops can enable your business to take advantage of the latest USB technology. The All-in-One PC and Mini Desktop included USB 3.0 ports (eight and six, respectively), while the legacy desktop tower offered USB 2.0 ports. USB 3.0 ports, such as those on the two Intel processor-powered desktops, can provide up to 4.8 Gbps transfer rates to USB 3.0 devices, up to 10 times faster than the transfer rate for USB 2.0 connections to USB 2.0 devices that you could get with the legacy desktop tower.
Replacing desktops can also bring the latest display technology to your business. In addition to the touchscreen display on the All-in-One PC, both the All-in-One PC and Mini Desktop included DisplayPort, which can support higher-performance, lower-power displays than the legacy desktop tower we tested. The All-in-One PC also offered HDMI; either DisplayPort or HDMI on the All-in-One PC can be used to add a second display.
Replacing desktops can also boost your business with the latest in wireless technology. The All-in-One PC and Mini Desktop both included Intel Dual Band Wireless-AC 7260 cards, while the legacy desktop tower we tested did not include a wireless card. Replacing a fleet of legacy desktop towers that lack wireless cards provides more flexibility by removing the requirement of an Ethernet port and cable for each desktop.
Save workspace with All-in-One and Mini form factors
New desktops are available with the latest Intel technology in improved form factors that take up less workspace than big desktop towers. As Figure 9 shows, the All-in-One PC and Mini Desktop required significantly less workspace than the legacy desktop tower in our study: 59 and 60 percent fewer square inches, respectively. Note: For more details on how we calculated workspace for the three systems, see Appendix C.
18 The starting price ($1,598) plus three-year ProSupport Service brought the price to $1,677.57 for the All-in-One PC with a non-touch-enabled display, Intel Core i5 processor, and 8 GB of memory. The starting price (including three-year ProSupport Service) for the All-in-One PC with the same processor and memory and a touch-enabled display was $1,863.29 via www.dell.com/us/business/p/optiplex-9030-aio/fs?pf=v on 12/19/2014.
19 A minute a day, valued at $9.72 ($350/36) per month, could provide payback for a $186 cost in 19.1 months.
20 Note: We tested the All-in-One PC with Windows 7 installed so that the system configuration closely matched the legacy desktop tower. The All-in-One PC and Mini Desktop are available with either Windows 7 or Windows 8.1.
Figure 9: Total workspace based on footprint for each system. Lower numbers are better.
As Figure 10 shows, the workspace savings from replacing desktops can have an enormous impact at scale. For example, a business replacing 10,000 legacy desktop towers with 10,000 new Mini Desktops could save 7,708 sq. ft. of valuable workspace.
Figure 10: Workspace saved in square feet when replacing a fleet of legacy desktop towers with Mini Desktops.
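The fleet-scale savings in Figure 10 follow directly from the per-seat savings in Appendix C. A quick sketch (illustrative only; it assumes the 111.0-square-inch per-seat figure from Appendix C):

```python
# Fleet-scale workspace savings: per-seat savings in square inches,
# converted to square feet (144 sq. in. per sq. ft.).
per_seat_savings_sq_in = 111.0   # Mini Desktop vs. legacy tower (Appendix C)
fleet_size = 10_000

saved_sq_ft = per_seat_savings_sq_in * fleet_size / 144
print(round(saved_sq_ft))        # 7708
```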
CONCLUSION
Replacing an aging desktop fleet is an important investment in your business.
Faster and more reliable systems that offer the latest technology provide your
employees an opportunity to be more productive with less waiting while they work.
New desktops also benefit your IT staff, with significantly lower power costs and support
for out-of-band management to reduce costly deskside visits.
APPENDIX A – SYSTEM CONFIGURATION INFORMATION
Figure 11 details the systems we used in our tests.
System All-in-One PC Mini Desktop Legacy Desktop Tower
General
Number of processor packages 1 1 1
Number of cores per processor 4 4 2
Number of hardware threads per core 1 1 1
Total number of processor threads in system 4 4 2
System power management policy Balanced Balanced Balanced
Processor power-saving option EIST EIST EIST
Dimensions (h × l × w in inches) 20.50 × 22.25 × 8.50 1.50 × 7.38 × 7.19 17.63 × 17.38 × 7.00
Weight (lbs.) 24.1 2.5 23.8
CPU
Vendor Intel Intel Intel
Name Core™ i5 Core i5 Core 2 Duo
Model number 4590S 4590T E8400
Stepping C0 C0 E0
Socket type Socket 1150 LGA Socket 1150 LGA Socket 775 LGA
Core frequency (GHz) 3.0 / 3.7 Turbo 2.0 / 3.0 Turbo 3.0
Bus frequency DMI2 5 GT/s DMI2 5 GT/s 1,333 MHz
L1 cache 4 × 32 KB + 4 × 32 KB 4 × 32 KB + 4 × 32 KB 2 × 32 KB + 2 × 32 KB
L2 cache 4 × 256 KB 4 × 256 KB 6 MB
L3 cache 6 MB 6 MB N/A
Platform
Vendor and system name Dell™ OptiPlex™ 9030 All-in-One | Dell OptiPlex 9020 Micro | HP Compaq 8000 Elite Convertible Minitower
Motherboard model number 0VNGWR 0Y5DDC 3647h
Motherboard chipset Intel Q87 Intel Q87 Intel Q45/Q43
BIOS name and version Dell A03 (09/15/2014) | Dell A02 (08/11/2014) | Hewlett-Packard 786G7 v01.13 (07/20/2011)
AMT Version 5.2.50 9.1.0 9.1.0
System All-in-One PC Mini Desktop Legacy Desktop Tower
Memory module(s)
Vendor and model number Samsung® M471B5173DB0-YK0 | Hyundai Electronics HMT41GS6BFR8A-PB | Samsung M378B5673EH1-CH9
Type PC3-12800 PC3-12800 PC3-10600
Speed (MHz) 1,600 1,600 1,333
Speed running in the system (MHz) 1,600 1,600 1,066
Timing/Latency (tCL-tRCD-tRP-tRASmin) 11-11-11-28 11-11-11-28 7-7-7-20
Size (MB) 4,096 8,192 2,048
Number of memory module(s) 2 1 2
Total amount of system RAM (GB) 8 8 4
Channel (single/dual) Dual Single Dual
Hard disk
Vendor and model number Seagate® ST500LT012 Samsung PM851 SSD Seagate ST3320418AS
Number of disks in system 1 1 1
Size (GB) 500 128 320
Buffer size (MB) 16 N/A 16
RPM 5,400 N/A 7,200
Type SATA 6.0 Gb/s SATA 6.0 Gb/s SATA 3.0 Gb/s
Controller Intel 8 Series/C220 Chipset Family SATA AHCI | Intel 8 Series/C220 Chipset Family SATA AHCI | Intel Express Chipset SATA RAID Controller
Driver Intel 12.8.7.1000 (10/18/2013) | Intel 13.0.0.1098 (02/05/2014) | Intel 10.1.0.1008 (11/06/2010)
Operating system
Name Windows 7 Professional x64 | Windows 7 Professional x64 | Windows 7 Professional x64
Build number 7600 7600 7600
Service Pack 1 1 1
File system NTFS NTFS NTFS
Kernel ACPI x64-based PC ACPI x64-based PC ACPI x64-based PC
Language English English English
Microsoft DirectX® version 11 11 11
System All-in-One PC Mini Desktop Legacy Desktop Tower
Graphics
Vendor and model number Intel HD Graphics 4600 Intel HD Graphics 4600 Intel GMA 4500
Type Integrated Integrated Integrated
Chipset Intel HD Graphics Family | Intel HD Graphics Family | Intel 4 Series Express Chipset Family
BIOS version 2179.5 2179.1 2066.0
Total available graphics memory (MB) 1,696 1,696 1,695
Dedicated video memory (MB) 64 64 64
System video memory (MB) 0 0 0
Shared system memory (MB) 1,632 1,632 1,631
Resolution 1,920 × 1,080 1,920 × 1,080 1,920 × 1,080
Driver Intel 10.18.10.3412 (01/29/2014) | Intel 10.18.10.3412 (01/29/2014) | Intel 8.15.10.2413 (06/03/2011)
Sound card/subsystem
Vendor and model number Realtek® High Definition Audio | Realtek High Definition Audio | Realtek High Definition Audio
Driver Realtek 6.0.1.6053 (10/16/2014) | Realtek 6.0.1.6032 (04/25/2014) | Realtek 6.0.1.6383 (05/31/2011)
Ethernet
Vendor and model number Intel Ethernet Connection I217-LM | Intel Ethernet Connection I217-LM | Intel 82567LM-3 Gigabit Controller
Driver Intel 12.12.50.4 (06/12/2014) | Intel 12.12.50.7205 (07/31/2014) | Intel 12.10.13.0 (12/16/2013)
Wireless
Vendor and model number Intel Dual Band Wireless-AC 7260 | Intel Dual Band Wireless-AC 7260 | N/A
Driver Intel 17.0.5.8 (06/18/2014) | Intel 17.0.5.8 (06/18/2014) | N/A
Optical drive(s)
Vendor and model number TSSTcorp SU-208FB N/A HP TS-H653R
Type DVD-RW N/A DVD-RW
USB ports
Number 8 6 10
Type 3.0 3.0 2.0
Other Media card reader, DisplayPort, 1 × HDMI In, 1 × HDMI Out | DisplayPort | N/A
System All-in-One PC Mini Desktop Legacy Desktop Tower
Monitor
Model Dell All-in-One Touch Display | Dell P2314H | ViewSonic® VG730m
LCD type Backlit LED Backlit LED SXGA LCD
Screen size 23″ 23″ 17″
Refresh rate (Hz) 60 60 60
Maximum resolution 1,920 × 1,080 1,920 × 1,080 1,280 × 1,024
Figure 11: Detailed information for our test systems.
Figure 12 describes the configuration of the server that we used to access the desktops for management tasks.
System Dell PowerEdge R620
Power supplies
Total number 2
Vendor and model number Dell D750E-S1
Wattage of each (W) 750
Cooling fans
Total number 7
Vendor and model number Delta® GFC0412DS
Dimensions (h × w) of each 1-1/2″ × 1-3/4″
Volts 12
Amps 1.82
General
Number of processor packages 2
Number of cores per processor 8
Number of hardware threads per core 2
System power management policy Default
CPU
Vendor Intel
Name Xeon®
Model number E5-2660
Socket type LGA 2011
Core frequency (GHz) 2.20
Bus frequency 8 GT/s
L1 cache 32 KB + 32 KB (per core)
L2 cache 256 KB (per core)
L3 cache 20 MB
Platform
Vendor and model number Dell PowerEdge™ R620
BIOS name and version Dell 2.4.3 (10/10/2014)
BIOS settings Default
System Dell PowerEdge R620
Memory module(s)
Total RAM in system (GB) 128
Vendor and model number Micron® MT36KSF1G72PZ
Type PC3-12800R
Speed (MHz) 1,600
Speed running in the system (MHz) 1,600
Timing/Latency (tCL-tRCD-tRP-tRASmin) 9-9-9-24
Size (GB) 8
Number of RAM module(s) 20
Chip organization Double-sided
Rank Dual
Operating system
Name Microsoft Windows Server® 2012 R2 Datacenter
Build number 9600
File system NTFS
Kernel 6.2.9600.16452
Language English
RAID controller
Vendor and model number Dell PERC H710P Mini
Firmware version 21.3.1-0009 06/23/2014
Driver version 6.802.21.00 08/27/2014
Cache size (MB) 1,024
Local storage
Vendor and model number Seagate ST9900806SS
Number of drives 4
Size (GB) 900
RPM 10K
Type SAS
Ethernet adapters
Vendor and model number Intel Gigabit I350 4-port
Type Integrated
Driver 12.11.97.1 (08/13/2014)
USB 2.0 ports 4
Figure 12: Detailed information for our test server.
APPENDIX B – HOW WE CALCULATED OVERALL SYSTEM PERFORMANCE
We included the following benchmarks in our performance/productivity calculation:
HDXPRT 2012
SYSmark® 2014
3DMark® Ice Storm
We ran the three benchmarks on the All-in-One PC, Mini Desktop, and legacy desktop tower, repeating each test
three times. For our calculations, we selected the median of the three results for each benchmark and system. We
calculated the percentage wins for the All-in-One PC and Mini Desktop versus the legacy desktop tower on each
benchmark. The All-in-One PC and Mini Desktop outperformed the legacy desktop tower on all three benchmarks.
We calculated system performance by taking the geometric mean of the percentage wins; that gave us a 183.7
percent system performance improvement for the All-in-One PC versus the legacy desktop and a 135.9 percent system
performance improvement for the Mini Desktop (see Figure 13).
Benchmark | All-in-One PC score | % improvement over legacy | Mini Desktop score | % improvement over legacy | Legacy Desktop Tower score
HDXPRT (Create HD score) | 229 | 129% | 193 | 93% | 100
SYSmark (Overall Rating) | 1,417 | 133% | 1,334 | 120% | 607
3DMark Ice Storm (Median) | 57,050 | 361% | 40,258 | 225% | 12,385
Geometric mean of percentage wins | | 183.7% | | 135.9% |
Figure 13: System performance improvement based on our benchmark testing.
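The Appendix B arithmetic can be reproduced from the Figure 13 scores with a short sketch. This is illustrative Python, not part of the original procedure; the geometric means land within about 0.1 of the published 183.7 and 135.9 percent values because the report rounds the intermediate percentages.

```python
from math import prod

def pct_win(new_score, legacy_score):
    """Percentage improvement of a new system over the legacy tower."""
    return (new_score - legacy_score) / legacy_score * 100

def geomean(values):
    """Geometric mean of the percentage wins."""
    return prod(values) ** (1 / len(values))

legacy = [100, 607, 12385]                  # HDXPRT, SYSmark, 3DMark Ice Storm
aio    = [229, 1417, 57050]
mini   = [193, 1334, 40258]

aio_wins  = [pct_win(n, l) for n, l in zip(aio, legacy)]
mini_wins = [pct_win(n, l) for n, l in zip(mini, legacy)]

print([round(w) for w in aio_wins])         # [129, 133, 361]
print([round(w) for w in mini_wins])        # [93, 120, 225]
print(round(geomean(aio_wins), 1), round(geomean(mini_wins), 1))
```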
APPENDIX C – HOW WE CALCULATED WORKSPACE
Figure 14 shows the dimensions for the three systems and the workspace savings for the All-in-One PC and Mini
Desktop. The base of the All-in-One PC was roughly the same size as the 9″ circular base of the monitor that we paired
with the legacy desktop tower. The legacy desktop tower chassis (17.6″ × 17.4″ × 7.0″) took up 121.7 square inches of
desk space. The circular base of the monitor added 63.6 square inches, for a total footprint of 185.3 square inches. With
a footprint of 76.5 square inches, the All-in-One PC was 108.8 square inches smaller than the legacy desktop tower—a
savings of 59 percent.
The Mini Desktop chassis (1.5″ × 7.4″ × 7.2″) took up 11.1 square inches of desk space. We used monitors with
similar footprints for the Mini Desktop (63.2 square inches) and the legacy desktop tower (63.6 square inches).
Including the monitors, the footprint of the Mini Desktop was 111.0 square inches smaller than the footprint of the
legacy desktop tower—a savings of 60 percent.
System | Monitor: length × width (inches) | Monitor: area (sq. inches) | System: length × width (inches) | System: area (sq. inches) | Total area (sq. inches) | Advantage vs. legacy (sq. inches) | Advantage (%)
All-in-One PC | N/A | N/A | 8.5 × 9.0 | 76.5 | 76.5 | 108.8 | 59%
Mini Desktop (on its narrow side) | 7.1 × 8.9 | 63.2 | 7.4 × 1.5 | 11.1 | 74.3 | 111.0 | 60%
Legacy Desktop Tower (circular base) | 9.0 | 63.6 | 17.4 × 7 | 121.7 | 185.3 | N/A | N/A
Figure 14: Dimensions of the three systems and the workspace savings for the All-in-One PC and Mini Desktop.
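The footprint arithmetic above can be reproduced directly from the dimensions in Figures 11 and 14. A sketch (illustrative only; it treats the legacy monitor base as a 9-inch circle, as the text describes):

```python
from math import pi

# Legacy desktop tower: chassis footprint plus circular monitor base.
tower_chassis = 17.38 * 7.00                  # ~121.7 sq. in.
tower_monitor = pi * (9.0 / 2) ** 2           # 9" circular base, ~63.6 sq. in.
tower_total = tower_chassis + tower_monitor   # ~185.3 sq. in.

# All-in-One PC: the display base is the entire footprint.
aio_total = 8.5 * 9.0                         # 76.5 sq. in.

# Mini Desktop on its narrow side, plus its monitor base.
mini_total = 7.1 * 8.9 + 7.38 * 1.5           # ~74.3 sq. in.

print(round(tower_total - aio_total, 1),      # 108.8
      round((tower_total - aio_total) / tower_total * 100))   # 59
print(round(tower_total - mini_total, 1),     # 111.0
      round((tower_total - mini_total) / tower_total * 100))  # 60
```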
APPENDIX D – HOW WE TESTED
Measuring system performance
HDXPRT 2012
Setting up the test
1. Insert the HDXPRT v1.1 DVD-ROM into your DVD drive.
2. At the HDXPRT Install screen, click Install HDXPRT.
3. Accept the HDXPRT end-user license agreement.
4. After the setup is complete, select Yes, I want to restart my computer now, and click Finish.
Running the test
1. On the desktop, click the HDXPRT 2012 shortcut.
2. Click Run HDXPRT.
3. Enter a test name, choose 3 iterations, and click Run.
4. The Results Screen automatically appears at the end of a successful run.
BAPCo® SYSmark 2014 v1.0.1.21
Antivirus software conflicts
SYSmark 2014 is not compatible with any virus-scanning software, so we uninstalled any such software that was
present on the desktop PCs before we installed the benchmark.
Pre-installed software conflicts
SYSmark 2014 installs the following applications, which its test scripts employ:
Adobe® Acrobat® XI Pro
Adobe Photoshop® CS6 Extended
Adobe Premiere® Pro CS6
Google Chrome™
Microsoft Excel® 2013
Microsoft OneNote® 2013
Microsoft Outlook® 2013
Microsoft PowerPoint® 2013
Microsoft Word 2013
Trimble® SketchUp® Pro 2013
WinZip® Pro 17.5
Setting up the test
1. Disable the User Account Control.
2. Click Start→Control Panel.
3. At the User Accounts and Family Safety settings screen, click Add or remove user account.
4. At the User Account Control screen, click Continue.
5. Click Go to the main User Accounts page.
6. At the Make changes to your user account screen, click Turn User Account Control on or off.
7. At the User Account Control screen, click Continue.
8. Uncheck Use User Account Control to help protect your computer, and click OK.
9. At the You must restart your computer to apply these changes screen, click Restart Now.
10. Purchase SYSmark 2014 v1.0.1.21 from bapco.com/products/sysmark-2014 and install with default settings.
11. To launch SYSmark 2014, double-click the desktop icon, and select Configuration.
12. Select all options, and click Save.
Running the test
1. Double-click the desktop icon to launch SYSmark 2014.
2. Make sure Office Productivity, Media Creation, and Data/Financial Analysis are selected.
3. Enter a Project name.
4. Select 3 Iterations, check the box beside Conditioning Run and beside Process Idle Tasks, and click Run
Benchmark.
5. When the benchmark completes and the main SYSmark 2014 menu appears, click Save FDR to create a
report.
3DMark
Installing 3DMark v1.4.828
1. Download the 3DMark installer from www.futuremark.com/benchmarks/3dmark/all.
2. To install 3DMark with the default options, double-click the 3DMark installer exe file.
3. At the Welcome screen, click Next.
4. At the License Agreement screen, click I accept the terms of the license agreement, and click Next.
5. At the Setup Type screen, click Express, and click Next.
6. At the Ready to Install the Program screen, click Install.
7. At the Setup Complete screen, click Finish.
8. To launch 3DMark, double-click the 3DMark desktop icon. Enter the registration code, and click Register.
9. Exit 3DMark.
Running 3DMark v1.4.828
1. Boot the system and bring up a command prompt:
a. Type CMD into the Start menu search bar.
b. Right-click the Command Prompt app.
c. Click the Run as administrator button.
2. Type Cmd.exe /c start /wait Rundll32.exe advapi32.dll,ProcessIdleTasks
3. Do not interact with the system until the command completes.
4. After the command completes, wait 5 minutes before running the test.
5. To launch the benchmark, double-click the 3DMark desktop icon.
6. At the 3DMark Benchmark screen, click Run Ice Storm Unlimited.
7. When the benchmark run completes, record the results.
8. Complete steps 1 through 7 two more times, and report the median of the three runs.
Measuring power consumption
Test requirements
ExTech 380801 True RMS Power Analyzer
BAPCo SYSmark 2014 v1.0.1.21
Thermometer
Measuring system temperature while idle
Setting up the test
1. Using a power strip, plug the system under test (and monitor if used) into the ExTech.
2. Plug the ExTech into its own circuit.
3. On the system used to monitor the ExTech, open the Power Analyzer.
4. From the drop-down Option menu, select Sample Rate.
5. For the sample rate, enter 1.0, and select OK.
6. On the systems under test, set the power plan to the manufacturer’s default setting. Set the display
brightness to 100 percent:
a. Click Start.
b. In the Start menu’s quick search field, type Power Options
c. Move the Screen brightness slider all the way to the right.
7. Set the remaining power plan settings as follows:
Dim the display: Never
Turn off the display: Never
Put the computer to sleep: Never
8. Disable the screen saver.
Running the test
1. Boot the system and bring up an administrative command prompt:
2. Type Cmd.exe /c start /wait Rundll32.exe advapi32.dll,ProcessIdleTasks
3. Do not interact with the system until the command completes.
4. After the command completes, wait 5 minutes before running the test.
5. On the ExTech power monitor, enter a name for the test run.
6. Report the room temperature, and in the ExTech Power Analyzer window, click Start Recording
7. After 30 minutes, record the room temperature.
8. After 1 hour, record the room temperature and in the ExTech Power Analyzer window, click Stop Recording.
9. Record the average watts of the run as Idle Power consumption.
10. Record the average of the recorded room temperatures as the Idle Room Temperature.
Complete steps 1 through 10 two more times, and report the median of the three runs.
Measuring system temperature during the SYSmark test
Antivirus software conflict
SYSmark 2014 is not compatible with any virus-scanning software, so we uninstalled any such software from the
desktop PCs before we installed the benchmark.
Pre-installed software conflicts
SYSmark 2014 installs the following applications, which its test scripts employ:
Adobe® Acrobat® XI Pro
Adobe® Photoshop® CS6 Extended
Adobe® Premiere® Pro CS6
Google Chrome
Microsoft® Excel® 2013
Microsoft® OneNote® 2013
Microsoft® Outlook® 2013
Microsoft® PowerPoint® 2013
Microsoft® Word 2013
Trimble SketchUp™ Pro 2013
WinZip® Pro 17.5
If any of these applications are already on the system under test, they will conflict with the benchmark and
cause problems. Before we installed the benchmark, we uninstalled all conflicting pre-installed software applications,
including different versions of the programs SYSmark 2014 uses, to avoid such issues.
Setting up the test
1. Disable the User Account Control.
a. Click Start→Control Panel.
b. At the User Accounts and Family Safety settings screen, click Add or remove user account.
c. At the User Account Control screen, click Continue.
d. Click Go to the main User Accounts page.
e. At the Make changes to your user account screen, click Turn User Account Control on or off.
f. At the User Account Control screen, click Continue.
g. Uncheck Use User Account Control to help protect your computer, and click OK.
h. At the You must restart your computer to apply these changes screen, click Restart Now.
2. Purchase and install SYSmark 2014 v1.0.1.21 with default settings from bapco.com/products/sysmark-2014.
3. To launch SYSmark 2014, double-click the desktop icon, and select Configuration.
4. Select all options and click Save.
Running the test
1. Double-click the SYSmark 2014 desktop icon.
2. Make sure Office Productivity, Media Creation, and Data/Financial Analysis are selected.
3. Select 3 iterations and check the box beside Process Idle Tasks.
4. Enter a Project name.
5. On the ExTech power monitor, enter a name for the SYSmark test run.
6. Report the room temperature, and simultaneously click Start Recording in the ExTech Power Analyzer
window, and click Run Benchmark in the SYSmark 2014 window.
7. After 30 minutes, record the room temperature.
8. Continue to record the room temperature every 30 minutes for the duration of the run.
9. When the SYSmark 2014 run completes and displays the results window, click Stop Recording in the Power
Analyzer window.
10. Record the average watts of the run as Under Load Power Consumption.
11. Record the average of the recorded room temperature as the Under Load Room Temperature.
Configuring the SCCM management environment
Starting from our base Microsoft System Center 2012 Configuration Manager build, we configured our
environment to test KVM Remote Control capability on our target clients. Figure 15 shows our isolated testing
environment, which comprised one Dell PowerEdge R620 server running Windows Server 2008 R2 Hyper-V® with four
virtual machines. Before configuration, we installed all available Windows updates on our clients and joined each
desktop to the test.local domain. We created Active Directory accounts for AMT provisioning and installed Intel Setup
and Configuration Software (Intel SCS). Next, we provisioned the desktops using Configurator, provided by Intel, to apply
our custom management profile from SCS to each client’s management engine. We provide detailed steps below for
each of these procedures. Finally, we installed vendor-specific vPro management tools, which we used to connect to the
All-in-One PC and Mini Desktop using KVM Remote Control to complete our timed testing.
Description | Computer name | Operating system | Roles installed | Assigned IP | vCPU | vRAM
Domain controller | dc.test.local | Windows Server 2008 R2 Standard | Active Directory, Domain Controller, DHCP | 192.168.1.10 | 1 | 8 GB
Certificate authority | ca.test.local | Windows Server 2008 R2 Enterprise | Certification Authority | 192.168.1.15 | 1 | 8 GB
Database server | db.test.local | Windows Server 2008 R2 Standard | SQL Server 2012 Enterprise | 192.168.1.20 | 1 | 16 GB
Management server | cm.test.local | Windows Server 2008 R2 Standard | System Center 2012 Configuration Manager SP1 | 192.168.1.50 | 2 | 16 GB
Figure 15: The details of our isolated testing environment.
Constructing the infrastructure
Creating Active Directory accounts for AMT provisioning
1. Log into the Domain Controller using the domain\administrator account.
2. Open Active Directory Administrative Center.
3. Under test (local), click New→Group.
4. On the Create Group window, for Group name, use Kerberos Admins; for Group type, use security.
5. Add Kerberos Admins as a member of the Domain Admins group.
6. Add the computer account of the SCCM server to the Kerberos Admins security group.
7. Create an Organizational Unit for AMT managed systems. We used AMT.
8. Create a security group called AMT.
9. Add the Kerberos Admins group to the AMT security group.
Creating certificate templates for OOB management
1. Log into the Windows 2008 R2 Enterprise server designated for Certificate Authority as
domain\administrator.
2. Click Start→Administrative Tools→Certification Authority.
3. Right-click on test-CA-CA, and click Properties.
4. On the General tab, click View Certificate.
5. On the Details tab, scroll to and select Thumbprint. Copy the 40-character code displayed in the details. You
will add this information to the AMT BIOS later.
6. To close the Certificate Authority properties, click OK.
7. Expand the Certification Authority, and select Certificate Templates.
8. Right-click Certificate Templates, and select Manage.
9. Locate Web Server in the list of available certificate templates. Right-click the template, and select Duplicate
Template.
10. Select Windows 2003 Enterprise, and click OK.
11. Change the template name for the AMT Provisioning certificate. We used AMT Provisioning.
12. On the Subject Name tab, select Build from this Active Directory Information. Select Common Name, and
choose the option UPN.
13. On the Security tab, add the security group created for the SCCM site server. We used the Kerberos Admin
group. Add the Enroll permission for the security group. Ensure Domain Admins and Enterprise Admins have
Enroll permissions.
14. On the Extensions tab, select Application Policies, and click Edit.
15. Click Add. Click New. Type AMT Provisioning for the name, and 2.16.840.1.113741.1.2.3 as
the Object Identifier. Click OK.
16. Ensure AMT Provisioning and Server Authentication are listed, and click OK.
17. Click OK to close the template properties.
18. Right-click the AMT Web Server Certificate template, and select Duplicate Template.
19. Select Windows 2003 Enterprise, and click OK.
20. Change the template name for the AMT Web Server Certificate. We used AMT Web Server
Certificate Template.
21. On the General tab, choose the option Publish Certificate in Active Directory.
22. On the Subject Name tab, select Supply in the request.
23. On the Security tab, ensure Domain Admins and Enterprise Admins have Enroll permissions.
24. To close the template properties, click OK.
25. In Certification Authority, navigate to Certificate Templates.
26. For both the AMT Provisioning Template, and the AMT Web Server Certificate Template, repeat the
following steps:
27. Right-click the central panel, and select New→Certificate Template to Issue.
28. Select the AMT Provisioning Template.
29. Click OK.
30. Log into the management server as domainadministrator.
31. Click Start→Run. Type mmc, and press Enter.
32. In the mmc console, click File→Add/Remove Snap-in…
33. Select Certificates, and click Add. Select Computer account. Click Next.
34. Select Local computer, and click Finish.
35. Click OK.
36. Expand Certificates (Local Computer)→Personal→Certificates.
37. In the right panel, click More Actions→All Tasks→Request a new certificate…
38. Click Next.
39. Accept the defaults, and click Next.
40. Select the new AMT Provisioning certificate. Click Enroll.
41. Click File→Add/Remove Snap-in…
42. Select Certificates, and click Add. Select Computer account. Click Next.
43. Select My user account, and click Finish.
44. Click OK.
45. Expand Certificates→Personal→Certificates.
46. In the right panel, click More Actions→All Tasks→Request a new certificate…
47. Click Next.
48. Accept the defaults, and click Next.
49. Select the new AMT Provisioning certificate. Click Enroll.
50. Click File→Add/Remove Snap-in…
51. Select Certificates, and click Add. Select My user account. Click Next.
52. Select Local computer, and click Finish.
53. Click OK.
54. Expand Certificates – Current User→Personal→Certificates.
55. From Certificates (Local)→Personal→Certificates, click and drag the certificate created using the AMT
Provisioning template into Certificates – Current User→Personal→Certificates.
56. Click Close.
Installing Intel Setup and Configuration Software (SCS) 9.1
1. Download IntelSCS_9.1.zip from downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=20921.
2. Extract the contents to C:\IntelSCS_9.1.
3. Browse to C:\IntelSCS_9.1\IntelSCS\RCS.
4. Run IntelSCSInstaller.exe.
5. At the Welcome screen, click Next.
6. Select I accept the terms of the license agreement, and click Next.
7. Check the Boxes for Remote Configuration Service (RCS), Database Mode, and Console.
8. Enter the credentials of the Domain account that will run the service. We used
test.local\administrator. Click Next.
9. Select db.test.local as the location for the SCS database. This information may populate automatically. Select
Windows Authentication, and click Next.
10. On the Create Intel SCS Database pop-up, click Create Database.
11. On the confirmation screen, click Close.
12. On the confirmation screen, leave the default Installation Folder, and click Install.
13. Once the installation is complete, click Next.
14. Click Finish.
Setting up AMT provisioning with Intel SCS Remote Configuration Service
Creating the configuration profile
1. On the management server, launch the Intel Setup and Configuration Console.
2. Click Profiles.
3. To construct a profile for deployment, click New.
4. For Profile Name, enter a description of the target clients. We used inteltest. Click OK.
5. On the Getting Started Screen, choose Configuration / Reconfiguration.
6. On the Optional Settings screen, choose the options Active Directory Integration, Access Control List (ACL),
and Transport Layer Security (TLS), and click Next.
7. On the AD Integration screen, browse for the OU created for the AMT managed devices. We used OU=AMT,
DC=test, DC=local. Check the box for Always use host name, and click Next.
8. On the Access Control List screen, click Add.
9. Select Active Directory User/Group. Click Browse.
10. Add Kerberos Admin, Domain Admins, or other administrative users groups. Click OK.
11. For Access Type, select Remote.
12. Choose the option for PT Administration. Click OK.
13. Click Next.
14. On the TLS screen, from the drop-down menu, select the Enterprise Certificate Authority, ca.test.local.
15. Select the Server Certificate Template to be used to generate certificates for the AMT devices. We selected
AMTWebServerCertificate. Click Next.
16. On the System Settings screen, choose the options Web UI, Serial Over LAN, IDE Redirection, and KVM
Redirection.
17. Select Use the following password for all systems. Enter the password for use after provisioning is complete.
We used P@ssw0rd
18. Click KVM Settings…
19. Enter the RFB Password for KVM sessions. We used P@ssw0rd
20. Enter the MEBX password. We used P@ssw0rd
21. Uncheck User Consent required before beginning KVM session, and click OK.
22. Choose the options Enable Intel AMT to respond to ping requests and Enable Fast Call for Help (within the
enterprise network).
23. To Edit IP and FQDN settings, click Set.
24. In the Network Settings window, select Use the following as the FQDN, and choose Primary DNS FQDN from
the drop-down menu.
25. Choose the option, the device and the OS will have the same FQDN (Shared FQDN).
26. Select Get the IP from the DHCP server.
27. Select Update the DNS directly or via DHCP option 81. Click OK.
28. Click Next.
29. Click Finish.
Configuring the clients
Repeat these steps for each client.
Reserving an IP address in DHCP
1. On the Domain Controller, run dhcpmgmt.msc.
2. Expand FQDN→IPv4→Scope, and click Reservations.
3. Click More Actions, and click New Reservation.
4. For Reservation Name, enter the host name of the target client.
5. Enter an IP address to reserve.
6. Enter the MAC address of the target client’s Ethernet port.
7. Click Add.
Configuring policy on the target client
1. Log onto the target client using domain\administrator.
2. Download and apply applicable driver packages from the manufacturer’s Web site.
3. Open Windows Firewall with Advanced Security.
4. Click Firewall Properties.
5. On the Domain Profile, Private Profile, and Public Profile tabs, set the Firewall state to Off. Click OK.
6. Set the host name and IP of each virtual machine as described above.
7. Run lusrmgr.msc.
8. Select Groups.
9. Right-click Administrators, and click Properties.
10. Click Add.
11. Select Object Types, check the box for Computers, and click OK.
12. Disable the wireless adapter.
Configuring the Configuration Manager Client
1. On the management server, navigate to the Program Files\Microsoft Configuration Manager folder.
2. Copy the Client folder to the target client.
3. On the target client, run ccmsetup.exe. This task will run in the background and will take a few minutes to
complete.
4. In Control Panel, open Configuration Manager.
5. On the Site tab, click Configure Settings.
6. For Currently assigned to site code, enter PTL, and click Apply.
7. In the Actions panel, run the User Policy Retrieval & Evaluation Cycle and the Machine Policy Retrieval &
Evaluation Cycle.
8. After a few minutes, the Actions panel will populate with more tasks. Run each one of the tasks.
Adding the enterprise certificate authority to the AMT trusted root certification authorities
1. Power on the target client.
2. During POST, press CTRL-P to enter the Intel Management Engine BIOS settings.
3. When prompted for a password, type the default password admin
4. Provide and confirm a new password. We used P@ssw0rd
5. Select Intel AMT Configuration. Press Enter.
6. Select SOL/IDER/KVM.
7. All features should be Enabled. To exit the menu, press Esc.
8. Select User Consent.
9. Change User Opt-in to None.
10. Change Opt-in Configurable from Remote IT to Disabled. To exit the menu, press Esc.
11. Change Password Policy to During Setup and Configuration.
12. Select Network Setup.
13. Select Intel ME Network Name Settings.
14. For Host Name, use the same host name used for the operating system.
15. For Domain Name, enter the domain name. We used test.local
16. For Shared/Dedicated FQDN, select Shared.
17. For Dynamic DNS Update, select Enabled. Press Esc to exit the menu.
18. Select Remote Setup and Configuration. Press Enter.
19. On Provisioning Server IPv4/IPv6, enter the IP address of the system center server. We used
192.168.1.50
20. For Provisioning Server FQDN, enter the FQDN of the management server. We entered cm.test.local.
For port number we used 9971
21. Select TLS PKI.
22. Select PKI DNS Suffix, and type the FQDN suffix. We used test.local. Press Enter.
23. Select Manage Hashes.
24. To add a new hash, press Insert.
25. Enter a descriptive name for the Enterprise Certificate Authority. We used test.local CA
26. Press Enter.
27. Following the syntax example provided in the prompt, enter the 40-character thumbprint previously copied
from the Enterprise CA root certificate. Press Enter.
28. To set the hash certificate as active, press Y. test.local CA will appear in the list of trusted root authorities
and should be active.
29. To return to the AMT Configuration Menu, press Esc.
30. Select Activate Network Access. To confirm, press Y.
31. To exit, press Esc until prompted. To confirm exit, press Y.
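Step 27 asks for the 40-character thumbprint of the Enterprise CA root certificate. That thumbprint is the SHA-1 hash of the DER-encoded certificate, so it can be computed from an exported copy rather than retyped from the Certificates console. A minimal sketch (the file name is hypothetical):

```python
import hashlib

def cert_thumbprint(der_path):
    """Return the 40-character SHA-1 thumbprint of a DER-encoded certificate,
    in the uppercase hex form the MEBx hash entry expects."""
    with open(der_path, "rb") as f:
        der = f.read()
    return hashlib.sha1(der).hexdigest().upper()

# Example, assuming the root certificate was exported as DER:
# print(cert_thumbprint("enterprise_ca_root.cer"))
```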
Executing the remote configuration script on the AMT managed client
1. Log into the AMT managed target client as domain\administrator.
2. Copy the Configurator folder from the SCS_9.1 directory located on the management server to C: on the
local host.
3. Open a command prompt as administrator.
4. Type cd C:\Configurator and press Enter.
5. Execute the following command:
ACUConfig.exe /lowsecurity /output console /verbose ConfigViaRCSOnly cm.test.local inteltest
6. The Configurator utility will contact the Remote Configuration Service and apply the settings configured in
the inteltest profile.
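When provisioning a fleet of clients, the ACUConfig invocation in step 5 can be wrapped in a script. A sketch that mirrors that command line (the wrapper function and its return-code handling are assumptions, not part of Intel SCS):

```python
import subprocess

def build_acuconfig_command(rcs_host, profile,
                            exe=r"C:\Configurator\ACUConfig.exe"):
    """Assemble the ACUConfig.exe command line used in step 5 above."""
    return [exe, "/lowsecurity", "/output", "console", "/verbose",
            "ConfigViaRCSOnly", rcs_host, profile]

def configure_via_rcs(rcs_host, profile):
    """Invoke the Configurator against the Remote Configuration Service
    (Windows only); returning the exit code is an assumption."""
    cmd = build_acuconfig_command(rcs_host, profile)
    return subprocess.run(cmd, capture_output=True, text=True).returncode

# Example matching the steps above:
# configure_via_rcs("cm.test.local", "inteltest")
```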
Setting up AMT discovery in SCCM
1. On the management server, open the SCCM Management Console. Locate the target client in the Devices
panel.
2. Right-click the headings bar in the Devices panel, and check the entries for AMT Status and AMT Version.
3. Right-click the target client, and select Manage Out of BandDiscover AMT Status. Click OK.
4. In the Home menu at the top of the panel, click Refresh.
5. The Client, Site Code, and Client Activity fields will populate. The AMT Status will change to Externally
provisioned.
Measuring the time to open a KVM session with the All-in-One PC and Mini Desktop
1. On the management server, from the Windows taskbar, open Dell Command | Intel vPro Out of Band and
start the timer.
2. Select Operations.
3. Select KVM Connect.
4. On the KVM Connect screen, select the target client, and click Connect.
5. The KVM window will appear and connect automatically. When the login screen is visible, stop the
timer.
6. Complete steps 1 through 5 two more times, and report the median of the three runs.
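The three-run median used in step 6 can be expressed as a small timing harness. A sketch (the `action` callable stands in for steps 1 through 5):

```python
import statistics
import time

def timed_runs(action, runs=3):
    """Time `action` for the given number of runs and return the median,
    mirroring the three-run median reported in the steps above."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        action()
        durations.append(time.perf_counter() - start)
    return statistics.median(durations)
```

The median is preferred over the mean here because a single slow outlier run does not skew the reported result.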
APPENDIX E – BENCHMARK INFORMATION
About HDXPRT 2012
The High Definition Experience & Performance Ratings Test (HDXPRT) 2012 is a benchmark that evaluates the
capabilities of PCs in consumer digital media uses, divided into the following categories:
Media Organizer
Media Creator
Photo Blogger
Video Producer
Music Maker
For more information on HDXPRT 2012, see
principledtechnologies.com/benchmarkxprt/whitepapers/2012/HDXPRT_2012_White_Paper.pdf.
About BAPCo SYSmark 2014
According to BAPCo, “SYSmark 2014 is an application-based benchmark that reflects usage patterns of business
users in the areas of office productivity, media creation, and data/financial analysis.” The benchmark features notable
applications from these fields. SYSmark 2014 generates a score from the times the tested system takes to complete each
individual operation in each scenario. For more information on this benchmark, see bapco.com/products/sysmark-2014.
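BAPCo does not publish its exact formula, but time-based benchmark scores of this kind generally normalize each operation's time against a calibration system and combine the ratios with a geometric mean, so that faster systems score higher. An illustrative sketch only, not BAPCo's actual computation:

```python
import math

def scenario_rating(times, reference_times):
    """Illustrative ratings-style score: normalize each operation's time to a
    reference system and combine with a geometric mean (higher is better).
    This mirrors the general shape of time-based scoring, not SYSmark's
    published methodology."""
    ratios = [ref / t for t, ref in zip(times, reference_times)]
    return 100 * math.exp(sum(map(math.log, ratios)) / len(ratios))
```

A system that completes every operation in half the reference time would score 200 under this sketch.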
About Futuremark 3DMark (2013) v1.4.778
Futuremark 3DMark v1.4.778 is a collection of benchmarks designed to rate the graphics performance of
smartphones, tablets, notebooks, and desktops, up to high-performance gaming PCs. 3DMark includes benchmarks
designed for Windows, Apple®, and Android™ devices. The Ice Storm portion of 3DMark includes 720p graphics tests to
measure GPU performance and a physics test to stress CPU performance. Ice Storm is available for Windows, iOS, and
Android smartphones and tablets. It uses DirectX® 11 feature level 9 on Windows and OpenGL® ES 2.0 on iOS and
Android systems. Ice Storm Extreme increases the rendering resolution to 1080p and uses higher-quality textures and
post-processing effects in the graphics tests.
For more information on 3DMark, see www.futuremark.com/benchmarks/3dmark.
ABOUT PRINCIPLED TECHNOLOGIES
Principled Technologies, Inc.
1007 Slater Road, Suite 300
Durham, NC, 27703
www.principledtechnologies.com
We provide industry-leading technology assessment and fact-based
marketing services. We bring to every assignment extensive experience
with and expertise in all aspects of technology testing and analysis, from
researching new technologies, to developing new methodologies, to
testing with existing and new tools.
When the assessment is complete, we know how to present the results to
a broad range of target audiences. We provide our clients with the
materials they need, from market-focused data to use in their own
collateral to custom sales aids, such as test reports, performance
assessments, and white papers. Every document reflects the results of
our trusted independent analysis.
We provide customized services that focus on our clients’ individual
requirements. Whether the technology involves hardware, software, Web
sites, or services, we offer the experience, expertise, and tools to help our
clients assess how it will fare against its competition, its performance, its
market readiness, and its quality and reliability.
Our founders, Mark L. Van Name and Bill Catchings, have worked
together in technology assessment for over 20 years. As journalists, they
published over a thousand articles on a wide array of technology subjects.
They created and led the Ziff-Davis Benchmark Operation, which
developed such industry-standard benchmarks as Ziff Davis Media’s
Winstone and WebBench. They founded and led eTesting Labs, and after
the acquisition of that company by Lionbridge Technologies were the
head and CTO of VeriTest.
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
Disclaimer of Warranties; Limitation of Liability:
PRINCIPLED TECHNOLOGIES, INC. HAS MADE REASONABLE EFFORTS TO ENSURE THE ACCURACY AND VALIDITY OF ITS TESTING, HOWEVER,
PRINCIPLED TECHNOLOGIES, INC. SPECIFICALLY DISCLAIMS ANY WARRANTY, EXPRESSED OR IMPLIED, RELATING TO THE TEST RESULTS AND
ANALYSIS, THEIR ACCURACY, COMPLETENESS OR QUALITY, INCLUDING ANY IMPLIED WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE.
ALL PERSONS OR ENTITIES RELYING ON THE RESULTS OF ANY TESTING DO SO AT THEIR OWN RISK, AND AGREE THAT PRINCIPLED
TECHNOLOGIES, INC., ITS EMPLOYEES AND ITS SUBCONTRACTORS SHALL HAVE NO LIABILITY WHATSOEVER FROM ANY CLAIM OF LOSS OR
DAMAGE ON ACCOUNT OF ANY ALLEGED ERROR OR DEFECT IN ANY TESTING PROCEDURE OR RESULT.
IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC. BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH ITS TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES,
INC.’S LIABILITY, INCLUDING FOR DIRECT DAMAGES, EXCEED THE AMOUNTS PAID IN CONNECTION WITH PRINCIPLED TECHNOLOGIES, INC.’S
TESTING. CUSTOMER’S SOLE AND EXCLUSIVE REMEDIES ARE AS SET FORTH HEREIN.