In today’s highly competitive manufacturing environment, success requires a constant focus on cost cutting while maintaining production throughput and employee safety. For manufacturers, this includes finding new ways to lower operating expenses, a large part of which are the purchase and support of industrial systems. A significant cost stems from the inefficiencies created by the growing numbers and varieties of systems on the factory floor.
This white paper describes how virtualization technology running on multi-core Intel Core vPro processors can be used in industrial automation to consolidate computing devices for motion control, programmable logic control (PLC), human machine interface (HMI), machine vision, data acquisition, functional safety and so forth. This approach can help manufacturers reduce cost and complexity on the factory floor.
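The consolidation idea above can be sketched in miniature: give each workload that used to run on its own industrial PC a dedicated slice of one multi-core host, so real-time tasks such as motion control stay isolated from the HMI. This is a hypothetical illustration, not the white paper's implementation; the workload names and core counts are assumptions.

```python
# Hypothetical sketch: statically partition the cores of one multi-core host
# among workloads that previously ran on separate industrial PCs.
RT = "real-time"   # e.g. motion control, PLC logic
GP = "general"     # e.g. HMI, machine vision

WORKLOADS = {
    "motion_control": (RT, 1),
    "plc":            (RT, 1),
    "hmi":            (GP, 1),
    "machine_vision": (GP, 1),
}

def partition_cores(workloads, total_cores):
    """Assign each consolidated workload its own dedicated cores.

    No core is shared, mirroring the isolation a hypervisor would
    enforce with vCPU pinning on a multi-core processor.
    """
    plan, next_core = {}, 0
    for name, (_cls, count) in workloads.items():
        cores = list(range(next_core, next_core + count))
        if cores[-1] >= total_cores:
            raise ValueError("not enough cores to consolidate all workloads")
        plan[name] = cores
        next_core += count
    return plan

plan = partition_cores(WORKLOADS, total_cores=4)
```

In a real deployment the hypervisor, not application code, enforces this pinning; the sketch only shows why consolidation needs at least as many cores as there are isolated workloads.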
Intel Gateway Solutions for the Internet of Things | Intel IoT
Intel Gateway Solutions for the Internet of Things (IoT) is a family of platforms that enables companies to seamlessly interconnect industrial infrastructure devices and secure data flow between devices and the cloud. Intel Gateway Solutions for IoT enables customers to securely aggregate, share, and filter data for analysis.
Intel IT Experts Tour Cyber Security - Matthew Rosenquist 2013 | Matthew Rosenquist
The document discusses cyber security trends, solutions from Intel and McAfee, and opportunities for hardware-enhanced security. It notes that the threat landscape and attack surfaces are growing in complexity. Intel and McAfee aim to deliver security at all levels including the silicon, operating system, virtualized environments, and applications. Examples are given of how hardware features can accelerate encryption and provide more robust protection for devices, servers, and cloud environments against viruses, malware, and advanced threats.
Accelerating Our Path to Multi Platform Benefits | Intel IT Center
This is a time of tremendous change for IT organizations everywhere.
Intel IT realized we need to enable enterprise applications to support the devices of today (touch) and to develop applications that are ready for the next big thing (voice and gesture). We’ve kicked off a new initiative focused on accelerating delivery of applications to our business partners and employees on their mobile platform(s) of choice.
Disrupting the Data Center: Unleashing the Digital Services Economy | Intel IT Center
The document discusses disrupting the traditional data center model to enable the digital services economy. It argues that the legacy data center model is insufficient due to limitations like siloed resources and a lack of resiliency. It promotes "breaking the box" by moving to a software-defined infrastructure with optimized service delivery and silicon customization. This involves pooling resources, exposing capabilities, and adopting open standards to optimize workloads and better meet customers' unique needs.
Infographic: SDN, BYOD and Cloud! Oh my! | SolarWinds
This document discusses the growing complexity of networks due to factors like BYOD, security concerns, virtualization, and cloud computing. IT professionals ranked the top drivers of complexity as smarter equipment, virtualization, security issues, BYOD, mobility, and public cloud/SaaS. They say the skills needed now are business understanding, information security, and cloud/SaaS knowledge, while in five years network engineering will matter most. Companies can help IT manage complexity by training staff, adding management tools, prioritizing resources, and increasing budgets and headcount. For training on limited budgets, IT recommends peer learning, vendor learning, online resources, and distance learning.
CASE STUDY
4th Generation Intel® Core™ i5 and i7 vPro™ Processors
Enterprise Security
McAfee ePolicy Orchestrator Deep Command* with Intel® Active Management Technology opens up new enterprise security revenue streams for COMGUARD
On October 26th, C/D/H presented Windows Intune to a group of IT professionals at TechKNOWLEDGEy 2011, giving attendees an overview of Intune and how it can simplify PC management.
View the slide deck to learn the benefits of Intune, whether it’s right for your business, pricing basics, and how to take advantage of a free trial.
For more information on this or other topics, visit our blog at www.cdhtalkstech.com.
Sumo Logic IT Operations Solutions Brief | Manish Kalra
Sumo Logic is a cloud-native service that provides end-to-end visibility into IT infrastructures and applications through a single unified view. It ingests machine data from across the IT environment to deliver insights into performance, availability, configurations, capacity, and security. Sumo Logic helps eliminate monitoring silos, discover issues faster, and optimize resource utilization to improve system uptime and reliability.
Software Defined Network Based Internet of Things Ecosystem for Shopfloor | IRJET Journal
This document proposes a software defined network (SDN) based architecture for internet of things (IoT) devices on a manufacturing shop floor. It aims to achieve high availability, security, and real-time data transfer. The SDN architecture separates the control plane from the data plane, allowing for centralized, programmable network management. IoT sensors, actuators, and mobile devices are integrated with machines to collect and transmit production data. The proposed system uses SDN to securely connect IoT devices to cloud servers via an IoT controller, addressing challenges around IoT security, scalability, and data handling on the manufacturing network.
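The control/data-plane split the summary describes can be sketched minimally: a central controller holds the programmable network view and pushes flow rules to switches, which do nothing but match packets against their installed tables. The class and field names below are illustrative assumptions, not taken from the paper or any real SDN stack.

```python
class Switch:
    """Data plane: matches packets against rules installed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination -> output port

    def forward(self, packet):
        # Traffic with no matching rule is punted to the controller.
        return self.flow_table.get(packet["dst"], "controller")

class Controller:
    """Control plane: central, programmable view of the whole shop floor."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def install_rule(self, dst, port):
        # One central decision is pushed down to every data-plane element.
        for sw in self.switches:
            sw.flow_table[dst] = port

ctrl = Controller()
edge = Switch("shopfloor-edge")
ctrl.register(edge)
ctrl.install_rule(dst="iot-gateway", port="uplink-1")
```

The payoff of this separation is that policy for every IoT device on the shop floor is changed in one place (the controller) rather than switch by switch.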
Presentation of ConnectorIO's building technical systems integration expertise, services, and the ConnectorIO multi-protocol gateway with its Industrial IoT and Building Management & Automation System cloud platform.
More information:
🔹 Our website: https://connectorio.com
Social:
◼️ LinkedIn: https://www.linkedin.com/company/12662346/
◼️ Facebook: https://www.facebook.com/connectorio
◼️ Twitter: https://twitter.com/connectorio
Contact us:
🔹 https://connectorio.com/contact/
The TDi Defense Foundation is an integrated platform that helps protect organizations from insider threats and external breaches. It establishes control over privileged interfaces to securely monitor, log, and gain visibility into infrastructure components. Key features include role-based security for interfaces, event detection and logging, and remote access. It uses various protocols to connect to infrastructure data sources, and intelligent modules to provide context for cryptic events.
Key Security Insights: Examining 2014 to predict emerging threats | Dell World
Cyber-crime is alive and well on the global stage and will remain pervasive as long as organizations delay the defense measures needed to stop threats from slipping through the cracks. In this session, we’ll present the most common attacks Dell SonicWALL has observed since 2014 and the ways we expect emerging threats to affect small and medium businesses, as well as large enterprises, going forward. This session is perfect for anybody interested in the current state of security.
Software Development Tools for Intel® IoT Platforms | Intel® Software
This talk familiarizes participants with the benefits of using the Intel® software development tools and libraries for developing end-to-end IoT solutions.
1) The document discusses developing future-proof IoT using composable semantics, security, functional safety, and quality of service. It covers developing autonomic agents and validating them.
2) Formal model-based systems engineering is proposed using an object-oriented analysis and design process, the SysML modeling language, and modeling tools. This allows generating all views, reports, code and tests from a single system model.
3) Data is proposed to become the primary driver, replacing existing internet and computing approaches. Applications would be stateless and controllable software and hardware layers exposed for control via autonomous agents and smart contracts.
This document defines cloud computing and provides a taxonomy for cloud service and deployment models. It describes the five essential characteristics of cloud computing as on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. It outlines three cloud service models - Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It also describes four deployment models for operating cloud services - Private cloud, Community cloud, Public cloud, and Hybrid cloud. The purpose is to establish a common framework for understanding and comparing cloud computing technologies and services.
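The taxonomy summarized above can be captured directly as data, mirroring the counts the document gives (five essential characteristics, three service models, four deployment models); this is just a restatement of the summary, not content from the original framework document.

```python
# NIST-style cloud computing taxonomy as summarized in the document above.
CLOUD_TAXONOMY = {
    "essential_characteristics": [
        "on-demand self-service",
        "broad network access",
        "resource pooling",
        "rapid elasticity",
        "measured service",
    ],
    "service_models": ["SaaS", "PaaS", "IaaS"],
    "deployment_models": ["Private", "Community", "Public", "Hybrid"],
}
```

Having the taxonomy as a structure like this makes it easy to compare offerings: classify each service by one entry per axis and any two clouds become directly comparable.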
Hosted desktop and evolution of hardware server technologies - 2015 edition | Ahmed Sallam
Three key server hardware technologies are shaping the future of desktop virtualization:
1. Hardware-Assisted System Virtualization
2. Hardware-Assisted System Security
3. Hardware Server Physicalization
This paper covers all three.
Government Webinar: Low-Cost Log, Network Configuration, and IT Monitoring So... | SolarWinds
In this webinar, our SolarWinds sales engineer discussed the basic network management tools you need to operate and troubleshoot your network and help improve security. He also reviewed the key factors to consider, where to start, and provided details on some of the tools our government customers need today.
During this interactive webinar, attendees learned how:
SolarWinds® ipMonitor® provides visibility into availability and performance of your network, servers, and applications
Kiwi Syslog® Server centralizes and simplifies log message management across network devices and servers
Kiwi CatTools® provides powerful network automation and configuration management
Taming Multi-Cloud, Hybrid Cloud, Docker, and Kubernetes | SolarWinds
This document discusses best practices for optimizing log management in modern, complex IT environments. It notes that the rise of technologies like microservices, containers, and serverless computing has greatly increased infrastructure complexity. It recommends taking a unified approach to log management, monitoring both logs and metrics across bare metal, virtual machines, Kubernetes, and serverless platforms. This will help increase visibility, simplify instrumentation for developers, and provide a basis for continuous compliance.
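The "unified approach" recommended above usually starts with normalization: events from each platform are mapped into one common record schema so a single pipeline can search and alert across all of them. The sketch below is a hypothetical illustration; the source names and field names are assumptions, not from the document.

```python
def normalize(source, raw):
    """Map a platform-specific event into one common log record schema."""
    if source == "syslog":                      # VM / bare-metal host log line
        host, msg = raw.split(" ", 1)
        return {"platform": "vm", "origin": host, "message": msg}
    if source == "kubernetes":                  # container log event
        return {
            "platform": "k8s",
            "origin": f'{raw["namespace"]}/{raw["pod"]}',
            "message": raw["log"],
        }
    if source == "serverless":                  # function invocation output
        return {"platform": "faas", "origin": raw["function"],
                "message": raw["output"]}
    raise ValueError(f"unknown source: {source}")
```

Because every record ends up with the same keys, one query or one compliance rule applies uniformly from bare metal to serverless, which is exactly the visibility gain the document describes.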
Unlock Hidden Potential through Big Data and Analytics | IT@Intel
Kim Stevenson, Intel's Chief Information Officer, discussed how big data and analytics are driving innovation through increased data volumes, lower computing costs, and new tools. Big data allows for improved customer experiences, more intelligent systems, and richer data analysis. Corporations are using analytics to increase efficiency, assist campaigns, and reduce costs. Intel's data platform aims to enable massive computing power, build an open ecosystem, and reduce complexity to fuel data-driven innovation. Stevenson highlighted opportunities from traffic optimization to personalized healthcare and ways analytics can provide operational efficiency, revenue growth, and cost reduction.
This document summarizes new features and updates for QualysGuard modules in 2010, including the Vulnerability Management, Policy Compliance, PCI Compliance, Web Application Scanning, and Malware Detection modules. Key updates include new vulnerability discovery methods, Microsoft patch reporting, custom port scanning, Oracle OS checks, file integrity monitoring, and a free malware detection service. It also introduces the new Qualys GO SECURE service and SECURE seal to help merchants validate their website security.
Windows Intune provides administrators with tools to manage devices enrolled in the service. It includes dashboards to view usage statistics, device details, critical update status, and mobile device management. Administrators can check health, update status, and security details for each device. Windows Intune aims to simplify management of updates and policies across all device types, using alerts and role-based access to efficiently address issues.
Government and Education Webinar: Conquering Remote Work IT Challenges | SolarWinds
In this webinar, we discussed how SolarWinds® solutions can help you overcome remote work IT challenges.
During this interactive webinar, attendees learned how to:
Improve network monitoring, configuration, and VPN management with SolarWinds Network Performance Monitor (NPM) and SolarWinds Network Configuration Manager (NCM)
Monitor the server and application performance of your collaboration systems with SolarWinds Server & Application Monitor (SAM)
Utilize configuration management to efficiently deploy upgrades and improve compliance with NCM
Support users and systems remotely with tools such as SolarWinds Dameware® Remote Support (DRS) and SolarWinds Dameware Remote Everywhere (DRE)
Improve IT request management, ticket tracking, and asset management with tools like SolarWinds Web Help Desk® and SolarWinds Service Desk
Automate provisioning and permissions management with SolarWinds Access Rights Manager™ (ARM)
Locate users and devices on your network with SolarWinds User Device Tracker (UDT)
SolarWinds Product Management Technical Drilldown on Deep Packet Inspection a... | SolarWinds
In this webinar, Group Product Manager Rob Hock and Network Management Head Geek Leon Adato will show how SolarWinds deep packet inspection and analysis and the new Quality of Experience dashboard within Network Performance Monitor (NPM) version 11 can help you solve both common and complex application and network performance issues.
This document discusses domain-based data security in cloud computing. It defines domains as a way to partition data for security, notification, and reporting purposes, with data kept highly secure within each domain. It then discusses how distributing data across domains by region can improve security and access, analyzes security issues in cloud environments, and covers authentication, encryption, and other techniques used to secure data in the cloud. According to the document, segregating data by domain allows faster access, easier maintenance, and higher security.
Dhana Raj Markandu: Control System Cybersecurity - Challenges in a New Energy... | Dhana Raj Markandu
Conference on Electricity Power Supply Industry (CEPSI) 2012, Bali, Indonesia
(Accepted for presentation but not published due to the author's unforeseen withdrawal)
Are your industrial networks protected...Ethernet Security Firewalls | Schneider Electric
Security incidents rise at an alarming rate each year. As the complexity of the threats increases, so do the security measures required to protect industrial networks. Plant operations personnel need to understand security basics as plant processes integrate with outside networks. This paper reviews network security fundamentals, with an emphasis on firewalls specific to industry applications. The various types of firewalls are defined, explained, and compared.
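At its simplest, the kind of firewall the paper compares is a stateless packet filter: an ordered rule list evaluated first-match-wins, with deny-by-default. The sketch below is a hypothetical illustration (the zone names, port, and rules are assumptions); real industrial firewalls add stateful and protocol-aware inspection (e.g. of Modbus/TCP function codes) on top of this basic model.

```python
# Minimal stateless packet-filter sketch; rules are illustrative only.
RULES = [
    # (src zone, dst zone, dst port or None for any, action) -- first match wins
    ("plant",  "control", 502,  "allow"),   # Modbus/TCP from plant HMIs
    ("office", "control", None, "deny"),    # office LAN never reaches PLCs
]

def filter_packet(src_zone, dst_zone, dst_port, default="deny"):
    """Evaluate a packet against the ordered rule list, deny by default."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if (rule_src == src_zone and rule_dst == dst_zone
                and rule_port in (None, dst_port)):
            return action
    return default  # deny-by-default posture for unmatched traffic
```

The deny-by-default fallback is the key design choice: anything not explicitly permitted between zones is dropped, which is the posture generally recommended for control networks.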
Industrial Control System Cyber Security and the Employment of Industrial Fir... | Schneider Electric
This presentation provides an overview of industrial control systems and typical system topologies, identifies typical threats and vulnerabilities to these systems, and provides recommended security countermeasures to mitigate the risks.
This document summarizes a presentation given by Bianca Jiang and Ginny Ghezzo at the O'Reilly Software Architecture Conference on March 18, 2015 about re-architecting maintenance for continuous delivery. The presentation discussed the challenges faced with traditional maintenance approaches and how adopting DevOps practices like automation, continuous delivery and testing can help improve maintenance by providing quicker, higher quality updates on a more predictable schedule with lower costs. Key aspects covered included implementing maintenance as incremental patches in an automated pipeline, generating documentation, and ensuring quality through continuous testing. The goal is to make maintenance a seamless process for customers.
The New Generation of IT Optimization and Consolidation Platforms | Bob Rhubart
This document discusses Oracle's enterprise architecture approach and solutions. It begins with an overview of Oracle's results-driven enterprise architecture methodology. It then provides examples of enterprise architecture case studies involving IT optimization through portfolio rationalization, data center consolidation, and implementing shared services and cloud computing. The document discusses Oracle's enterprise architecture framework and process, and how Oracle guides customers' enterprise architecture efforts through strategic roadmapping and proven best practices.
Software Defined Network Based Internet on thing Eco System for ShopfloorIRJET Journal
This document proposes a software defined network (SDN) based architecture for internet of things (IoT) devices on a manufacturing shop floor. It aims to achieve high availability, security, and real-time data transfer. The SDN architecture separates the control plane from the data plane, allowing for centralized, programmable network management. IoT sensors, actuators, and mobile devices are integrated with machines to collect and transmit production data. The proposed system uses SDN to securely connect IoT devices to cloud servers via an IoT controller, addressing challenges around IoT security, scalability, and data handling on the manufacturing network.
Presentation of Connectorio's building's technical systems integrations expertise, services, and the ConnectorIO multi-protocol gateway with Industrial IoT and Building Management & Automation System Cloud platform.
More information:
🔹 Our website: https://connectorio.com
Social:
◼️ Linkedin: https://www.linkedin.com/company/12662346/
◼️ Facebook: https://www.facebook.com/connectorio
◼️ Twitter: https://twitter.com/connectorio
Contact us:
🔹 https://connectorio.com/contact/
The TDi Defense Foundation is an integrated platform that helps secure organizations from insider threats and external breaches. It establishes control over privileged interfaces to securely monitor, log, and gain visibility into infrastructure components. Key features include role-based security for interfaces, event detection and logging, and providing remote access. It uses various protocols to connect to infrastructure data sources and intelligent modules to provide context to cryptic events.
Key Security Insights: Examining 2014 to predict emerging threats Dell World
Cyber-crimes are alive and well on the global stage and will only continue to be pervasive as long as organizations prolong taking the necessary defense measures to stop threats from slipping through the cracks. In this session, we’ll present the most common attacks Dell SonicWALL observed since 2014 and the ways we expect emergent threats to affect small and medium businesses, as well as large enterprises moving forward. This session is perfect for anybody who is interested in learning more about the state of the union in security.
Software Development Tools for Intel® IoT PlatformsIntel® Software
This talk familiarizes participants with the benefits of using the Intel® software development tools and libraries for developing end-to-end IoT solutions.
1) The document discusses developing future proof IoT using composable semantics, security, functional safety, and quality of service. It covers developing autonomic agents and validating them.
2) Formal model-based systems engineering is proposed using an object-oriented analysis and design process, the SysML modeling language, and modeling tools. This allows generating all views, reports, code and tests from a single system model.
3) Data is proposed to become the primary driver, replacing existing internet and computing approaches. Applications would be stateless and controllable software and hardware layers exposed for control via autonomous agents and smart contracts.
This document defines cloud computing and provides a taxonomy for cloud service and deployment models. It describes the five essential characteristics of cloud computing as on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. It outlines three cloud service models - Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It also describes four deployment models for operating cloud services - Private cloud, Community cloud, Public cloud, and Hybrid cloud. The purpose is to establish a common framework for understanding and comparing cloud computing technologies and services.
Hosted desktop and evolution of hardware server technologies - 2015 editionAhmed Sallam
Three key server hardware technologies are shaping the future of Desktop Virtualization:
1. Hardware-Assisted System Virtualization.
2. Hardware-Assisted System Security
3. Hardware Servers Physicalization.
This paper covers the three of them.
Government Webinar: Low-Cost Log, Network Configuration, and IT Monitoring So...SolarWinds
In this webinar, our SolarWinds sales engineer discussed the basic network management tools you need to operate and troubleshoot your network and help improve security. He also reviewed the key factors to consider, where to start, and provided details on some of the tools our government customers need today.
During this interactive webinar, attendees learned how:
SolarWinds® ipMonitor® provides visibility into availability and performance of your network, servers, and applications
Kiwi Syslog® Server centralizes and simplifies log message management across network devices and servers
Kiwi CatTools® provides powerful network automation and configuration management
Taming Multi-Cloud, Hybrid Cloud, Docker, and KubernetesSolarWinds
This document discusses best practices for optimizing log management in modern, complex IT environments. It notes the rise of technologies like microservices, containers, and serverless computing have greatly increased infrastructure complexity. It recommends taking a unified approach to log management, monitoring both logs and metrics across bare metal, virtual machines, Kubernetes, and serverless platforms. This will help increase visibility, simplify instrumentation for developers, and provide a basis for continuous compliance.
Unlock Hidden Potential through Big Data and AnalyticsIT@Intel
Kim Stevenson, Intel's Chief Information Officer, discussed how big data and analytics are driving innovation through increased data volumes, lower computing costs, and new tools. Big data allows for improved customer experiences, more intelligent systems, and richer data analysis. Corporations are using analytics to increase efficiency, assist campaigns, and reduce costs. Intel's data platform aims to enable massive computing power, build an open ecosystem, and reduce complexity to fuel data-driven innovation. Stevenson highlighted opportunities from traffic optimization to personalized healthcare and ways analytics can provide operational efficiency, revenue growth, and cost reduction.
This document summarizes new features and updates for QualysGuard modules in 2010, including the Vulnerability Management, Policy Compliance, PCI Compliance, Web Application Scanning, and Malware Detection modules. Key updates include new vulnerability discovery methods, Microsoft patch reporting, custom port scanning, Oracle OS checks, file integrity monitoring, and a free malware detection service. It also introduces the new Qualys GO SECURE service and SECURE seal to help merchants validate their website security.
Windows Intune provides administrators tools to manage devices enrolled in the service. It includes dashboards to view usage statistics, device details, critical update status, and mobile device management. Administrators can check health, update status, security details for each device. Windows Intune aims to simplify management of updates and policies across all device types. It uses alerts and role-based access to efficiently address issues.
Government and Education Webinar: Conquering Remote Work IT Challenges SolarWinds
In this webinar, we discussed how SolarWinds® solutions can help you overcome remote work IT challenges.
During this interactive webinar, attendees learned about:
Improve network monitoring, configuration, and VPN management with SolarWinds Network Performance Monitor (NPM) and SolarWinds Network Configuration Manager (NCM)
Monitor the server and application performance of your collaboration systems with SolarWinds Server & Application Monitor (SAM)
Utilize configuration management to efficiently deploy upgrades and improve compliance with NCM
Support users and systems remotely with tools such as SolarWinds Dameware® Remote Support (DRS) and SolarWinds Dameware Remote Everywhere (DRE)
Improve IT request management, ticket tracking, and asset management with tools like SolarWinds Web Help Desk® and SolarWinds Service Desk
Automate provisioning and permissions management with SolarWinds Access Rights Manager™ (ARM)
Locate users and devices on your network with SolarWinds User Device Tracker (UDT)
SolarWinds Product Management Technical Drilldown on Deep Packet Inspection a...SolarWinds
In this webinar, Group Product Manager Rob Hock and Network Management Head Geek Leon Adato will show how SolarWinds deep packet inspection and analysis and the new Quality of Experience dashboard within Network Performance Monitor (NPM) version 11 can help you solve both common and complex application and network performance issues.
This document discusses domain data security on cloud computing. It begins by defining domains as a way to partition data for security, notifications, and reporting purposes. Data is highly secure within a domain. The document then discusses how distributing data across different domains based on regions can improve security and access. It analyzes security issues in cloud environments and discusses authentication, encryption, and other techniques used for data security in cloud computing. Segregating data by domain allows for faster access, easier maintenance, and higher security according to the document.
Dhana Raj Markandu: Control System Cybersecurity - Challenges in a New Energy...Dhana Raj Markandu
Conference on Electricity Power Supply Industry (CEPSI) 2012, Bali, Indonesia
(Accepted for presentation but not published due to unforeseen withdrawal of author)
Are your industrial networks protected...Ethernet Security Firewalls Schneider Electric
Security incidents rise at an alarming rate each year. As the complexity of the threats increases, so do the security measures required to protect industrial networks. Plant operations personnel need to understand security basics as plant processes integrate with outside networks. This paper reviews network security fundamentals, with an emphasis on firewalls specific to industry applications. The variety of firewalls is defined, explained, and compared.
Industrial Control System Cyber Security and the Employment of Industrial Fir...Schneider Electric
This presentation provides an overview of industrial control systems and typical system topologies, identifies typical threats and vulnerabilities to these systems, and provides recommended security countermeasures to mitigate the risks.
This document summarizes a presentation given by Bianca Jiang and Ginny Ghezzo at the O'Reilly Software Architecture Conference on March 18, 2015 about re-architecting maintenance for continuous delivery. The presentation discussed the challenges faced with traditional maintenance approaches and how adopting DevOps practices like automation, continuous delivery and testing can help improve maintenance by providing quicker, higher quality updates on a more predictable schedule with lower costs. Key aspects covered included implementing maintenance as incremental patches in an automated pipeline, generating documentation, and ensuring quality through continuous testing. The goal is to make maintenance a seamless process for customers.
The New Generation of IT Optimization and Consolidation Platforms - Bob Rhubart
This document discusses Oracle's enterprise architecture approach and solutions. It begins with an overview of Oracle's results-driven enterprise architecture methodology. It then provides examples of enterprise architecture case studies involving IT optimization through portfolio rationalization, data center consolidation, and implementing shared services and cloud computing. The document discusses Oracle's enterprise architecture framework and process, and how Oracle guides customers' enterprise architecture efforts through strategic roadmapping and proven best practices.
These training materials are confidential and restricted for use only by Accenture employees who have attended Siebel training. The materials may only be used to help clients who are implementing Siebel. They cannot be used if the Accenture employee is involved in developing a competitive product to Siebel or for a Siebel competitor. The materials also cannot be provided to any third parties without Siebel's permission. If discussing Siebel with a client using these materials, Accenture must have a nondisclosure agreement in place.
Application Consolidation and Retirement - IBM Analytics
Originally Published: Feb 04, 2015
Multiple, disconnected systems or an outdated application infrastructure can negatively impact your business and increase your costs. Consolidating applications, retiring outdated databases and modernizing systems can streamline your infrastructure and free resources to focus on important new projects.
“Technical debt” refers to any quality issues within the implementation of an IT solution that hampers your ability to work with or evolve that solution. Technical debt is often thought of as a source code problem, but it also occurs in your user interface design, in your data sources, in your network architecture, and in many other places. This presentation explores disciplined agile strategies to avoid technical debt in the first place, to remove existing technical debt, and how to fund the removal of technical debt. Industry data regarding technical debt will be shared.
Reengineering involves improving existing software or business processes by making them more efficient, effective and adaptable to current business needs. It is an iterative process that involves reverse engineering the existing system, redesigning problematic areas, and forward engineering changes by implementing a redesigned prototype and refining it based on feedback. The goal is to create a system with improved functionality, performance, maintainability and alignment with current business goals and technologies.
Introduction To Server Virtualisation Planning And Implementing A Virtualisat... - Alan McSweeney
The document provides an overview of planning and implementing a server virtualization project. It discusses analyzing infrastructure needs, designing a virtualization platform using VMware, migrating physical servers to virtual machines, implementing backup and monitoring, and establishing ongoing management processes. The goal is to consolidate servers, improve flexibility, reduce costs, and ensure high availability through virtualization.
Infrastructure And Application Consolidation Analysis And Design - Alan McSweeney
This document summarizes an infrastructure and application consolidation analysis and design project. The objectives are to understand the existing IT landscape, identify consolidation options and costs, produce an optimized architecture design, and provide all information needed to understand if server virtualization will deliver benefits. The analysis will inventory servers and applications, define a virtualization architecture including disaster recovery, and produce an implementation plan and cost-benefit analysis to quantify savings from consolidating infrastructure. The deliverables will document findings and provide a roadmap for a virtualization implementation.
Software re-engineering is a process of examining and altering a software system to restructure it and improve maintainability. It involves sub-processes like reverse engineering, redocumentation, and data re-engineering. Software re-engineering is applicable when some subsystems require frequent maintenance and can be a cost-effective way to evolve legacy software systems. The key advantages are reduced risk compared to new development and lower costs than replacing the system entirely.
Embedded systems developers can reduce costs by consolidating multiple systems onto a single multicore processor hardware platform. Each CPU core can be dedicated to a real-time task, such as motion control or vision processing. This allows real-time and general-purpose operating systems to coexist on individual cores without performance penalties. A virtualization-enabled real-time OS is needed to provide hardware-enforced isolation between OS environments on different cores. Using virtual devices and shared memory, applications can then communicate across OS environments with low interrupt latency comparable to a single-core system.
Virtualization allows organizations to reduce hardware costs and improve efficiency by running multiple virtual machines on a single physical server. This allows applications to be isolated from one another while sharing common resources. Virtualization provides benefits like faster deployment times, reduced maintenance costs, increased availability, and better performance. While virtualization introduces dependencies on vendors, it provides clear returns on investment for testing environments through lower costs and faster setup times.
TCO is the assessment of all life time costs from owning ertain kind.pdf - nareshsonyericcson
TCO is the assessment of all lifetime costs of owning certain kinds of assets. TCO looks at three major types of costs: the initial cost to obtain hardware and software; operational expenses for the assets, such as installation and management; and indirect costs that affect the business, for instance the cost of system downtime from an outage.
Virtualization can reduce TCO in many ways. For example, hardware virtualization can reduce the cost associated with owning multiple physical servers by virtualizing them. Virtualization enables multiple operating systems to run on the same physical platform.
It is not unusual to achieve 10:1 virtual-to-physical machine consolidation. This means that ten server applications can run on a single machine that previously required ten physical computers, each providing a unique operating system and technical specification environment. Server utilization is optimized, and legacy software can keep its old OS configuration while new applications run in VMs on updated platforms.
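The savings implied by a 10:1 consolidation ratio can be sketched with simple TCO arithmetic. The cost figures below are hypothetical placeholders, not vendor quotes:

```python
# Rough TCO comparison: ten standalone servers versus one consolidated host.
# All dollar figures are invented for illustration.

def tco(servers, hw_cost, annual_opex, years):
    """Initial hardware/software cost plus operational expenses over the period."""
    return servers * (hw_cost + annual_opex * years)

physical = tco(servers=10, hw_cost=5_000, annual_opex=1_200, years=3)  # ten boxes
virtual = tco(servers=1, hw_cost=12_000, annual_opex=2_000, years=3)   # one larger host

print(f"physical: ${physical:,}  virtual: ${virtual:,}  saved: ${physical - virtual:,}")
# → physical: $86,000  virtual: $18,000  saved: $68,000
```

Indirect costs such as downtime are omitted here; a fuller model would add them to each scenario.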
Use of a VM enables rapid deployment by isolating the application in a known and controlled
environment. Unknown factors such as mixed libraries caused by numerous installs can be
eliminated. Severe crashes that required hours of reinstallation now take moments by simply
copying a virtual image.
As server workloads vary, virtualization allows virtual machines that are over-utilizing a server's resources to be moved to underutilized servers. This dynamic load balancing creates efficient utilization of server resources.
Disaster recovery is a critical component for IT, as system crashes can create huge economic
losses. Virtualization technology enables a virtual image on a machine to be instantly re-imaged
on another server if a machine failure occurs.
Multi-OS flexibility provides seamless transitions between different operating systems on a single machine, reducing desktop footprint and hardware expenditure.
Virtualization of systems helps prevent system crashes due to memory corruption caused by software such as device drivers. Intel Virtualization Technology for Directed I/O (VT-d) provides methods to better control system devices by defining an architecture for DMA and interrupt remapping, ensuring improved isolation of I/O resources for greater reliability, security, and availability.
This document provides an overview of virtualization, including:
- Virtualization allows running multiple operating systems on a single physical system, sharing underlying hardware resources. This improves utilization rates and reduces costs.
- There are two main approaches to virtualization - hosted architectures which run on a standard OS, and hypervisor ("bare metal") architectures which have direct hardware access for better performance.
- Virtualization provides benefits beyond partitioning like hardware independence, mobility of virtual machines between servers, and adaptive resource management. This transforms individual servers into a pooled computing resource.
IRJET- A Survey on Virtualization and Attacks on Virtual Machine Monitor (VMM) - IRJET Journal
This document discusses virtualization and attacks on virtual machine monitors (VMMs). It begins with an introduction to cloud computing and virtualization. Virtualization allows multiple operating systems to run concurrently on a single computer by abstracting physical resources. A VMM or hypervisor manages access to underlying physical resources for virtual machines. There are different types of virtualization including application, desktop, hardware, network, and storage virtualization. The document also discusses the two types of hypervisors - type 1 hypervisors install directly on hardware while type 2 hypervisors run on a host operating system. It concludes by noting that while virtualization improves efficiency, it can also introduce vulnerabilities that attackers may exploit.
This paper, written by David Reine, an IT analyst for The Clipper Group, highlights the new features, capabilities, and benefits of IBM's SAN Volume Controller, announced on October 20, 2009. Virtualization is at the center of all 21st-century IT systems, yet many CIOs fail to fully understand all of the benefits it can deliver to data center operations. When we think of virtualization, we think compute, network, and storage, and we mostly think about driving up utilization on each. Storage controllers have always offered the ability to carve out pieces of real storage from a large pool and deliver them efficiently to a number of hosts, but it is storage virtualization itself that offers improvements that drive operational efficiency. IBM has been quietly addressing storage virtualization with SAN Volume Controller (SVC) for the last six years, building up a significant technical lead in this space.
Strange but true: most infrastructure architectures are deliberately designed from the outset to need little or no change over their lifetimes. There are two main reasons for this:
1. Change often means outages and customer impact and must be avoided
2. Budgets are set at the beginning of a project and getting more cash later is tough
Typically, then, applications are configured with all of the storage capacity they need to support the wildest dreams of their business sponsors (and then some extra is added by IT for contingency). Equally, storage is always configured with the performance level (storage tier) set to cope with the wildest transactional dreams of the business sponsor (and, guess what, IT generally adds a bit more for good measure).
No wonder storage is now one of the largest cost components involved in delivering and running a business application.
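A quick back-of-the-envelope calculation shows why. The capacity figures below are made up for illustration:

```python
# Illustrative arithmetic for the over-provisioning pattern described above:
# capacity sized to the sponsor's peak forecast, plus an IT contingency margin,
# versus what the application actually consumes. All figures are invented.

forecast_tb = 40        # business sponsor's "wildest dreams" estimate
contingency = 0.25      # IT adds 25% on top for good measure
actual_use_tb = 12      # what the application really uses

provisioned = forecast_tb * (1 + contingency)
stranded = provisioned - actual_use_tb
print(f"provisioned {provisioned} TB, used {actual_use_tb} TB, "
      f"stranded {stranded} TB ({stranded / provisioned:.0%})")
# → provisioned 50.0 TB, used 12 TB, stranded 38.0 TB (76%)
```

Thin provisioning and storage virtualization attack exactly this stranded capacity.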
Virtualization allows multiple operating systems to run on a single physical machine by abstracting the physical hardware and presenting virtual hardware instead. This allows for greater flexibility, efficiency, and cost savings by consolidating servers. There are different types of virtualization including server, desktop, application, and presentation virtualization. Server virtualization allows multiple virtual machines to run isolated operating systems on a single physical server. Cloud computing takes virtualization further by providing on-demand access to computing resources over the internet.
Cloud Technology and Virtualization
"Project Deliverable 4: Cloud Technology and Virtualization"
Christopher Nevels
Dr. Darcel Ford
CIS 590
11-24-13
There are many reasons companies and organizations are investing in server virtualization. Some of the reasons are financially motivated, while others address technical concerns. Server virtualization conserves space through consolidation. It's common practice to dedicate each server to a single application. If several applications only use a small amount of processing power, the network administrator can consolidate several machines into one server running multiple virtual environments. For companies that have hundreds or thousands of servers, the need for physical space can decrease significantly. Server virtualization provides a way for companies to practice redundancy without purchasing additional hardware. Redundancy refers to running the same application on multiple servers. It's a safety measure -- if a server fails for any reason, another server running the same application can take its place. This minimizes any interruption in service. It wouldn't make sense to build two virtual servers performing the same application on the same physical server. If the physical server were to crash, both virtual servers would also fail. In most cases, network administrators will create redundant virtual servers on different physical machines. Virtual servers offer programmers isolated, independent systems in which they can test new applications or operating systems. Rather than buying a dedicated physical machine, the network administrator can create a virtual server on an existing machine. Because each virtual server is independent in relation to all the other servers, programmers can run software without worrying about affecting other applications (Strickland 2013).
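The anti-affinity rule described above, that redundant copies of an application should never share a physical host, can be sketched as follows. The function and host names are invented:

```python
# Sketch of anti-affinity placement: each redundant copy of an application
# is assigned to a distinct physical host, so one hardware failure cannot
# take down every copy. Names are hypothetical.

def place_redundant(app, copies, hosts):
    """Assign each copy of `app` to a different host; error if too few hosts."""
    if copies > len(hosts):
        raise ValueError("not enough physical hosts for redundancy")
    return {f"{app}-{i}": hosts[i] for i in range(copies)}

print(place_redundant("web", 2, ["phys1", "phys2", "phys3"]))
# → {'web-0': 'phys1', 'web-1': 'phys2'}
```

Real hypervisor managers express this as an anti-affinity rule the placement engine enforces automatically, but the constraint is the same.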
Cloud computing is ideal for small companies, as it’s cost-effective, saves time and energy, and it allows for a high level of customization. According to Forbes, a 2009 study found that cloud computing could save up to 67% of the lifecycle cost for server deployment on a large scale. Another study found that using cloud solutions generally results in higher investment returns (when compared to an on-site system). There are further cost saving benefits, such as less need for expensive hardware and software, and no need for physical networks or IT maintenance. Also, cloud systems are usually ‘pay-as-you-go’, so you only pay for what you use. There are no upfront investments, and IT requirements can be easily budgeted for. Also, various cloud services can either be added or scaled back, depending on where your business is, and how much growth is taking place. The cloud is also highly customizable: you can select what platform you want, which payroll software to use, and what email marketing tools you require – all from different vendors, and all individually configurable (K2 SEO 2013).
This document discusses using virtualization to optimize resource utilization and reduce costs. It introduces virtualization and describes how virtual machines allow multiple environments to run isolated on the same physical machine. Virtualization can reduce hardware costs by converting physical servers into logical resources that are allocated as needed. The document then presents an experimental study comparing the hardware costs of configuring different servers physically versus virtualizing them on a single system using Oracle VM VirtualBox. The results show virtualization significantly reduced costs by consolidating multiple servers onto one physical machine.
Embedded systems are increasingly integral parts of technology that perform dedicated functions with minimal user interaction. They are used in applications like GPS, ATMs, networking equipment, and more. Embedded systems combine dedicated hardware and software to provide specialized functionality. Their design must consider aspects like performance, cost, power consumption, and being integrated into other devices long-term. As embedded systems become connected to the internet, they will transform how people interact with devices and appliances. This will create an environment of ubiquitous connected devices that communicate for functions like remote monitoring and maintenance.
Short Economic Essay. Please answer MINIMUM 400 word I need this.docx - budabrooks46239
This document provides an introduction to cloud computing, discussing its key attributes of scalable, shared computing resources delivered over a network with pay-per-use pricing. It describes the different delivery models of cloud computing including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The document also discusses virtualization techniques that enable cloud computing and how cloud computing enables highly available and resilient systems through capabilities like workload migration and rapid disaster recovery.
This document discusses various types of virtualization technologies. It begins by describing characteristics of virtualized environments such as sharing, aggregation, emulation, and isolation. It then discusses different virtualization techniques including hardware-assisted virtualization, full virtualization, paravirtualization, operating system-level virtualization, programming language-level virtualization, and application-level virtualization. For each technique, it provides examples and discusses advantages and performance implications. It also includes diagrams illustrating the virtualization reference model and taxonomy of virtualization techniques.
This document discusses virtualization in embedded multicore systems. It begins by introducing virtualization and how it is being adopted in embedded systems to consolidate applications and functions onto single multicore chips for improved efficiency. It then discusses challenges with virtualization including partitioning resources fairly between applications. The document explores two approaches to virtualization - OS-hosted and bare-metal hypervisor. It states that the bare-metal hypervisor approach is best for embedded systems as it offers highest performance within tight power budgets. Finally, it examines options for implementing a bare-metal hypervisor, stating that hardware-assisted virtualization provides the best performance while minimizing the hypervisor footprint.
The document discusses VMware's strategy and solutions for virtualization. It highlights virtualization as the top strategic technology for 2009 according to Gartner. It outlines VMware's virtualization solutions like server consolidation, virtual desktop infrastructure, and disaster recovery. It also discusses VMware's strategy to evolve its virtualization platform into a "Virtual Datacenter Operating System" to provide services and automation across the entire datacenter.
The document discusses the benefits of virtual desktops including improved data security, simplified data backup, simplified disaster recovery, reduced time to deployment, simplified PC maintenance, and flexibility of access. It notes that virtual desktops can enable thinner clients, move computational requirements to the datacenter, and allow access from anywhere there is authorized connectivity.
Datacenter virtualization has several benefits from reduced costs to increased agility along with a well-managed IT infrastructure. Check out the list of advantages of virtualization here!
IBM Flex System offers a brand new platform for creating solutions to address emerging market applications, such as Cloud, Big Data, Analytics, and Smarter Planet. In this paper, we described how to create a custom private cloud configuration that uses Flex System. For more information on Pure Systems, visit http://ibm.co/18vDnp6.
Visit the official Scribd Channel of IBM India Smarter Computing at http://bit.ly/VwO86R to get access to more documents.
Hardware Support for Efficient Virtualization - John Fisher-Ogden, simisterchristen
Hardware Support for Efficient Virtualization
John Fisher-Ogden
University of California, San Diego
Abstract
Virtual machines have been used since the 1960s in creative ways. From multiplexing expensive mainframes to providing backwards compatibility for customers migrating to new hardware, virtualization has allowed users to maximize their usage of limited hardware resources. Despite virtual machines falling by the wayside in the 1980s with the rise of the minicomputer, we are now seeing a revival of virtualization, with virtual machines being used for security, isolation, and testing, among others.
With so many creative uses for virtualization, ensuring high performance for applications running in a virtual machine becomes critical. In this paper, we survey current research towards this end, focusing on the hardware support which enables efficient virtualization. Both Intel and AMD have incorporated explicit support for virtualization into their CPU designs. While this can simplify the design of a stand-alone virtual machine monitor (VMM), techniques such as paravirtualization and hosted VMMs are still quite effective in supporting virtual machines.
We compare and contrast current approaches to efficient virtualization, drawing parallels to techniques developed by IBM over thirty years ago. In addition to virtualizing the CPU, we also examine techniques focused on virtualizing I/O and the memory management unit (MMU). Where relevant, we identify shortcomings in current research and provide our own thoughts on the future direction of the virtualization field.
1 Introduction
The current virtualization renaissance has spurred exciting new research with virtual machines on both the software and the hardware side. Both Intel and AMD have incorporated explicit support for virtualization into their CPU designs. While this can simplify the design of a stand-alone virtual machine monitor (VMM), techniques such as paravirtualization and hosted VMMs are still quite effective in supporting virtual machines.
This revival in virtual machine usage is driven by many motivating factors. Untrusted applications can be safely sandboxed in a virtual machine, providing added security and reliability to a system. Data and performance isolation can be provided through virtualization as well. Security, reliability, and isolation are all critical components for data centers trying to maximize the usage of their hardware resources by coalescing multiple servers to run on a single physical server. Virtual machines can further increase reliability and robustness by supporting live migration from one server to another upon hardware failure.
Software developers can also take advantage of virtual machines in many ways. Writing code that is portable across multiple architectures requires extensive testing on each target platform. Rather than maintaining multiple physical machines for each platform, testing can be done within a virtual machi ...
Learn about Virtualization Performance on the IBM PureFlex System. The white paper shows that the IBM PureFlex System can deliver VM consolidation in a heterogeneous, self-contained environment capable of impressive levels of throughput performance. It can dramatically reduce time to production for virtualized data center application operations, providing multiple compute and operating system platforms, advanced storage, and integrated networking in a single manageable system.
Similar to Reducing Cost and Complexity with Industrial System Consolidation (20)
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you... - Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
20 Comprehensive Checklist of Designing and Developing a Website - Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Reducing Cost and Complexity with Industrial System Consolidation

WHITE PAPER
Multi-Core Virtualization Technology
Industrial Automation

Virtualization on multi-core Intel® Core™ vPro™ processors helps lower overall solution cost and reduce factory footprint and integration effort through hardware consolidation.

Virtualization simplifies the factory floor.
Summary
In today’s highly competitive manufacturing environment, success requires a constant
focus on cost cutting while maintaining production throughput and employee safety.
For manufacturers, this includes finding new ways to lower operating expenses, a
large part of which is the purchase and support of industrial systems. A significant
cost stems from the inefficiencies created by the growing numbers and varieties of
systems on the factory floor. For instance, system proliferation is consuming precious
space and straining IT resources, especially when systems have unique support
requirements for configuration, backups, spares and software patching.
Efficiency can be improved when multiple factory functions are consolidated onto
a single hardware platform, thus decreasing operating expense, factory footprint,
energy consumption, and integration and support effort. This can be done using
advanced multi-core processors along with proven virtualization technology, which
has been around since the 1960s1 and is most notably used in data centers where
many applications are consolidated onto a single server. Still, virtualization tools and
methods used in the server environment are different from what is appropriate for
the embedded environment.
This white paper describes how virtualization technology running on multi-core Intel®
Core™ vPro™ processors can be used in industrial automation to consolidate computing
devices for motion control, programmable logic control (PLC), human machine
interface (HMI), machine vision, data acquisition, functional safety and so forth. This
approach can help manufacturers reduce cost and complexity on the factory floor.
Virtualization Basics
In traditional industrial automation systems,
application software, the operating system (OS)
and the physical hardware are tightly coupled.
Virtualization breaks this link and provides the
ability to run multiple OSes and their associated
applications on the same physical board. This
is achieved by executing software in individual
partitions, called virtual machines (VMs), that are
managed by a new software layer, known as the
hypervisor or virtual machine monitor (VMM).
The hypervisor abstracts the board’s underlying
hardware resources (e.g., processor cores, memory
and I/O devices), so each VM runs as if it had its own
machine.
As a result, applications run on their native OSes
(referred to as “guest OSes” in virtualization
parlance), allowing them to easily migrate to a new
system – often with only minor or no changes. To illustrate
this capability, Figure 1 shows that four applications
running on their own OSes and boards can be consolidated
onto a single board with a multi-core processor and a
hypervisor. The hypervisor manages the execution of
guest OSes in much the same way an OS manages the
execution of the applications it hosts.
Virtualization in Industrial Automation
Some industrial control systems are designed with multiple
boards because they run applications like PLC, motion control,
and HMI with different sets of requirements. PLC and motion
control are time-critical applications, which are best served by
a real-time operating system (RTOS) that delivers deterministic
performance. In contrast, developers of HMI applications may
prefer a general-purpose operating system (GPOS) supported by
tools that ease the development of touch screen displays, rich
graphics and multimedia.
Figure 1. General Virtualization Example: four applications, each running on its own OS and single-processor board, are consolidated onto a single board with a multi-core processor, where a hypervisor hosts all four guest OSes.
Figure 2 shows how a single board with
virtualization technology can address all these
requirements, as well as others discussed
later. Multi-core processors with virtualization
technology allow systems to simultaneously
run RTOSes and GPOSes, each on dedicated
processor cores. This configuration can increase the determinism of time-critical
applications, because they operate unencumbered by non-real-time tasks that would
otherwise compete for CPU resources.
Figure 2. Industrial System Consolidation Example: on an Intel® Core™ vPro™ processor with Intel® Virtualization Technology, a hypervisor hosts three virtual machines: a soft PLC on a real-time operating system, data acquisition on a general-purpose operating system, and other applications (e.g., HMI) on a second general-purpose operating system.
Benefits from Consolidation
By consolidating devices using virtualization technology,
original equipment manufacturers (OEMs) developing industrial
automation solutions can provide substantial benefits to their
customers, such as:
• Lower overall solution cost: Although a consolidated device may cost more than any of the individual subsystems, it should cost less to manufacture than the combined subsystems because it has a smaller bill of materials (BOM). In addition, virtualization makes it easier for OEMs to add new functionality to a system and expand their offerings.

• Smaller factory footprint: Consolidated equipment takes up less factory floor space than the individual systems it replaces.

• Reduced overall energy consumption: The power efficiency of Intel Core vPro processors, combined with system consolidation, can yield a solution that consumes less power than the individual systems combined.

• Reduced integration cost: By consolidating subsystems, OEMs effectively eliminate many integration tasks for their customers. For instance, the networking, cabling, shielding and configuration that connect multiple subsystems together are handled within the system.

• Simpler to secure: The consolidated approach decreases the number of computing devices that require security software and may eliminate some varieties of security solutions the factory must support. In addition, there are fewer devices for hackers to attempt to infiltrate, thus reducing the attack surface of the factory floor.

• Easier system management: When subsystems are consolidated, factory IT personnel have a smaller number of devices to install, provision and manage. Also, a consolidated system is likely to have more capable hardware and software than the subsystems it replaces, allowing for additional manageability options and capabilities.

• Higher reliability: The greater the number of systems, the larger the number of devices that can fail. Consequently, a consolidated system should have a better mean time between failures (MTBF) than the combination of subsystems it replaces. Furthermore, there are fewer spares for factories to carry, and maintenance and repair procedures are simpler – all ultimately leading to shorter downtimes.
Consolidating Systems on Multi-Core Processors
Multi-core architectures, such as Intel Core vPro processors,
provide the computing power needed to consolidate industrial
systems and deliver real-time, deterministic performance. Multi-core processor architecture allows OEMs to dedicate hardware-level computing resources to specific VMs, thereby enabling an
RTOS to behave deterministically regardless of the applications
running in the other VMs. In addition, developers can more
easily reallocate system resources across cores as system needs
change.
One of the key benefits of consolidation is improved resource
efficiency, which is achieved through a multi-core architecture-based platform. An industrial solution that combines multiple
subsystems on one platform requires just one computing
system and power supply, which results in a smaller footprint,
higher density, lower power consumption and a simpler design
compared to multiple subsystems with their own hardware.
Hardware-Assisted Virtualization Technologies
Although virtualization is generally viewed as a software
technology, hardware features have been added to processors
to improve the performance and security of virtualization. For
instance, Intel has enhanced the capabilities of virtualization
technology with a complementary hardware-assist technology
called Intel® Virtualization Technology (Intel® VT),2 an ingredient
of Intel® vPro™ technology. It performs various virtualization
tasks in hardware, like memory address translation, which
reduces the overhead and footprint of virtualization software,
and improves its performance. For instance, VM to VM switching
time is significantly faster when memory address translation is
performed in hardware instead of by software.
In addition, Intel VT increases the robustness of virtualized
environments by using hardware to prevent the software
running in one VM from interfering with the software running
in another VM. Along these lines, virtualization helps avoid
unintended interactions between applications by preventing
one from accessing another’s memory space. Some of the key
benefits of virtualization in industrial automation and other
embedded applications are listed in Table 1.
With respect to performance, Intel has developed three
different, yet complementary, virtualization acceleration
technologies that span multiple platform components, including
the processor, chipset and NICs:
Intel® Virtualization Technology (Intel® VT) for IA-32, Intel®
64 and Intel® Architecture (Intel® VT-x) speeds up the transfer
of platform control between the guest OSes and the hypervisor.
In Intel® processors, it reduces virtualization overhead by
eliminating the need for the hypervisor to listen, trap and
execute certain instructions on behalf of each guest OS. When
hypervisor interventions are required, it provides hardware
support so handoffs between the hypervisor and guest OSes are
faster and more secure.
Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d) accelerates data movement by enabling the
hypervisor to directly and securely assign I/O devices to specific
guest OSes. Each device is given a dedicated area in system
memory so data can travel directly and without hypervisor
involvement. I/O traffic flows more quickly, with more processor
cycles available to run applications. Security and availability are
also improved, since I/O data intended for a specific device or
guest OS cannot be accessed by any other hardware or guest
software component.
Intel® Virtualization Technology (Intel® VT) for Connectivity
(Intel® VT-c) performs PCI-SIG* Single Root I/O Virtualization
(SR-IOV) functions that allow the partitioning of a single Intel®
Ethernet Server Adapter port into multiple virtual functions.
These virtual functions may be allocated to VMs, each with
their own bandwidth allocation. They offer a high-performance,
low-latency path for data packets to get into the VM. Intel VT-c,
integrated in Intel® Ethernet NICs, enables improved networking
throughput with lower CPU utilization and reduced system
latency.
Improving Virtualization Performance
It’s possible to ensure the real-time performance necessary for consolidated factory automation solutions using Intel VT and an RTOS when several main issues are addressed. Foremost, it’s necessary to minimize the interrupt latency and the overhead associated with general-purpose processors. A major source of performance loss is from VM enters and exits, which typically occur when the hypervisor must service an interrupt or handle a special event. These transitions are expensive operations because execution contexts must be saved and retrieved, and during this time the guest is stalled.

Table 1. Intel® Virtualization Technology Capabilities and Benefits
• Isolates applications in secure partitions: increases system reliability and stability; eases software migration and consolidation
• Runs RTOS on a dedicated processor core: decreases loop jitter; improves determinism
• Performs virtualization tasks in hardware: decreases hypervisor load on the processor; reduces VM to VM switching time
Figure 3. Interrupt Impact: an external interrupt arriving while the guest is running triggers a chain of transitions (VM exit, host enter, host running, host exit, VM enter) before the guest can resume its stalled process.
Figure 3 depicts the VM/Host enters and exits that could result
from an external interrupt. In this case, the guest OS runs until
an external interrupt arrives. Subsequently, there are a total
of eight exits and enters before the guest OS is allowed to
restart its stalled process. This overhead can become substantial
since it’s not uncommon for I/O-intensive applications to have
hundreds or thousands of interrupts arriving in a second. These
constant disruptions cannot be tolerated with time-critical
control applications because of the resulting degradation in
performance, latency and determinism.
Intel has worked together with operating system vendors to reduce the typical interrupt latency from between 300 and 700 µs to sub-20 µs,3,4 achieving near-native performance (i.e., similar to non-virtualized) in a virtualized environment. This is possible through the implementation of hardware and software mechanisms that minimize the interrupt overhead inherent in a virtualized environment, some of which are described in the following:
• Intel® Virtualization Technology FlexPriority: When a processor is performing a control task, it often receives interrupts from other devices or applications. To minimize the impact on performance, a special register in the processor, called the APIC Task Priority Register (TPR), monitors the priority of tasks to prevent the interruption of one task by another with lower priority. Intel Virtualization Technology FlexPriority (Figure 4) creates a virtual copy of the TPR that can be read, and in some cases changed, by guest OSes without hypervisor intervention. This eliminates most VM exits due to guests accessing task priority registers and thereby provides a major performance improvement.
Figure 4. Depiction of Intel® Virtualization Technology FlexPriority: without FlexPriority, each guest APIC-TPR access causes a VM exit and the VMM must fetch and decode the instruction and emulate APIC-TPR behavior in software, at thousands of cycles per exit; with FlexPriority, APIC-TPR access is emulated in hardware, the instruction executes directly, and no VM exits occur.
• Virtual Processor IDs (VPID): Previously, every time the hypervisor performed context switching between VMs, the active VM and its data structure had to be flushed out of the translation look-aside buffers (TLBs) associated with the CPU caches. As a result, there was performance loss on all VM exits because the hypervisor did not know which cache line was associated with any particular VM.

With Virtual Processor IDs (VPID), the virtual machine control structure (VMCS) contains a VM ID tag that associates cache lines with each actively running VM on the CPU. This permits the CPU to flush only the cache lines associated with a particular VM when it is flushed from the CPU, avoiding the need to reload cache lines for a VM that was not migrated and resulting in lower overhead.
• Guest Preemption Timer: Programmable by the hypervisor, this timer provides a mechanism that enables a hypervisor to preempt (i.e., halt) the execution of a guest OS by causing a VM exit when the timer expires. This feature makes it easier to switch tasks, fulfill real-time control requirements or allocate a certain amount of CPU cycles to a task.
• Descriptor Table Exiting: This feature enables a hypervisor to protect a guest OS from internal attack by preventing the relocation of key system data structures. This mechanism helps to better protect safety-critical applications.
• Pause-Loop Exiting: Spin-locking code typically uses PAUSE instructions in a loop. This feature detects when the duration of a loop is longer than “normal” (a sign of lock-holder preemption) and forces an exit into the hypervisor. After the hypervisor takes control, it can schedule another VM. Spin locks are often used in control applications for inter-process synchronization.
• Virtual Advanced Programmable Interrupt Controller (vAPIC): The hypervisor previously had to maintain a virtual APIC model in software for handling interrupts. This functionality is now implemented with microcode, called the vAPIC, which the guest can access without triggering a VM exit, as shown in Figure 5.
Figure 5. The vAPIC Implemented in Hardware: without the vAPIC, each guest APIC access exits to the VMM, which fetches and decodes the instruction and emulates APIC behavior in software at approximately 15,000 cycles per exit; with the vAPIC in CPU hardware microcode, the instruction executes directly and no VM exits occur.
Figure 6. Technologies for Improving Virtualized I/O: the left side shows an Intel® Ethernet Adapter with VMDq, where per-VM queues feed a virtual Ethernet bridge through the hypervisor; the right side shows an Intel Ethernet Adapter with SR-IOV support, where each VM has its own virtual adapter and queue connected to the adapter’s virtual Ethernet bridge.
• Single-Root I/O Virtualization (SR-IOV): A PCI Special Interest Group (PCI-SIG) specification, SR-IOV allows one NIC to service multiple VMs, as shown in Figure 6. The specification provides a standard mechanism for devices to advertise their ability to be simultaneously shared among multiple virtual machines. It also allows for the partitioning of a PCI function into many virtual interfaces for the purpose of sharing the resources of a PCI Express* device in a virtual environment. Each virtual function can support a unique and separate data path for I/O-related functions within the PCI Express hierarchy. Use of SR-IOV in factory automation, for example, allows the bandwidth of a NIC to be partitioned into smaller slices that may be allocated to specific virtual machines, or guests, via a standard interface. This resource sharing can increase the total utilization of any given resource presented on an SR-IOV-capable PCI Express device, potentially reducing the cost of a virtual system. For additional information, please visit http://www.intel.com/content/www/us/en/networkadapters/virtualization.html.

Deploying Intel® Virtualization Technology
Intel Virtualization Technology is enabled by a number of hardware and software components, including Intel VT-enabled Intel processors and chipsets, which are listed in Table 2. Intel VT also requires virtual machine monitor software and Intel VT-enabled BIOS software.

Table 2. Required Intel® Virtualization Technology Components
• Intel® Core™ vPro™ Processor: Intel® Virtualization Technology (Intel® VT)-enabled
• Intel® Chipset: Intel VT-enabled
• Virtual Machine Monitor Software: available from software vendors, such as Green Hills*, LynuxWorks*, TenAsys*, Real-Time Systems* and Wind River*
• BIOS: Intel VT-enabled, available from AMI*, Phoenix* and Insyde*