This whitepaper discusses existing software-based paging virtualization solutions and their associated performance overheads. It then introduces AMD-V™ Rapid Virtualization Indexing technology (referred to as nested paging in this publication), highlights its advantages, and demonstrates the performance uplift that may be seen with nested paging.
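To make the translation overhead concrete, the two page walks that nested paging performs in hardware can be sketched in a few lines. This is an illustrative toy model, not AMD's implementation: page tables are flattened to single-level dictionaries (real x86-64 tables are four-level), and all names here are hypothetical.

```python
# Toy model of nested paging: guest-virtual -> guest-physical -> system-physical.
# Page tables are simplified to dicts mapping page number -> frame number.

PAGE_SIZE = 4096

def translate(page_table, virt_addr):
    """One-level page walk: map a virtual address to a physical address."""
    vpn, offset = divmod(virt_addr, PAGE_SIZE)
    frame = page_table.get(vpn)
    if frame is None:
        raise KeyError(f"page fault on page {vpn}")
    return frame * PAGE_SIZE + offset

def nested_translate(guest_pt, nested_pt, guest_virt):
    """Compose the guest's walk with the hypervisor's nested walk.

    With nested paging the hardware performs both walks itself: every
    guest-physical address produced by the guest's tables is translated
    again through the nested tables, avoiding the VM exits that
    software shadow paging needs to keep its tables in sync.
    """
    guest_phys = translate(guest_pt, guest_virt)   # guest's own page walk
    return translate(nested_pt, guest_phys)        # hypervisor's nested walk

# Hypothetical mappings: guest page 5 -> guest frame 2 -> system frame 9.
guest_pt = {5: 2}
nested_pt = {2: 9}
addr = 5 * PAGE_SIZE + 123
print(nested_translate(guest_pt, nested_pt, addr))  # 9 * 4096 + 123 = 36987
```

The composition also shows why a TLB miss is more expensive under nested paging: each step of the guest walk is itself translated through the nested tables, which is the overhead the whitepaper's performance results quantify.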
Alternative Architecture Overview (James Price)
This white paper focuses on four prevalent alternative architectures: Virtual Desktop Infrastructure (VDI), Terminal Services, operating system (OS) streaming, and Blade Personal Computers (PCs). It presents a technical overview and potential use cases for each of these architectures, and explores AMD as the platform of choice for each by demonstrating both technical (performance) and non-technical (cost) advantages.
Citrix XenDesktop 5.5 vs. VMware View 5: User experience and bandwidth consum... (Principled Technologies)
The experience that virtual desktops provide for workers is critical. If a user’s desktop is sluggish, or worse, choppy and difficult to navigate, working becomes difficult. Choosing a virtual desktop solution that provides sluggish, choppy desktops to remote end-users in branch offices defeats the purpose of implementing such a solution in the first place.
In both the small and medium-sized branch office scenarios we tested, we found that Citrix XenDesktop 5.5 provided a better desktop experience for remote users than VMware View 5, and used as much as 37.1 percent less bandwidth delivering it. Using Citrix XenDesktop 5.5 with Citrix Branch Repeater provided an even better experience for remote users by optimizing bandwidth over the WAN and delivering local-like virtual desktop sessions in both our 10- and 100-user tests. When selecting a VDI solution to deploy virtual desktops over the WAN to users in remote offices, determining the type of user experience the solution provides is paramount.
Database performance: Dell PowerEdge R730xd vs. Lenovo ThinkServer RD650 (Principled Technologies)
Microsoft SQL Server 2014 users, take note. In our datacenter, we found that the Dell PowerEdge R730xd server based on the Intel Xeon processor E5-2600 v3 product family with the Intel SSD DC S3610 Series handled up to 27.9 percent more orders per minute than the Lenovo ThinkServer RD650 did. With three times the SSDs, the PowerEdge R730xd delivered better response times—up to 24.6 percent for application latency and up to 93.1 percent for disk latency—than the ThinkServer RD650. Getting more performance per server and better response times means you can give customers a better, faster ecommerce experience. This can allow you to buy, store, and power fewer servers, helping stretch your IT budget further.
Administrators can spend a great deal of time deploying and managing computing resources, especially in ROBO environments. The Dell PowerEdge VRTX, powered by the Intel Xeon processor E5-2400 v2 product family and running Microsoft Windows Server 2012 R2, gives administrators centralized management tools and integrated toolsets that can save significant time.
In our hands-on testing, we found that the Dell PowerEdge VRTX greatly simplified deployment through an easy, wizard-based setup of Microsoft Windows Server Failover Clusters across server nodes with the Dell OpenManage Cluster Configurator. It also provided versatile hardware resource reassignment through a shared PCIe bus and efficient centralized management through CMC and scripting. Finally, we found that the Dell System Update Utility worked seamlessly with Microsoft Cluster-Aware Updating to update server nodes while keeping the failover cluster online and minimizing downtime. These advantages make the Dell PowerEdge VRTX an attractive choice for those who seek to reduce the management overhead of their ROBO environments.
White Paper: New Features in EMC Enginuity 5876 for Mainframe Environments (EMC)
This white paper discusses the new features and functionality introduced with the Enginuity 5876 code release for EMC Symmetrix VMAX subsystems in the IBM System z environment.
Red Hat Enterprise Virtualization is a powerful platform, and having solid data on the requirements of guests is extremely useful for planning a virtual desktop infrastructure. Using this data, you can properly plan your server configuration with confidence, knowing that the hardware and networking resources you select for your data center can support the number and type of users you require.
Tasks
■ Develop and set up a proof of concept (PoC) to simulate a standard user operations load on Windows 7 (x64) based VMs.
■ Focus on the technologies "VMware View with Linked Clones" and "Citrix XenDesktop Machine Creation Services".
■ Use state-of-the-art local storage based on enterprise SSD technology.
■ Determine VM density on the PRIMERGY S7 generation based on the Intel Romley-EP server architecture.
The new Dell PowerEdge R720 comes with more than just the power to handle your heavy mixed workloads: it offers many storage options to deliver the level of performance you need. In our tests, we found that a configuration of all HDDs could support a total of 1,164 users accessing database, mail, and collaboration applications. The Dell PowerEdge R720 solution with CacheCade enabled increased the supported number of users to 2,929, an increase of 151.6 percent. Finally, the Hybrid solution increased the number of users to 7,574, an increase of 550.7 percent over the HDD solution, providing you with numerous options and the scalability to get the performance you need.
Nimboxx HCI AU-110x: A scalable, easy-to-use solution for hyperconverged infr... (Principled Technologies)
Hyperconvergence is a fresh way of looking at your data center. For small- and medium-sized businesses especially, it could be well worth your time to invest in a hyperconverged infrastructure. The MeshOS-operated Nimboxx HCI AU-110x offered scalability and great performance in our hands-on tests and was simple and straightforward to use, which could help your business meet user demands and potentially save money by avoiding things such as hiring expensive IT staff to maintain your data center.
IBM SmartCloud Entry+ for System X (delivered as IBM Starter Kit for Cloud x86 Edition) is an integrated cloud management platform that is designed to be quickly installed and operational. The IBM SmartCloud Entry application is implemented as a lightweight Web-based application that runs as an Open Services Gateway initiative (OSGi) application.
VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ... (Principled Technologies)
Replacing your legacy VDI servers with a new Intel Xeon processor E5-2670 v3-powered Dell PowerEdge FX2 solution using VMware Virtual SAN can be a great boon for your enterprise.
In the Principled Technologies (PT) labs, this space-efficient, affordable solution outperformed a five-year-old legacy server and traditional SAN by supporting twice as many VDI users. Additionally, it achieved greater performance while using 91 percent less space, at a hardware cost of only $167.89 per user.
With its support for more users, its space savings, and its affordability, an upgrade to the Intel-powered Dell PowerEdge FX2 solution using VMware Virtual SAN can be a wise move when replacing your aging infrastructure.
As data centers reach the upper limits of their power and cooling capacity, efficiency has become the focus of extending the life of existing data centers and designing new ones. As part of these efforts, IT needs to refresh existing infrastructure with servers that deliver more performance and scalability, more efficiently. The Intel® Xeon® processor 5500 series provides a foundation for IT management to refresh existing data centers or design new ones that achieve greater performance while using less energy and space, dramatically reducing operating costs.
Hardware Support for Efficient Virtualization
John Fisher-Ogden
University of California, San Diego
Abstract
Virtual machines have been used since the 1960s in creative ways. From multiplexing expensive mainframes to providing backwards compatibility for customers migrating to new hardware, virtualization has allowed users to maximize their usage of limited hardware resources. Despite virtual machines falling by the wayside in the 1980s with the rise of the minicomputer, we are now seeing a revival of virtualization, with virtual machines being used for security, isolation, and testing, among other purposes.
With so many creative uses for virtualization, ensuring high performance for applications running in a virtual machine becomes critical. In this paper, we survey current research towards this end, focusing on the hardware support which enables efficient virtualization. Both Intel and AMD have incorporated explicit support for virtualization into their CPU designs. While this can simplify the design of a stand-alone virtual machine monitor (VMM), techniques such as paravirtualization and hosted VMMs are still quite effective in supporting virtual machines.
We compare and contrast current approaches to efficient virtualization, drawing parallels to techniques developed by IBM over thirty years ago. In addition to virtualizing the CPU, we also examine techniques focused on virtualizing I/O and the memory management unit (MMU). Where relevant, we identify shortcomings in current research and provide our own thoughts on the future direction of the virtualization field.
1 Introduction
The current virtualization renaissance has spurred exciting new research with virtual machines on both the software and the hardware side. Both Intel and AMD have incorporated explicit support for virtualization into their CPU designs. While this can simplify the design of a stand-alone virtual machine monitor (VMM), techniques such as paravirtualization and hosted VMMs are still quite effective in supporting virtual machines.
This revival in virtual machine usage is driven by many motivating factors. Untrusted applications can be safely sandboxed in a virtual machine, providing added security and reliability to a system. Data and performance isolation can be provided through virtualization as well. Security, reliability, and isolation are all critical components for data centers trying to maximize the usage of their hardware resources by coalescing multiple servers to run on a single physical server. Virtual machines can further increase reliability and robustness by supporting live migration from one server to another upon hardware failure.
Software developers can also take advantage of virtual machines in many ways. Writing code that is portable across multiple architectures requires extensive testing on each target platform. Rather than maintaining multiple physical machines for each platform, testing can be done within a virtual machine ...
This paper addresses the use of a redundant array of inexpensive/independent disks (RAID) with diskless clients, where a centralized diskless NFS server shares the OS image with diskless clients over TCP/IP on a local area network. Diskless-client technology is practical and useful where cost-efficient, low-end clients must be usable without a local disk. A client consisting of a basic computer system (keyboard, mouse, CPU and motherboard, and monitor), whether or not it has a disk, can still gain the advantages of a RAID system through the diskless-client server, so that any disk fault can be tolerated online. Any disk present in a diskless client can still be used for other specific purposes.
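The fault tolerance described above can be sketched with a minimal RAID-1 (mirroring) model, one common way a central server tolerates an online disk fault. This is a toy sketch under that assumption, not the paper's implementation; disks are modeled as dicts mapping block number to bytes, with a failed disk represented as None.

```python
# Toy RAID-1 model: every write goes to all healthy mirrors, and a read
# succeeds as long as at least one mirror still holds the block.

def raid1_write(mirrors, block, data):
    """Write the block to every healthy mirror."""
    for disk in mirrors:
        if disk is not None:
            disk[block] = data

def raid1_read(mirrors, block):
    """Return the block from the first healthy mirror that has it."""
    for disk in mirrors:
        if disk is not None and block in disk:
            return disk[block]
    raise IOError(f"block {block} unreadable: all mirrors failed")

disk_a, disk_b = {}, {}
raid1_write([disk_a, disk_b], 0, b"OS boot image")

# Simulate an online disk fault: clients keep being served from the mirror.
disk_a = None
print(raid1_read([disk_a, disk_b], 0))  # b'OS boot image'
```

Because the redundancy lives entirely on the server side, the diskless clients never see the fault: their NFS reads are satisfied from whichever mirror remains healthy.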
The Next Generation of Intel: The Dawn of Nehalem (James Price)
Intel sets a new standard for business servers, high-performance computing (HPC) systems, and workstations: the new generation of Intel® microarchitecture, codenamed Nehalem, offers an unprecedented opportunity to dramatically advance the efficiency of IT infrastructure and provide unmatched business capabilities.
IDC Tech Spotlight: From Silicon To Cloud (James Price)
The topic of cloud computing has received a tremendous amount of attention in the last year. This whitepaper discusses some of the aspects of the delivery of IT Infrastructure as a service.
Virtualizing the Next Generation of Server Workloads with AMD™ (James Price)
In this white paper, we discuss the potential bottlenecks that organizations typically encounter: high memory utilization, high processor utilization, and high I/O traffic. We look at the performance characteristics of server workloads that can be successfully virtualized, and we discuss how an awareness of the performance characteristics of a particular workload can help inform an intelligent virtualization strategy. We also examine the improvements in virtualization hardware that are making it possible to virtualize an increasingly wide range of workloads.
AMD Putting Server Virtualization to Work (James Price)
Enterprises have been using virtualization technology on mainframes and RISC-based systems for years to enable better utilization of hardware resources. As x86 servers have become a mainstay in the enterprise, more companies are exploring virtualization with these servers to enable more productive, flexible, and scalable datacenters while reducing costs and boosting data availability. Computing technologies from AMD are providing the foundation for today's and tomorrow's enterprise virtualization solutions.
WHITE PAPER: AMD-V™ Nested Paging
Executive Summary
Operating systems use processor paging to isolate the address spaces
of their processes and to use physical memory efficiently. Paging is the
process of converting a process-specific linear address to a system
physical address. When a processor is in paged mode, which is the
case for modern operating systems, paging is involved in every data
and instruction access. x86 processors utilize various hardware
facilities to reduce overheads associated with paging. However, under
virtualization, where the guest’s view of physical memory is different
from system’s view of physical memory, a second level of address
translation is required to convert guest physical addresses to machine
addresses. To enable this additional level of translation, the hypervisor
must virtualize processor paging. Current software-based paging
virtualization techniques such as shadow-paging incur significant
overheads, which result in reduced virtualized performance, increased
CPU utilization and increased memory consumption.
Continuing the leadership in virtualization architecture and
performance, AMD64 Quad-Core processors are the first x86
processors to introduce hardware support for a second or nested level
of address translation. This feature is a component of AMD
Virtualization technology (AMD-V™) and referred to as Rapid
Virtualization Indexing (RVI) or nested paging. Under nested paging,
the processor utilizes nested page tables, which are set up by the
hypervisor, to perform a second level address translation. Nested
Paging reduces the overheads found in equivalent shadow paging
implementations.
This whitepaper discusses the existing software-based paging
virtualization solutions and their associated performance overheads. It
then introduces AMD-V™ Rapid Virtualization Indexing technology
(referred to as nested paging in this publication), highlights its
advantages, and demonstrates the performance uplift that may be seen
with nested paging.
Advanced Micro Devices, Inc.
Table of Contents:
1. Introduction
2. System Virtualization Basics
3. x86 Address Translation Basics
4. Virtualizing x86 paging
4.1 Software techniques for virtualizing address translation
4.2 AMD-V™ Nested Page Tables (NPT)
4.2.1 Details
4.2.2 Cost of a Page Walk under Nested Paging
4.2.3 Using Nested Paging
4.2.4 Memory savings with NPT
4.2.5 Impact of Page Size on Nested Paging Performance
4.2.6 Micro-architecture Support for Improving Nested Paging Performance
4.2.7 Address Space IDs
4.2.8 Nested Paging Benchmarks
5. Conclusion
1. Introduction
System virtualization is the abstraction and pooling of resources on a
platform. This abstraction decouples software and hardware and
enables multiple operating system images to run concurrently on a
single physical platform without interfering with each other.
Virtualization can increase utilization of computing resources by
consolidating workloads running on many physical machines into
virtual machines running on a single physical machine. This
consolidation can dramatically reduce power consumption and floor
space requirements in the data center. Virtual machines can be
provisioned on-demand, replicated and migrated using a centralized
management interface.
Beginning with 64-bit AMD Opteron™ Rev-F processors, AMD has
provided processor extensions to facilitate development of more
efficient, secure and robust software for system virtualization. These
extensions, collectively called AMD Virtualization™ or AMD-V™
technology, remove the overheads associated with software-only
virtualization solutions and attempt to reduce the performance gap
between virtualized and non-virtualized systems.
2. System Virtualization Basics
To allow multiple operating systems to run on the same physical
platform, a platform layer implemented in software decouples the
operating system from the underlying hardware. This layer is called
the hypervisor. In the context of system virtualization, the operating
system being virtualized is referred to as the guest.
To properly virtualize and isolate a guest, the hypervisor must control
or mediate all privileged operations performed by the guest. The
hypervisor can accomplish this using various techniques. The first
technique is called para-virtualization, where the guest source code is
modified to cooperate with the hypervisor when performing privileged
operations. The second technique is called binary translation, where at
run time the hypervisor transparently replaces privileged operations in
the guest with operations that allow the hypervisor to control and
emulate those operations. The third method is hardware-assisted
virtualization, where the hypervisor uses processor extensions such as
AMD-V to intercept and emulate privileged operations in the guest. In
certain cases AMD-V technology allows the hypervisor to specify how
the processor should handle privileged operations in the guest itself
without transferring control to the hypervisor.
A hypervisor using binary translation or hardware assisted
virtualization must provide the illusion to the guest that the guest is
running on physical hardware. For example, when the guest uses
processor’s paging support for address translation, the hypervisor
must ensure that the guest observes the equivalent behavior it would
observe on non-virtualized hardware.
3. x86 Address Translation Basics
A virtual address is the address a program uses to access data and
instructions. A virtual address is composed of segment and offset fields.
The segment information is used to determine protection information
and the starting address of the segment. Segment translation cannot be
disabled, but operating systems generally use flat segmentation where
all segments are mapped to the entire physical address space. Under
flat segmentation the virtual address effectively becomes the linear
address. In this whitepaper we will use linear and virtual address
interchangeably.
If paging is enabled, the linear address is translated to a physical
address using processor paging hardware. To use paging, the
operating system creates and manages a set of page tables (See
figure 1). The page table walker or simply the page walker,
implemented in processor hardware, performs address translation
using these page tables and various bit fields in the linear address.
Figure 2 shows the high level algorithm used by the page walker for
address translation.
Figure 1: 4KB page tables in long mode
Figure 2: Linear/Virtual to physical address translation algorithm
During the page walk, the page walker encounters physical addresses
in CR3 register and in page table entries which point to the next level
of the walk. The page walk ends when the data or leaf page is
reached.
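The walk described above can be sketched as a small model, assuming 4-level long-mode tables with 9-bit indices per level and a 12-bit page offset; the table layout and names here are illustrative, not a hardware specification:

```python
# Minimal model of a 4-level x86-64 page walk (long mode, 4KB pages).
# Each table maps a 9-bit index to the physical address of the next
# table (or, at the leaf, of the data page). Dicts stand in for the
# in-memory page tables the OS builds.

PAGE_SHIFT = 12          # 4KB pages
LEVELS = 4               # PML4, PDP, PD, PT
INDEX_BITS = 9           # 512 entries per table

def walk(cr3, tables, vaddr):
    """Translate a virtual address to a physical address, or raise."""
    table_pa = cr3
    for level in reversed(range(LEVELS)):          # PML4 down to PT
        shift = PAGE_SHIFT + level * INDEX_BITS
        index = (vaddr >> shift) & ((1 << INDEX_BITS) - 1)
        entry = tables[table_pa].get(index)
        if entry is None:
            raise KeyError("page fault")           # not-present entry
        table_pa = entry                           # next table (or leaf page)
    return table_pa | (vaddr & ((1 << PAGE_SHIFT) - 1))

# Build a one-mapping page-table tree: vaddr 0x1000 maps to the page at 0x5000.
tables = {0x100: {0: 0x200}, 0x200: {0: 0x300},
          0x300: {0: 0x400}, 0x400: {1: 0x5000}}
assert walk(0x100, tables, 0x1234) == 0x5234
```

Note that the walker touches one table per level; this per-level memory access is exactly the cost the TLB, discussed next, exists to avoid.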
Address translation is a very memory intensive operation because the
page walker must access the memory hierarchy many times. To
reduce this overhead, AMD processors automatically store recent
translations in an internal translation look-aside buffer (TLB). At every
memory reference, the processor first checks the TLB to determine if
the required translation is already cached; if it is cached, the processor
uses that translation; otherwise the page tables are walked, the resulting
translation is saved in the TLB and the instruction is executed.
Operating systems are required to cooperate with the processor to
keep the TLB consistent with page tables in memory. For example,
when removing an address translation, the operating system must
request the processor to invalidate the TLB entry associated with that
translation. On SMP systems where an operating system may share
page tables between processes running on multiple processors, the
operating system must ensure that TLB entries on all processors
remain consistent. When removing a shared translation, operating
system software should invalidate the corresponding TLB entry on all
the processors.
The operating system updates the page table’s base pointer (CR3)
during a context switch. A CR3 change establishes a new set of
translations and therefore the processor automatically invalidates TLB
entries associated with the previous context.
The processor sets accessed and dirty bits in the page tables during
memory accesses. The ‘accessed’ bit is set in all levels of the page
table when the processor uses that translation to read or write
memory. The ‘dirty’ bit is set in the PTE when the processor writes to
the memory page mapped by that PTE.
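The effect of these updates can be modeled as plain bit operations. The bit positions (Accessed at bit 5, Dirty at bit 6) are the real x86 page-table-entry flags; the helper name and sample entry value are illustrative:

```python
# The processor's updates to the Accessed (bit 5) and Dirty (bit 6)
# flags of an x86 page-table entry, modeled as bit operations.
PTE_ACCESSED = 1 << 5
PTE_DIRTY    = 1 << 6

def touch(pte, is_write):
    """Return the entry as the page walker would leave it after an access."""
    pte |= PTE_ACCESSED                 # set on any read or write
    if is_write:
        pte |= PTE_DIRTY                # set only when the page is written
    return pte

pte = 0x5000 | 0x1                      # frame 0x5000, present bit set
assert touch(pte, is_write=False) == (pte | PTE_ACCESSED)
assert touch(pte, is_write=True)  == (pte | PTE_ACCESSED | PTE_DIRTY)
```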
4. Virtualizing x86 paging
To provide protection and isolation between guests and hypervisor, the
hypervisor must control address translation on the processor by
essentially enforcing another level of address translation when guests
are active. This additional level of translation maps the guest’s view of
physical memory to the system’s view of physical memory.
With para-virtualized guests, the hypervisor and the guest can utilize
para-virtual interfaces to reduce hypervisor complexity and overhead
in virtualizing x86 paging. However for unmodified guests, the
hypervisor must completely virtualize x86 address translation. This
could incur significant overheads which we discuss in following
sections.
4.1 Software techniques for virtualizing address
translation
Software-based techniques maintain a shadow version of the page table
derived from the guest page table (gPT). When the guest is active, the
hypervisor forces the processor to use the shadow page table (sPT) to
perform address translation. The sPT is not visible to the guest.
To maintain a valid sPT the hypervisor must keep track of the state of
the gPT. This includes modifications by the guest to add or remove
translations in the gPT, guest- versus hypervisor-induced page faults
(defined below), accessed and dirty bits in the sPT, and, for SMP
guests, consistency of address translations across processors.
Software can use various techniques to keep the sPT and gPT
consistent. One of the techniques is write-protecting the gPT. In this
technique the hypervisor write-protects all physical pages that
constitute the gPT. Any modification by the guest to add a translation
results in a page fault exception. On a page fault exception, the
processor control is transferred to the hypervisor so it can emulate the
operation appropriately. Similarly, the hypervisor gets control when
the guest edits gPT to remove a translation; the hypervisor removes
the translation from the gPT and updates the sPT accordingly.
A different shadow paging technique does not write-protect the gPT but
instead depends on the processor's page-fault behavior and on the guest
adhering to TLB consistency rules. In this technique, sometimes
referred to as Virtual TLB, the hypervisor lets the guest add new
translations to gPT without intercepting those operations. Then later
when the guest accesses an instruction or data which results in the
processor referencing memory using that translation, the processor
page faults because that translation is not yet present in the sPT. The
page fault allows the hypervisor to intervene; it inspects the gPT to
add the missing translation in the sPT and executes the faulting
instruction. Similarly when the guest removes a translation, it
executes INVLPG to invalidate that translation in the TLB. The
hypervisor intercepts this operation; it then removes the
corresponding translation in sPT and executes INVLPG for the removed
translation.
Both techniques result in a large number of page fault exceptions.
Many page faults are caused by normal guest behavior, such as those
resulting from accessing pages that have been paged out to the storage
hierarchy by the guest operating system. We call such faults
guest-induced page faults; they must be intercepted by the hypervisor,
analyzed, and then reflected into the guest, which is a significant
overhead when compared to native paging. Page faults due to shadow
paging are called hypervisor-induced page faults. To distinguish
between these two faults, the hypervisor traverses the guest and
shadow page tables, which incurs significant software overheads.
When a guest is active, the page walker sets the accessed and dirty
bits in the sPT. But because the guest may depend on proper setting
of these bits in the gPT, the hypervisor must reflect them back into the gPT.
For example, the guest may use these bits to determine which pages
can be moved to the hard disk to make room for new pages.
When the guest attempts to schedule a new process on the processor,
it updates processor’s CR3 register to establish the gPT corresponding
to the new process. The hypervisor must intercept this operation,
invalidate TLB entries associated with the previous CR3 value and set
the real CR3 value based on the corresponding sPT for the new
process. Frequent context switches within the guest could result in
significant hypervisor overheads.
Shadow paging can incur significant additional memory and
performance overheads with SMP guests. In an SMP guest, the same
gPT instance can be used for address translation on more than one
processor. In such a case the hypervisor must either maintain sPT
instances that can be used on each processor or share the sPT between
multiple virtual processors. The former results in high memory
overheads; the latter could result in high synchronization overheads.
It is estimated that for certain workloads shadow paging can account
for up to 75% of overall hypervisor overhead.
Figure 3: Guest and shadow page tables (showing two-level paging)
4.2 AMD-V™ Nested Page Tables (NPT)
To avoid the software overheads under shadow paging, AMD64 Quad-
Core processors add Nested Paging to the hardware page walker.
Nested paging uses an additional or nested page table (NPT) to
translate guest physical addresses to system physical addresses and
leaves the guest in complete control over its page tables. Unlike
shadow paging, once the nested pages are populated, the hypervisor
does not need to intercept and emulate the guest's modifications of the gPT.
Nested paging removes the overheads associated with shadow paging.
However because nested paging introduces an additional level of
translation, the TLB miss cost could be larger.
4.2.1 Details
Under nested paging both the guest and the hypervisor have their own
copy of the processor state affecting paging, such as CR0, CR3, CR4,
EFER and PAT.
The gPT maps guest linear addresses to guest physical addresses.
Nested page tables (nPT) map guest physical addresses to system
physical addresses.
Guest and nested page tables are set up by the guest and hypervisor
respectively. When a guest attempts to reference memory using a
linear address and nested paging is enabled, the page walker performs
a 2-dimensional walk using the gPT and nPT to translate the guest
linear address to a system physical address. See Figure 4.
When the page walk is completed, a TLB entry containing the
translation from guest linear address to system physical address is
cached in the TLB and used on subsequent accesses to that linear
address.
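The two-dimensional character of this walk can be sketched with single-level tables (real walks have four levels on each dimension); the table contents and addresses below are illustrative:

```python
# Sketch of a two-dimensional walk: the guest page table (gPT) maps
# guest-virtual to guest-physical, and every guest-physical address it
# produces -- including gCR3 and each gPT entry -- must itself be
# translated through the nested table (nPT) before it can be used.

def nested_walk(gcr3, gpt, npt, gva):
    npt_lookups = 0
    def g2s(gpa):                       # guest-physical -> system-physical
        nonlocal npt_lookups
        npt_lookups += 1
        return npt[gpa]
    table_spa = g2s(gcr3)               # gCR3 holds a guest-physical address
    gpa = gpt[table_spa][gva]           # read the gPT entry at its system address
    spa = g2s(gpa)                      # final data translation through the nPT
    return spa, npt_lookups

gpt = {0x9000: {0x1000: 0x2000}}        # gVA 0x1000 -> gPA 0x2000
npt = {0x100: 0x9000, 0x2000: 0x7000}   # gPA -> sPA
spa, lookups = nested_walk(0x100, gpt, npt, 0x1000)
assert spa == 0x7000 and lookups == 2   # every guest-physical address went
                                        # through the nested table
```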
AMD processors supporting nested paging use the same TLB facilities
to map from linear to system physical addresses, whether the
processor is in guest or in host (or hypervisor) mode. When the
processor is in guest mode, the TLB maps guest linear addresses to
system physical addresses. When the processor is in host mode, the TLB
maps host linear addresses to system physical addresses.
In addition, AMD processors supporting nested paging maintain a
Nested TLB which caches guest physical to system physical
translations to accelerate nested page table walks. The Nested TLB
exploits the high locality of guest page table structures and has a high
hit rate.
Figure 4: Translating guest linear address to system physical address
using nested page tables
4.2.2 Cost of a Page Walk under Nested Paging
A TLB miss under nested paging could have a higher cost than a TLB
miss under non-nested paging. This is because under nested paging,
the page walker must not only walk the gPT but also simultaneously
walk the nPT to translate the guest physical addresses encountered during
guest page table walk (such as gCR3 and gPT entries) to system
physical addresses.
For example, a 4-level guest page table walk could invoke the nested
page walker 5 times, once for each guest physical address
encountered and once for the final translation of the guest physical
address of the datum
itself. Each nested page walk can require up to 4 cacheable memory
accesses to determine the guest physical to system physical mapping
and one more memory access to read the entry itself. In such a case
a TLB miss cost can increase from 4 memory references in non-nested
paging to 24 in nested paging unless caching is done. Figure 5 shows
the steps taken by the page walker with 4 levels in both guest and
nested page tables.
Figure 5: Address translation with nested paging. GPA is guest physical
address; SPA is system physical address; nL is nested level; gL is
guest level
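The worst-case arithmetic above can be written out directly; the function name is illustrative, but the counts follow the text: a g-level guest walk invokes the nested walker g+1 times, each invocation costs up to n table references, and each guest level adds one reference to read the guest entry itself.

```python
# Worst-case memory references on a TLB miss, absent any caching.

def tlb_miss_refs(guest_levels, nested_levels):
    nested_invocations = guest_levels + 1   # one per gPA, plus the final datum
    return nested_invocations * nested_levels + guest_levels

assert tlb_miss_refs(4, 0) == 4     # non-nested: just the 4-level guest walk
assert tlb_miss_refs(4, 4) == 24    # nested: the 24 references cited above
```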
4.2.3 Using Nested Paging
Nested paging is a feature intended for the hypervisor's use. The guest
cannot observe any difference (except performance) while running
under a hypervisor using nested paging. Nested paging does not
require any changes in guest software.
Nested paging is an optional feature and is not available in all
implementations of processors supporting AMD-V technology. Software
can use the CPUID instruction to determine if nested paging is
supported on that processor implementation.
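As one concrete decoding step (per AMD's documentation, the SVM feature leaf is CPUID Fn8000_000A, and EDX bit 0 reports nested-paging support), a helper can extract that bit from an EDX value obtained by other means; the helper name is illustrative:

```python
# Decoding the nested-paging capability bit from CPUID Fn8000_000A EDX.

NP_BIT = 0   # EDX[0]: nested paging supported

def has_nested_paging(edx):
    return bool((edx >> NP_BIT) & 1)

assert has_nested_paging(0x1) is True
assert has_nested_paging(0x0) is False
assert has_nested_paging(0xFFFE) is False   # every feature bit except NP
```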
4.2.4 Memory savings with NPT
Unlike shadow-paging, which requires the hypervisor to maintain an
sPT instance for each gPT, a hypervisor using nested paging can set up
a single instance of nPT to map the entire guest physical address
space. Since guest memory is compact, the nPT should typically
consume considerably less memory than an equivalent shadow-paging
implementation.
With nested paging, the hypervisor can maintain a single instance of
the nPT which can be used simultaneously on more than one processor in
an SMP guest. This is much more efficient than shadow paging
implementations, where the hypervisor either incurs a memory overhead
to maintain a per-virtual-processor sPT or incurs synchronization
overheads resulting from use of a shared sPT.
4.2.5 Impact of Page Size on Nested Paging Performance
Among other factors, address translation performance is reduced when
TLB miss cost increases. Other things being equal, TLB miss cost
(under nested and non-nested paging) decreases if fewer page levels
need to be walked. Large page sizes reduce the number of page levels
needed for address translation.
To improve nested paging performance, a hypervisor can choose to
populate the nPT with large page sizes. Nested page sizes can be
different for each page in each guest and can be changed during guest
execution. AMD64 Quad-Core processors support 4KB, 2MB, and 1GB
page sizes.
Like large nested pages, large guest pages also reduce TLB miss
cost. Many workloads, such as database workloads, typically use
large pages and should perform well under nested paging.
An indirect benefit of large pages is TLB efficiency. With larger pages,
each TLB entry covers a larger range of linear to physical address
translation; effectively increasing TLB capacity and reducing page walk
frequency.
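This "TLB reach" effect is simple arithmetic: the span of address space the TLB can cover without a page walk grows linearly with page size. Using the 48-entry L1 data TLB capacity quoted below as an example:

```python
# TLB reach = number of entries * page size covered by each entry.

def tlb_reach(entries, page_size):
    return entries * page_size

KB, MB, GB = 2**10, 2**20, 2**30
assert tlb_reach(48, 4 * KB) == 192 * KB   # 4KB pages: 192KB of reach
assert tlb_reach(48, 2 * MB) == 96 * MB    # 2MB pages: 96MB
assert tlb_reach(48, 1 * GB) == 48 * GB    # 1GB pages: 48GB
```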
4.2.6 Micro-architecture Support for Improving Nested
Paging Performance
To reduce page walk overheads, AMD64 processors maintain a fast
internal Page Walk Cache (PWC) for memory referenced by frequently
used page table entries. The PWC entries are tagged with physical
addresses and prevent a page entry reference from accessing the
memory hierarchy.
With nested paging, the PWC becomes even more important. AMD
processors supporting nested paging can cache guest
page table as well as nested page table entries. This would convert
the unconditional memory hierarchy access for data referenced by
both guest and nested page table entries to likely PWC hits. Reuse of
page table entries is highest at the topmost page level and lowest at
the leaf level; the PWC takes these characteristics into consideration
when caching entries.
As we discussed previously, the TLB plays a major role in reducing
address translation overheads. When TLB capacity increases, the
number of costly page walks needed decreases. The AMD “Barcelona”
family of processors is designed to cache up to 48 TLB entries in
its L1 data TLB for any page size: 4KB, 2MB or 1GB. It can also
cache 512 4KB entries or 128 2MB entries in its L2 TLB.
A TLB with large capacity improves performance under nested as well
as shadow paging.
Similar to the regular TLB, which caches linear to system physical
address translations, AMD processors supporting nested paging
support a Nested TLB (NTLB) to cache guest physical to system
physical translations. The goal of the NTLB is to reduce the average
number of page entry references during a nested walk.
The TLB, PWC and NTLB work together to improve nested paging
performance without requiring any software changes in guest or the
hypervisor.
4.2.7 Address Space IDs
Starting with 64-bit AMD Opteron™ Rev-F, processors support Address
Space IDs (ASIDs) to dynamically partition the TLB. The hypervisor assigns a
unique ASID value to each guest scheduled to run on the processor.
During a TLB lookup, the ASID value of the currently active guest is
matched against the ASID tag in the TLB entry and the linear page
frame numbers are matched for a potential TLB hit. Thus TLB entries
belonging to different guests and to the hypervisor can coexist without
causing incorrect address translations, and these TLB entries can
persist during context switches. Without ASIDs, all TLB entries must
be flushed before a context switch and refilled later.
Use of ASIDs allows the hypervisor to make efficient use of the
processor's TLB capacity to improve guest performance under nested paging.
Figure 6: Address Space ID (ASID)
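The lookup rule can be sketched as a TLB keyed on both tags, so an entry hits only when the virtual page number and the ASID of the currently running guest both match; the class structure and names are illustrative, not the hardware layout:

```python
# An ASID-tagged TLB: translations for different guests coexist
# without flushes because the ASID is part of the match.

class AsidTlb:
    def __init__(self):
        self.entries = {}                     # (asid, vpn) -> pfn

    def fill(self, asid, vpn, pfn):
        self.entries[(asid, vpn)] = pfn

    def lookup(self, asid, vpn):
        return self.entries.get((asid, vpn))  # None means TLB miss

tlb = AsidTlb()
tlb.fill(asid=1, vpn=0x10, pfn=0xAA)          # guest 1's translation
tlb.fill(asid=2, vpn=0x10, pfn=0xBB)          # guest 2, same virtual page
assert tlb.lookup(1, 0x10) == 0xAA            # each guest sees its own mapping
assert tlb.lookup(2, 0x10) == 0xBB
assert tlb.lookup(3, 0x10) is None            # no flush on switch, just a miss
```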
4.2.8 Nested Paging Benchmarks
This whitepaper includes benchmark results collected from an early
revision of the AMD “Barcelona” family of processors and early
hypervisor implementations. It is possible that as hypervisors are
optimized for nested paging, overall performance will improve.
Furthermore, the performance may improve with enhancements to
micro architecture. As benchmark results from later revisions of
AMD64 Quad-Core processors and later hypervisor versions become
available, they will be added or linked to this document.
The benchmarks discussed here were collected on systems with the
following configuration:
Processors: 2-socket, 2.0GHz/1800MHz-NB Barcelona (Model 2350), 95W.
Memory: 32GB of 4GB DDR2-667MHz.
HBA: QLA2432 (dual-port PCIe 2Gb Fiber) HBA: 1 port used.
Disk Array: MSA1500, 1 controller, 1 fiber connection, 512MB cache,
15 × 73GB 15K-RPM SCSI drives.
Figure 7 shows benchmark data collected with and without nested
paging using an experimental build of VMware ESX. With nested
paging, the performance increased by approximately 14 and 58
percent for SQL DB Hammer and MS Terminal Services workloads
respectively.
Figure 7: Performance with and without nested paging with an
experimental build of VMware ESX hypervisor on AMD64 Quad-Core
Model 2350
Figure 8 shows Oracle 10G OLTP with and without nested paging with
RHEL 4.4 running under Xen 3.1. With nested paging, the performance
increased by approximately 94%. With para-virtualized (PV) drivers for
NIC and storage, the performance increased by 249%.
Figure 8: Oracle 10G OLTP performance with and without nested
paging with RHEL 5.5 under Xen on AMD64 Quad-Core Model 2350.
5. Conclusion
Nested Paging removes the virtualization overheads associated with
traditional software-based shadow paging algorithms. Together with
other architectural and micro-architectural enhancements in AMD64
Quad-Core processors, Nested Paging helps deliver performance
improvements, specifically for memory-intensive workloads with high
context-switch frequency. Servers based on these processors can
provide outstanding scalability, leading edge performance-per-watt,
high consolidation ratios and great headroom for server workloads.