VMware vSphere 5.5 with features like Flash Read Cache (vFRC) can improve performance of virtualized Oracle 12c databases without impacting reliability functions like VMotion. Testing showed vFRC decreased time to complete an OLAP workload by 14% and allowed seamless migration of vFRC-enabled VMs during VMotion. The combination of VMware, Cisco, and EMC technologies provided reliable virtualization and storage with increased Oracle 12c performance using vFRC.
The document discusses best practices for running Oracle databases on VMware virtual machines. It recommends: 1) carefully sizing workloads based on physical constraints; 2) optimizing ESXi host settings like disabling unnecessary processes, using large memory pages, and matching vCPUs to sessions; 3) optimizing the guest operating system; 4) using dedicated storage like SSDs and aligning datastores; and 5) separating infrastructure and VM network traffic using features like NIC teaming.
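The large-memory-pages recommendation above is usually verified inside the guest. As a rough illustration (not from the deck itself), the sketch below parses /proc/meminfo-style text for the standard HugePages fields; the sample text is made up, and on a real Linux guest you would read /proc/meminfo directly.

```python
# Illustrative sample; a real guest would use open("/proc/meminfo").read()
SAMPLE_MEMINFO = """\
MemTotal:       16384000 kB
HugePages_Total:    4096
HugePages_Free:     1024
Hugepagesize:       2048 kB
"""

def hugepage_summary(meminfo_text):
    """Return (total_pages, free_pages, page_size_kb) parsed from meminfo text."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        fields[key.strip()] = parts[0] if parts else "0"
    total = int(fields.get("HugePages_Total", 0))
    free = int(fields.get("HugePages_Free", 0))
    size_kb = int(fields.get("Hugepagesize", 0))
    return total, free, size_kb

total, free, size_kb = hugepage_summary(SAMPLE_MEMINFO)
used_mb = (total - free) * size_kb // 1024
print(f"huge pages: {total} total, {free} free, {used_mb} MB in use")
```

A zero HugePages_Total would indicate the guest has no huge pages reserved, so Oracle's SGA would fall back to regular 4 KB pages.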
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage (VMworld)
This document discusses architecting Oracle databases on VMware vSphere 5 with NetApp storage. It begins with objectives such as understanding how to provision NetApp storage for an Oracle database to take advantage of VMware and NetApp technologies. It then covers topics like using Oracle with vSphere 5, recommendations for vSphere 5, virtualizing Oracle with NetApp, reference architectures, and where to learn more. The presenters are experts on Oracle and virtualization technologies looking to provide best practices on implementing Oracle databases with VMware and NetApp.
VMworld 2013: VMware Disaster Recovery Solution with Oracle Data Guard and Si... (VMworld)
VMworld 2013
Kannan Mani, VMware
Brad Pinkston, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
High Availability Options for Oracle Enterprise Manager 12c Cloud Control (Simon Haslam)
This document discusses high availability options for Oracle Enterprise Manager 12c. It describes the architecture with a web tier, application tier, database, and agents. It outlines approaches for high availability including using a load balancer with two OMS nodes and a single database instance. Additional licensing is required for high availability configurations beyond a single database instance like RAC, Data Guard, or multiple OMS nodes. It concludes with a demonstration of simulating an OMS node or network failure in an environment with a load balancer and dual OMS nodes.
Configuring Oracle Enterprise Manager Cloud Control 12c for High Availability (Leighton Nelson)
This document discusses configuring Oracle Enterprise Manager Cloud Control 12c for high availability. It outlines three levels of high availability - Level 1 uses a single OMS and repository, Level 2 uses an active/passive OMS with a local Data Guard repository, and Level 3 uses multiple active/active OMS instances behind a load balancer with a RAC Data Guard repository. It provides recommendations for configuring high availability for the repository, OMS instances, agents, and software library. The presentation also covers backup and recovery procedures.
Uponor Exadata e-Business Suite Migration Case Study (Simo Vilmunen)
Uponor, a plumbing solutions company, migrated their Oracle E-Business Suite and Oracle Business Intelligence environments from traditional hardware to Oracle Exadata in order to improve performance, scalability, availability and manageability. The migration was completed within 3 months and resulted in significant performance gains across key business processes. Lessons learned included benefits of using Exadata-specific tools and configurations and importance of testing database-specific functionality during migration.
The document summarizes Walla's plan to upgrade its 600TB Solaris ZFS storage environment from Solaris 10 to Solaris 11. The plan involves migrating the ZFS pools from old SPARC hardware to new X86 hardware using ZFS send and receive. Pools larger than 4TB will be split into 4TB luns and rebuilt on the new hardware. All servers will be upgraded to Solaris 11.1 and ZFS user properties will be added to help manage the pools. The upgrades aim to replace aging hardware, improve storage utilization, and restore high availability.
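The 4 TB split described in the plan is simple arithmetic: pools at or under 4 TB map to one LUN, and larger pools are carved into 4 TB pieces plus any remainder. The sketch below is my own illustration of that rule; the function name and sizes are not from the document.

```python
def plan_luns(pool_tb, max_lun_tb=4):
    """Return the list of LUN sizes (in TB) a pool would be split into."""
    if pool_tb <= max_lun_tb:
        return [pool_tb]
    full, remainder = divmod(pool_tb, max_lun_tb)
    luns = [max_lun_tb] * int(full)
    if remainder:
        luns.append(remainder)
    return luns

# A 10 TB pool becomes two full 4 TB LUNs plus a 2 TB remainder;
# a 3 TB pool stays whole.
print(plan_luns(10))  # [4, 4, 2]
print(plan_luns(3))   # [3]
```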
VMworld 2013: VMware vCenter Site Recovery Manager – Solution Overview and Le... (VMworld)
VMworld 2013
Mauricio Barra, VMware
Thomas McQuillan, UnitedHealth Group
VMware Site Recovery Manager - Architecting a DR Solution - Best Practices (thephuck)
This was the slide deck from the Philadelphia VMUG User Conference for the VMware Site Recovery Manager - Architecting a DR Solution session on May 15th, 2014.
This document discusses considerations for planning Oracle VM 3 server pool deployments for scalability, availability, and reliability. It describes key concepts of Oracle VM 3 including Oracle VM Manager, Oracle VM Server, and server pools. Server pools group multiple physical servers with shared storage so virtual machines can run on any server and live migrate between servers. The document provides best practices for configuring server pools for high availability, including enabling high availability options, sizing the server pool file system, using live migration, ensuring excess pool capacity, and planning multiple pools for large infrastructures.
This document discusses optimizing Oracle databases that run on VMware virtual machines. It covers VMware's memory, CPU, and I/O resource management and provides recommendations for configuration. Key points include using memory reservations to avoid swapping, allocating enough vCPUs while avoiding overprovisioning, dedicating storage and enabling Storage I/O Control for I/O performance. Oracle RAC support on VMware requires Oracle 11.2.0.2 or higher.
Optimize Oracle on VMware, April 2011 (Guy Harrison)
- Virtualizing Oracle databases on VMware ESX requires careful configuration and monitoring of memory, CPU, and I/O resources to avoid performance issues.
- ESX uses memory overcommitment and techniques like ballooning and swapping to manage physical memory across VMs, which can cause problems if they page Oracle memory like the SGA.
- Multiple virtual CPUs on a VM must wait to be scheduled on the physical CPUs, showing up as ready time and impacting performance of CPU-intensive workloads.
- Dedicating storage, using raw device mapping (RDM), and tuning I/O can help optimize storage performance for virtualized Oracle databases.
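The ready time mentioned in the bullets is typically read as %RDY in esxtop, or as a "CPU ready" summation (milliseconds per sampling interval) in vCenter charts. As a hedged sketch of the usual conversion, assuming the default 20-second real-time sampling interval, the threshold and VM names below are illustrative only:

```python
def ready_percent(ready_ms, interval_s=20):
    """Percent of the sampling interval a vCPU spent waiting for a physical CPU."""
    return ready_ms / (interval_s * 1000) * 100

# Made-up samples: milliseconds of accumulated ready time per 20 s interval.
samples = {"oradb01": 2400, "oradb02": 150}
for vm, ready_ms in samples.items():
    pct = ready_percent(ready_ms)
    flag = "  <-- investigate" if pct >= 5 else ""
    print(f"{vm}: %RDY = {pct:.1f}{flag}")
```

With these numbers oradb01 spends 12% of each interval waiting to be scheduled, which is the kind of sustained ready time the bullets warn about for CPU-intensive workloads.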
This document discusses best practices for virtualizing databases. It begins with an introduction of the presenters, Michael Corey and Jeff Szastak, who are experts in virtualizing Oracle and SQL Server databases. The document then covers reasons for virtualizing databases, including flexibility, efficiency of resources, and cost savings. It provides examples of large production databases that have been successfully virtualized. The document discusses performance results from testing that show virtualized database performance is typically within 5% of physical performance. It provides recommendations for right-sizing resources and avoiding configurations like BIOS settings that could negatively impact performance. The overall message is that databases can be successfully virtualized while meeting service level agreements by following best practices.
vCenter Site Recovery Manager: Architecting a DR Solution (Rackspace)
VMware’s vCenter Site Recovery Manager is the market-leading disaster-recovery management product. It ensures the simplest and most reliable disaster protection for all virtualized applications. However, it is not a turn-key DR solution. Architecting your SRM solution requires deep thought and heavy planning. This presentation will help you with planning and architecting your SRM solution as well as addressing specific configuration and installation challenges. Our goal is to help you deploy and maintain a solid SRM solution to enable your DR Plan.
The document discusses upgrading from vSphere 5.x to vSphere 6.0. It covers the new vCenter Server 6.0 architecture including the Platform Services Controller. It discusses different upgrade paths such as an in-place upgrade versus a new deployment. It also provides guidance on planning the upgrade, including creating a compatibility matrix, testing plans, and readiness checks.
This document provides an overview of VMware Site Recovery Manager (SRM) and how it integrates with NetApp storage systems. It discusses:
1. The challenges of traditional disaster recovery and how SRM automates the DR process.
2. New features in SRM version 5 such as automated failback.
3. How NetApp systems like SnapMirror provide value in SRM environments through features like space-efficient DR testing using FlexClone.
4. The software and system requirements for using SRM with NetApp including required NetApp SnapMirror versions and supported platforms.
The document provides tips for improving performance and security in a vSphere 4.1 environment. It discusses new features in vSphere 4.1 related to networking, storage, memory compression, and management. It then outlines best practices for securing the virtual infrastructure, including using virtual networking segmentation, hardening ESXi hosts, protecting the vCenter management environment, and securing individual virtual machines. The document recommends configuration changes, tools, and resources to improve the security of the virtualization platform.
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases (VMworld)
This document provides an overview and agenda for a presentation on VMware Virtual SAN. It discusses key features of Virtual SAN including its software-defined storage approach and hybrid storage using SSD and HDD. Several use cases are reviewed like virtual desktop infrastructure, remote office/branch office, and DMZ/isolated environments. Best practices are also covered for various use cases around sizing, policies, and ready nodes. The document aims to introduce attendees to Virtual SAN capabilities and considerations for different deployment scenarios.
The document summarizes a company's experience migrating from vSphere 4.1 to 5.0. Key aspects of the migration included upgrading ESXi and vCenter licenses, performing a new vCenter installation with vCenter Heartbeat for high availability, migrating VMs between ESXi hosts using a "shuttle" host, and implementing post-migration tasks like applying updates and permissions. The migration addressed challenges like multiple environments and sites, production uptime needs, and ensuring a highly available vCenter.
Advanced Caching Techniques with Ehcache, BigMemory, Terracotta, and ColdFusion (ColdFusionConference)
Rob Brooks-Bilson is a senior director at Amkor Technology who has been involved with ColdFusion for 18 years. He is the author of two books on ColdFusion programming and an Adobe Community Professional for ColdFusion. The document outlines his agenda for a presentation on caching in ColdFusion, which will cover caching tags and functions, Ehcache, replicating caches, BigMemory Go, and distributed caching with Terracotta. It provides legal disclaimers about the third-party applications discussed and their lack of official Adobe support.
This document outlines the steps for building a SQL Server cluster for high availability, including planning considerations, required hardware, installing Windows clustering features, configuring storage, installing and configuring SQL Server across nodes, and testing the cluster configuration. Key aspects that are discussed include defining recovery time and point objectives, installing SQL Server using the "Create New Failover Cluster" option, installing SQL on each node to enable failover, and performing backups and restores from cluster-owned drives. Testing the applications on the clustered environment is also emphasized.
VMworld 2013
Kiran Madnani, VMware
Rawlinson Rivera, VMware
Azure Virtual Machines provide choice, scalability, and reliability. They can be provisioned from the VM Gallery, custom images, or templates. VM extensions allow post-deployment configuration. Availability sets distribute VMs across hardware to ensure uptime. Premium storage supports high performance workloads. Scale sets deploy identical VMs and scale capacity automatically with load balancers. Terraform codifies cloud APIs into declarative files that can deploy and manage Azure resources as code.
The document discusses configuring VM storage profiles in vSphere 5 to help automate the deployment of virtual machines to the appropriate datastores. It involves creating storage capabilities to define storage characteristics, then using those capabilities to create VM storage profiles. These profiles are applied to virtual machine disks and clusters to ensure VMs are placed according to requirements. The process reduces administration and helps prevent misconfigurations when deploying new VMs.
This document discusses using SQL Server in a clustered environment for high availability and fault tolerance. It describes different hardware architectures for clustered SQL Server setups including single-tier, two-tier, and scalable multi-tier architectures. It also covers cluster server configurations, requirements for multi-node clusters, and how to write cluster-aware applications. Resources for clustering SQL Server on Windows are provided.
Virtualizing Oracle Databases with VMware provides an overview of virtualizing databases with VMware. Key points include:
1. VMware virtualization enables database consolidation and migration capabilities like VMotion for high availability and load balancing.
2. Performance studies show virtualized databases achieve near-native performance and DRS helps balance workloads across hosts for better performance and response times.
3. Best practices for deploying databases in virtual environments include choosing appropriate hardware, configuring storage, tuning the virtual machine configuration and operating system, database configuration, and performance monitoring.
The document describes the creation of the Burner Board, a programmable skateboard designed for Burning Man. Key aspects included dust-proofing the electronics, installing a powerful motor and large battery, and developing an addressable LED surface and mobile app for controlling lights and sounds. The project brought together a global team who collaborated through challenges to complete the board on time. It is presented as an example of how hardware projects can be fun and powerful when driven by a clear vision and belief, and supported by a dedicated team.
MySQL's 2008-2009 roadmap included:
(1) Enhancements to MySQL Server, including new storage engines, partitioning, and performance improvements.
(2) New tools like MySQL Load Balancer and Query Analyzer to help customers scale their applications and diagnose performance issues.
(3) Ongoing development of MySQL Cluster and features for the MySQL 6.0 release focused on subquery optimization, foreign keys, and online backup/restore.
This document provides an overview of OpenSolaris and its key features and enhancements in the 2009.06 release. It summarizes OpenSolaris capabilities for the desktop, developers, and datacenter. For the datacenter, it highlights performance improvements, installation tools, networking virtualization, and support offerings. For developers, it outlines tools and libraries to simplify building applications. For the desktop, it describes updated desktop features and integrated applications.
Solaris 8 Containers and Solaris 9 Containers Customer Presentation (xKinAnx)
Solaris 8 Containers and Solaris 9 Containers allow organizations to consolidate multiple legacy Solaris 8 and 9 application environments onto newer Solaris 10 hardware. This provides benefits like reduced costs, improved utilization, and a bridging technology to help migrate applications to Solaris 10 at each organization's own pace while reducing risks. The technology uses Solaris Containers, BrandZ, and other Solaris 10 features to virtualize and run the legacy environments in a compatible way on Solaris 10 systems. It provides a way to phase upgrades by initially deploying applications in Containers and then later redeploying directly on Solaris 10.
This document summarizes Oracle's virtualization product Oracle VM. It outlines Oracle VM's key features such as running Linux and Windows virtual machines, live migration, and integrated management. It also discusses Oracle VM's testing and support for Oracle databases, middleware, and applications. The document promotes Oracle VM's benefits like consolidation, high availability, and lower total cost of ownership.
This document provides an overview of topics covered in a Red Hat System Administration training course, including:
1) Managing files graphically with Nautilus and getting help in both graphical and textual environments.
2) Managing physical storage by creating and formatting partitions, and managing logical volumes using LVM concepts.
3) Monitoring system resources such as processes, disk usage, and software packages.
4) Configuring networking and managing users, groups, and permissions.
See how Dell works efficiently with VMware to provide innovative architectures that are scalable and flexible. Learn about servers, networking, storage, and comprehensive systems management.
- Oracle VM is Oracle's virtualization software that allows multiple guest operating systems to run concurrently on a single physical host.
- Oracle VM is fully supported and certified for running Oracle products in virtualized environments, unlike other virtualization solutions.
- Running Oracle databases and applications on Oracle VM provides benefits like server consolidation, rapid provisioning using VM templates, high availability with features like live migration and auto-restart.
This document discusses VMware performance troubleshooting. It covers topics like root cause analysis, performance characteristics of CPU, memory, disk and networking, and tools like ESXTop, vm-support and the service console. It provides guidelines on capacity planning, virtual machine optimization and design best practices.
Virtualization allows consolidation of servers to improve efficiency and reduce costs. It addresses challenges like high server maintenance costs, power and cooling expenses from datacenter sprawl, and limited space for physical expansion. Solaris virtualization technologies like containers, logical domains, and the xVM hypervisor enable consolidation while maintaining performance and security. They provide flexibility to adapt resource allocation to business needs and improve resilience against failures or disasters.
This document provides an overview of vMotion capabilities in VMware vSphere, including:
- Types of virtual machine migrations like vMotion, Storage vMotion, and shared-nothing vMotion.
- Requirements for vMotion like compatible CPUs and network connectivity.
- Enhanced features in vSphere 6 like separate vMotion networking stacks and long distance vMotion.
- Best practices for vMotion planning, limitations, and troubleshooting migration errors.
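Under the hood, live migration of the kind vMotion performs is commonly described as iterative pre-copy: copy all memory while the VM keeps running, then re-copy the pages dirtied during the previous pass, and switch over once the remaining dirty set is small. A minimal simulation of that loop (illustrative parameters only, not VMware's actual algorithm):

```python
def precopy_rounds(total_pages, dirty_rate, switchover_threshold, max_rounds=30):
    """Simulate iterative pre-copy: each round re-sends the pages that were
    dirtied while the previous round was copying. dirty_rate is the fraction
    of in-flight pages dirtied per round; when the remainder reaches the
    switchover threshold, the VM is briefly stunned and the rest is copied."""
    to_send, rounds = total_pages, 0
    while to_send > switchover_threshold and rounds < max_rounds:
        rounds += 1
        to_send = int(to_send * dirty_rate)
    return rounds, to_send

# Illustrative numbers: a VM with 1M memory pages dirtying 10% of
# in-flight pages per copy round.
rounds, remaining = precopy_rounds(1_000_000, dirty_rate=0.1,
                                   switchover_threshold=1_000)
print(rounds, remaining)
```

The `max_rounds` cap matters in practice: a workload that dirties memory faster than the network can copy it never converges, which is why migration guides warn about very write-heavy VMs.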
The document provides an overview of virtual networking concepts in VMware vSphere, including:
- Types of virtual switch connections like virtual machine port groups and VMkernel ports
- Standard switches and distributed switches
- VLAN configurations and tagging
- Network adapter and switch port policies for security, traffic shaping, and failover
- Troubleshooting tools like ESXCLI, TCPDUMP and networking commands
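The teaming and failover policies in that list can be pictured as a deterministic mapping from a traffic key to an uplink, with the remaining uplinks absorbing traffic on failure. A sketch in the spirit of the "route based on originating virtual port ID" policy (simplified; not ESXi code):

```python
def choose_uplink(port_id, uplinks, failed=frozenset()):
    """Pick a physical uplink for a VM's virtual port: hash the port ID
    across whichever uplinks are still healthy, so traffic redistributes
    automatically when a NIC in the team fails."""
    active = [u for u in uplinks if u not in failed]
    if not active:
        raise RuntimeError("no active uplinks in team")
    return active[port_id % len(active)]

team = ["vmnic0", "vmnic1"]
print(choose_uplink(7, team))                     # normal operation
print(choose_uplink(7, team, failed={"vmnic1"}))  # failover after link loss
```

Because the choice is a pure function of the port ID and the healthy-uplink set, a VM keeps the same uplink until a failure forces a change, which avoids MAC-address flapping on the physical switch.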
This document provides an overview and introduction to virtual storage concepts in VMware vSphere, including NFS, iSCSI, VMFS, and Virtual SAN datastores. It discusses storage protocols, multipathing, and best practices for configuring and managing different types of datastores. The document is divided into several sections covering storage concepts, iSCSI, NFS, VMFS, and Virtual SAN datastores.
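Of the multipathing policies mentioned, round robin is the easiest to illustrate: I/Os rotate across the available paths after a fixed number of operations. A simplified model (path names are illustrative; ESXi's policy rotates after an IOPS limit, 1,000 by default):

```python
class RoundRobinPSP:
    """Simplified round-robin path selection: send a burst of I/Os down one
    path, then rotate to the next."""
    def __init__(self, paths, iops_limit=1000):
        self.paths = list(paths)
        self.iops_limit = iops_limit
        self.current = 0
        self.count = 0

    def next_path(self):
        if self.count >= self.iops_limit:       # burst finished: rotate
            self.current = (self.current + 1) % len(self.paths)
            self.count = 0
        self.count += 1
        return self.paths[self.current]

# A small limit makes the rotation visible:
psp = RoundRobinPSP(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"], iops_limit=2)
print([psp.next_path() for _ in range(5)])
```

Lowering the per-path burst spreads load more evenly across storage ports at the cost of slightly more path-switching overhead, which is why the limit is tunable.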
The document provides an introduction to VMware vSphere distributed switches. It lists the benefits of distributed switches over standard switches, describes the distributed switch architecture, and discusses how to create, manage, and configure distributed switches and their properties. It also covers topics like distributed port groups, VMkernel networking, NetFlow, private VLANs, and troubleshooting distributed switch issues.
This document provides an overview of VMware vSphere Update Manager and host profiles. It discusses how vSphere Update Manager can be used to centrally manage patches and updates for ESXi hosts and virtual machines. Key capabilities of vSphere Update Manager include automated patch downloading, creation of baselines and groups, scanning systems for compliance, and remediating non-compliant systems. The document also discusses how host profiles provide a mechanism for centralized host configuration management through the creation of profiles from reference hosts and attaching other hosts to profiles.
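The baseline-and-scan workflow reduces to set arithmetic: a baseline is the set of required patches, a host is compliant when it has all of them, and remediation installs the difference. A sketch with hypothetical patch IDs:

```python
def scan(baseline, installed):
    """Compliance scan: which baseline patches is the host missing?"""
    return sorted(set(baseline) - set(installed))

def remediate(installed, missing):
    """Remediation: apply the missing patches (modeled as a set union)."""
    return set(installed) | set(missing)

baseline = {"ESXi-patch-001", "ESXi-patch-002"}   # hypothetical patch IDs
host = {"ESXi-patch-001"}

missing = scan(baseline, host)
print(missing)                                    # host is non-compliant
print(scan(baseline, remediate(host, missing)))   # empty list: compliant
```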
Oracle Solaris Simple, Flexible, Fast: Virtualization in 11.3 – OTN Systems Hub
Oracle Solaris
Simple, Flexible, Fast:
Virtualization in 11.3
Duncan Hardie – Principal Product Manager
Edward Pilatowicz – Senior Principal Software Engineer
Oracle Solaris
June 14, 2016
This document provides an overview of VMware virtualization solutions including ESXi, vSphere, and vCenter. It describes what virtualization and hypervisors are, lists VMware's product lines, and summarizes key features and capabilities of ESXi, vSphere, and vCenter such as centralized management, monitoring, high availability, and scalability.
The document is about a tutorial on VMware performance for advanced users. It discusses:
- Using a combination of introductory vSphere internals and performance analysis techniques to learn how to interpret metrics and triage performance problems.
- Topics that will be covered include performance monitoring, CPU, memory, I/O and storage, networking, and applications.
- The objective is for attendees to learn how to be practitioners of performance diagnosis and capacity planning with vSphere.
Streamline operations with new and updated VMware vSphere 8.0 features on 16t... – Principled Technologies
By using the latest software and Dell PowerEdge servers in your VMware vSphere environment, you can provide your data center administrators with new or updated tools that simplify routine tasks in both initial host setup and ongoing monitoring. In our exploration of the latest features in vSphere 8.0 Lifecycle Manager, we found that vSphere 8.0 on latest-gen Dell PowerEdge servers offers advantages compared to the previous generation, which may make an infrastructure update worth your while. By introducing vSphere Configuration Profiles and providing simpler image updates to vSphere clusters, VMware vSphere 8.0 on latest-generation Dell PowerEdge servers can help streamline operations for your administrative staff.
White paper: IBM FlashSystems in VMware Environments – thinkASG
Drive performance in VMware environments with IBM FlashSystem. IBM flash storage delivers extreme, scalable performance for virtualized infrastructure.
VMware vSphere vMotion: 5.4 times faster than Hyper-V Live Migration – VMware
Businesses using a virtualized infrastructure have many reasons to move active virtual machines (VMs) from one physical server to another. Whether the migrations are for routine maintenance, balancing performance needs, work distribution (consolidating VMs onto fewer servers during non-peak hours to conserve resources), or another reason, the best virtual infrastructure platform executes the move as quickly as possible and with minimal impact to end users.
We tested two competing features that move active VMs from one server to another, VMware vSphere 5 vMotion and Microsoft® Windows Server® 2008 R2 SP1 Hyper-V Live Migration. While both perform these moves with no VM downtime, in our testing the VMware solution did so faster, with greater application stability, and with less impact to application performance – clearly showing that not all live migration technologies are the same. VMware also holds an enormous advantage in concurrency: VMware vSphere 5 can move eight VMs at a time while a Microsoft Hyper-V cluster node can take part only as the source or destination in one live migration at a time. In our two test scenarios, the VMware vMotion solution was up to 5.4 times faster than the Microsoft Hyper-V Live Migration solution.
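The concurrency advantage compounds with the per-migration speedup when evacuating a host. As a back-of-the-envelope model (per-VM times are hypothetical, chosen only to reflect the reported 5.4x ratio):

```python
def evacuation_time(num_vms, per_vm_seconds, concurrency):
    """Idealized host-evacuation time: VMs move in waves of `concurrency`,
    each wave taking one per-VM migration time."""
    waves = -(-num_vms // concurrency)  # ceiling division
    return waves * per_vm_seconds

# Hypothetical per-VM times in the 5.4x ratio the tests reported:
print(evacuation_time(8, per_vm_seconds=20, concurrency=8))   # one wave
print(evacuation_time(8, per_vm_seconds=108, concurrency=1))  # serial moves
```

Under these assumed numbers, draining eight VMs takes one 20-second wave with eight-way concurrency versus eight back-to-back 108-second migrations serially, which is the practical consequence of the concurrency gap the tests describe.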
Most medium and large-sized IT organizations have deployed several generations of virtualized servers, growing more comfortable with their performance and reliability with each deployment. As IT organizations increased virtual machine (VM) density, they reached the limits of vSphere software, server memory, CPU, and I/O.
A new VM engine is now available and this document describes how it can help IT organizations maximize use of their servers running VMware® vSphere® 5.1 (henceforth referred to as vSphere 5.1).
VMworld 2013: What's New in vSphere Platform & Storage – VMworld
VMworld 2013
Kyle Gleed, VMware
Cormac Hogan, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The IBM XIV Gen3 Storage System provides several key integrations with VMware vSphere that improve performance and management:
1) It supports VAAI primitives like full copy offload and hardware-assisted locking that improve scalability and reduce host processing.
2) It integrates with vSphere APIs like VASA to provide real-time storage information and alerts to vCenter.
3) It includes a plug-in for vCenter that allows storage provisioning, mapping, replication and snapshot management from within vCenter.
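Hardware-assisted locking (the ATS primitive in item 1) replaces whole-LUN SCSI reservations with an atomic compare-and-write against a small on-disk lock record, so hosts contend only for the metadata they actually touch. A conceptual sketch, with a thread lock standing in for the atomicity the storage array guarantees:

```python
import threading

class AtsLock:
    """Conceptual VAAI ATS lock: a host acquires an on-disk lock only if the
    lock record still holds the value it last read (compare-and-write)."""
    def __init__(self):
        self._value = 0                   # 0 = unlocked
        self._atomic = threading.Lock()   # stand-in for array atomicity

    def compare_and_write(self, expected, new):
        with self._atomic:
            if self._value != expected:
                return False              # lost the race; re-read and retry
            self._value = new
            return True

lock = AtsLock()
print(lock.compare_and_write(expected=0, new=101))  # host 101 acquires
print(lock.compare_and_write(expected=0, new=102))  # host 102 must retry
```

Because only the loser of a race retries, and only on that one lock record, metadata updates on one VM no longer stall unrelated I/O on the same datastore, which is where the scalability gain comes from.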
Virtualization performance: VMware vSphere 5 vs. Red Hat Enterprise Virtualiz... – Principled Technologies
Using a hypervisor that offers better resource management and scalability can deliver excellent virtual machine performance on your servers. In our testing, VMware vSphere 5 allowed our host’s virtual machines to outperform those running on RHEV 3 by over 28 percent in total OPM performance. Furthermore, VMware vSphere 5 performance continued to improve when going from 39 VMs to 42 VMs: Total performance for VMware vSphere 5 increased by 2.8 percent, whereas it decreased by 7.2 percent with RHEV 3.
With the capabilities and scalability that VMware vSphere 5 offers, you are able to utilize the full capacity of your servers with confidence and purchase fewer servers to handle workload spikes; this can translate to fewer racks in the data center, lower costs for your business, and more consistent overall application performance.
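The percent figures above are straightforward to verify. With hypothetical absolute OPM totals chosen only to match the reported trends (the paper's raw numbers are not quoted here), the arithmetic works out as claimed, including the more-than-28-percent gap between the two platforms:

```python
def pct_change(before, after):
    """Percent change from `before` to `after`."""
    return (after - before) / before * 100

# Hypothetical absolute OPM totals consistent with the reported trends:
vsphere_39, vsphere_42 = 100_000, 102_800   # vSphere 5 at 39 and 42 VMs
rhev_39, rhev_42 = 78_000, 72_384           # RHEV 3 at 39 and 42 VMs

print(round(pct_change(vsphere_39, vsphere_42), 1))  # +2.8 when adding VMs
print(round(pct_change(rhev_39, rhev_42), 1))        # -7.2 when adding VMs
print(round(pct_change(rhev_39, vsphere_39), 1))     # "over 28 percent" gap
```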
NetWorker integration for optimal performance – Mohamed Sohail
In large, modern data centers, integrating multiple products—whether from the same vendor or multiple vendors—to form a stable, consistent workflow is a major challenge. In their award-winning Knowledge Sharing article, Mohamed Sohail and Shareef Bassiouny offer some best practices for integrating NetWorker and different EMC products and present some best practices for the optimum performance of this integration.
Blue Chip is a leading UK provider of IT infrastructure solutions established in 1992. It provides systems design, implementation, support and training services across the UK from multiple locations. Quest's vRanger 5.0 backup and replication software includes new features like integrated backup and replication, cataloging for faster restores, and Linux file-level restore. The software also improves vReplicator integration and supports NFS repositories.
Track 1: Virtualizing Critical Applications with VMware vSphere by Roshan Shetty – EMC Forum India
Virtualizing Critical Applications with vSphere 5 provides concise summaries of the key enhancements in vSphere 5 that enable virtualizing even the most critical applications. These include support for larger virtual machines with up to 32 vCPUs and 1 TB of RAM, four times larger than before. It also improves availability, storage, and network services with features like Storage DRS, Profile-Driven Storage, and Network I/O Control that provide performance guarantees and help prevent resource starvation issues. The document also highlights how vSphere 5 simplifies infrastructure deployment and management with capabilities such as Auto Deploy, vCenter Server Appliance, and the new Web Client.
Mythbusting goes virtual: What's new in vSphere 5.1 – Eric Sloof
The document summarizes new features in vSphere 5.1 that address common myths about virtualization limitations. It discusses that vMotion can now occur without shared storage using enhanced vMotion, vSphere management no longer requires Windows with the new web client, vSphere Replication provides site disaster recovery without SRM, the VMFS host limit for linked clones increased from 8 to 32, and distributed switch configurations can now be backed up and restored.
VMworld 2013: Maximize Database Performance in Your Software-Defined Data Center – VMworld
VMworld 2013
Mark Achtemichuk, VMware
Michael Webster, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
With today’s technologies, you can run large mission-critical databases on virtual machines safely and securely. Creating virtual servers to host databases, instead of running them on physical servers, has the potential to provide a number of time- and cost-saving benefits to your organization, as you can take advantage of the many powerful features and tools that VMware vSphere offers, including vMotion, DRS, and much more. We designed this functional stress test to exercise the complete hardware and software stack, which comprised VMware vSphere, Cisco UCS, and EMC VMAX Cloud Edition storage. The EMC VMAX Cloud Edition absorbed the workload’s requested storage throughput. The Cisco B200 M3 servers supplied the processing power necessary for the workload. The Cisco Nexus switches, despite operating at only a fraction of their maximum capacity in this test, allowed for the required network throughput, including the vMotion traffic. This physical infrastructure, used in concert with VMware vSphere, allowed vMotions to occur and the RAC application to remain active despite the migration scenarios we tested, some of which were more extreme than database administrators would typically encounter. The tests experienced no false ejections or RAC cluster fencing operations, showing the mature capabilities of VMware vMotion technology.
As we showed in our tests using Cisco UCS server hardware and EMC VMAX Cloud Edition storage, VMware vSphere with vMotion made it easy to shift large databases and other workloads from one server to another in a cluster, without application downtime. By choosing to run your large databases on VMware vSphere virtual machines, you can reap the benefits of VMware vMotion for the ultimate agility and flexibility in managing your mission-critical database workloads.
Business-critical applications on VMware vSphere 6, VMware Virtual SAN, and V... – Principled Technologies
The document summarizes performance testing of VMware vSphere 6, VMware Virtual SAN, and VMware NSX running business-critical applications. In single-site testing, the solution delivered over 189,000 IOPS and 5ms average read latency under heavy workload. In two-site testing, it live migrated all VMs between sites in under 9 minutes with no downtime or performance degradation for applications. The software-defined datacenter solution provided reliable performance and business continuity for critical workloads.
Comparison of Citrix XenServer 6.2 and VMware vSphere 5.1 – Lorscheider Santiago
The document compares Citrix XenServer 6.2 and VMware vSphere 5.1. It discusses that both pioneered server virtualization and how their architectures have evolved. It then analyzes key areas like memory management, storage management, infrastructure management, and disaster recovery planning. It notes XenServer uses open standards like VHD and avoids proprietary formats. For desktop virtualization, it highlights XenServer integrates with Citrix XenDesktop and its Provisioning Services for template management to efficiently deploy golden images at scale.
Hyper-V provides competitive advantages over VMware in the areas of core virtualization, private cloud infrastructure, scalability, storage capabilities, networking, security, mobility and high availability. It offers higher scalability, larger virtual machines and disks, more storage features, an extensible virtual switch, encryption, and live migration capabilities without additional licensing costs compared to VMware.
Upgrade your cloud infrastructure with Dell PowerEdge R760 servers and VMware... – Principled Technologies
Compared to a cluster of PowerEdge R750 servers running VMware Cloud Foundation 4.5
Conclusion
If your company is struggling with underperforming infrastructure, upgrading to 16th Generation Dell PowerEdge servers running VCF 5.1 could be just what you need to handle more database throughput and reduce vSAN latencies. We found that a Dell PowerEdge R760 server cluster running VCF 5.1 processed over 78 percent more TPM and 79 percent more NOPM than a Dell PowerEdge R750 server cluster running VCF 4.5. It’s also worth noting that the PowerEdge R750 cluster bottlenecked on vSAN storage, with max write latency at 8.9ms. For reference, the PowerEdge R760 cluster clocked in at 3.8ms max write latency. This higher latency is due in part to the single disk group per host on the moderately configured PowerEdge R750 cluster, while the better-configured PowerEdge R760 cluster supported four disk groups per host. As an additional benefit to IT admins, we also found that the embedded VMware Aria Operations adapter provided useful infrastructure insights.
Improve performance and gain room to grow by easily migrating to a modern Ope... – Principled Technologies
We deployed this modern environment, then migrated database VMs from legacy servers and saw performance improvements that support consolidation
Conclusion
If your organization’s transactional databases are running on gear that is several years old, you have much to gain by upgrading to modern servers with new processors and networking components and an OpenShift environment. In our testing, a modern OpenShift environment with a cluster of three Dell PowerEdge R7615 servers with 4th Generation AMD EPYC processors and high-speed 100Gb Broadcom NICs outperformed a legacy environment with MySQL VMs running on a cluster of three Dell PowerEdge R7515 servers with 3rd Generation AMD EPYC processors and 25Gb Broadcom NICs. We also easily migrated a VM from the legacy environment to the modern environment, with only a few steps required to set up and less than ten minutes of hands-on time. The performance advantage of the modern servers would allow a company to reduce the number of servers necessary to perform a given amount of database work, thus lowering operational expenditures such as power and cooling and IT staff time for maintenance. The high-speed 100Gb Broadcom NICs in this solution also give companies better network performance and networking capacity to grow as they embrace emerging technologies such as AI that put great demands on networks.
Similar to: Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced features Flash Read Cache and vMotion
Help skilled workers succeed with Dell Latitude 7030 and 7230 Rugged Extreme ... – Principled Technologies
Instead of equipping consumer-grade tablets with rugged cases
Conclusion
In our hands-on testing, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets showed that they are better equipped to help skilled workers than consumer-grade Apple iPad Pro and Samsung Galaxy Tab S9 tablets in multiple ways. They provide more built-in capabilities and features than the consumer-grade tablets we tested. And, while they were more expensive than the rugged-case fortified consumer-grade options we tested, their rugged claims were more than skin deep.
In our performance and durability tests, the Dell Latitude 7030 and 7230 Rugged Extreme Tablets performed better in demanding manufacturing, logistics, and field service environments than consumer-grade tablets with rugged cases. Both Rugged Extreme Tablets, with their greater thermal range, suffered less performance degradation in extreme temperatures, never failed and were merely scuffed after 26 hard drops, survived a 10-minute drenching with no ill effects, and were easier to view in direct sunlight than the Apple iPad Pro and Samsung Galaxy Tab S9 tablets.
Bring ideas to life with the HP Z2 G9 Tower Workstation - Infographic – Principled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to a similarly configured Dell Precision 3660 Tower Workstation in its out-of-box performance mode
Investing in GenAI: Cost‑benefit analysis of Dell on‑premises deployments vs.... – Principled Technologies
Conclusion
Diving into the world of GenAI has the potential to yield a great many benefits for your organization, but it first requires consideration for how best to implement those GenAI workloads. Whether your AI goals are to create a chatbot for online visitors, generate marketing materials, aid troubleshooting, or something else, implementing an AI solution requires careful planning and decision-making. A major decision is whether to host GenAI in the cloud or keep your data on premises. Traditional on-premises solutions can provide superior security and control, a substantial concern when dealing with large amounts of potentially sensitive data. But will supporting a GenAI solution on site be a drain on an organization’s IT budget?
In our research, we found that the value proposition is just the opposite: Hosting GenAI workloads on premises, either in a traditional Dell solution or using a managed Dell APEX pay-per-use solution, could significantly lower your GenAI costs over 3 years compared to hosting these workloads in the cloud. In fact, we found that a comparable AWS SageMaker solution would cost up to 3.8 times as much and an Azure ML solution would cost up to 3.6 times as much as GenAI on a Dell APEX pay-per-use solution. These results show that organizations looking to implement GenAI and reap the business benefits to come can find many advantages in an on-premises Dell solution, whether they opt to purchase and manage it themselves or choose a subscription-based Dell APEX pay-per-use solution. Choosing an on-premises Dell solution could save your organization significantly over hosting GenAI in the cloud, while giving you control over the security and privacy of your data as well as any updates and changes to the environment, and while ensuring your environment is managed consistently.
Workstations powered by Intel can play a vital role in CPU-intensive AI devel... – Principled Technologies
In three AI development workflows, Intel processor-powered workstations delivered strong performance, without using their GPUs, making them a good choice for this part of the AI process
Conclusion
We executed three AI development workflows on tower workstations and mobile workstations from three vendors, with each workflow utilizing only the Intel CPU cores, and found that these platforms were suitable for carrying out various AI tasks. For two of the workflows, we learned that completing the tasks on the tower workstations took roughly half as much time as on the mobile workstations. This supports the idea that the tower workstations would be appropriate for a development environment for more complex models with a greater volume of data and that the mobile workstations would be well-suited for data scientists fine-tuning simpler models. In the third workflow, we explored tower workstation performance with different precision levels and learned that using 16-bit floating point precision allowed the workstations to execute the workflow in less time and also reduced memory usage dramatically. For all three AI workflows we executed, we consider the time the workstations needed to complete the tasks to be acceptable, and believe that these workstations can be appropriate, cost-effective choices for these kinds of activities.
Enable security features with no impact to OLTP performance with Dell PowerEd... – Principled Technologies
Get comparable online transaction processing (OLTP) performance with or without enabling AMD Secure Memory Encryption and AMD Secure Encrypted Virtualization - Encrypted State
Conclusion
You’ve likely already implemented many security measures for your servers, which may include physical security for the data center, hardware-level security, and software-level security. With the cost of data breaches high and still growing, however, wise IT teams will consider what additional security measures they may be able to implement.
AMD SME and SEV-ES are technologies that are already available within your AMD processor-powered 16th Generation Dell PowerEdge servers—and in our testing, we saw that they can offer extra layers of security without affecting performance. We compared the online transaction processing performance of a Dell PowerEdge R7625 server, powered by AMD EPYC 9274F processors, with and without these two security features enabled. We found that enabling AMD Secure Memory Encryption and Secure Encrypted Virtualization-Encrypted State did not impact performance at all.
If your team is assessing areas where you might be able to enhance security—without paying a large performance cost—consider enabling AMD SME and AMD SEV-ES in your Dell PowerEdge servers.
Improving energy efficiency in the data center: Endure higher temperatures wi... – Principled Technologies
In high-temperature test scenarios, a Dell PowerEdge HS5620 server continued running an intensive workload without component warnings or failures, while a Supermicro SYS‑621C-TN12R server failed
Conclusion: Remain resilient in high temperatures with the Dell PowerEdge HS5620 to help increase efficiency
Increasing your data center’s temperature can help your organization make strides in energy efficiency and cooling cost savings. With servers that can hold up to these higher everyday temperatures—as well as high temperatures due to unforeseen circumstances—your business can continue to deliver the performance your apps and clients require.
When we ran an intensive floating-point workload on a Dell PowerEdge HS5620 and a Supermicro SYS-621C-TN12R in three scenario types simulating typical operations at 25°C, a fan failure, and an HVAC malfunction, the Dell server experienced no component warnings or failures. In contrast, the Supermicro server experienced warnings in all three scenario types and experienced component failures in the latter two tests, rendering the system unusable. When we inspected and analyzed each system, we found that the Dell PowerEdge HS5620 server’s motherboard layout, fans, and chassis offered cooling design advantages.
For businesses aiming to meet sustainability goals by running hotter data centers, as well as those concerned with server cooling design, the Dell PowerEdge HS5620 is a strong contender to take on higher temperatures during day-to-day operations and unexpected malfunctions.
Dell APEX Cloud Platform for Red Hat OpenShift: An easily deployable and powe... – Principled Technologies
The 4th Generation Intel Xeon Scalable processor‑powered solution deployed in less than two hours and ran a Kubernetes container-based generative AI workload effectively
Conclusion
The appeal of incorporating GenAI into your organization’s operations is likely great. Getting started with an efficient solution for your next LLM workload or application can seem daunting because of the changing hardware and software landscape, but Dell APEX Cloud Platform for Red Hat OpenShift powered by 4th Gen Intel Xeon Scalable processors could provide the solution you need. We started with a Dell Validated Design as a reference, and then went on to modify the deployment as necessary for our Llama 2 workload. The Dell APEX Cloud Platform for Red Hat OpenShift solution worked well for our LLM, and by using this deployment guide in conjunction with numerous Dell documents and some flexibility, you could be well on your way to innovating your next GenAI breakthrough.
Realize 2.1X the performance with 20% less power with AMD EPYC processor-back... – Principled Technologies
Three AMD EPYC processor-based two-processor solutions outshined comparable Intel Xeon Scalable processor-based solutions by handling more Redis workload transactions and requests while consuming less power
Conclusion
Performance and energy efficiency are significant factors in processor selection for servers running data-intensive workloads, such as Redis. We compared the Redis performance and energy consumption of a server cluster in three AMD EPYC two-processor configurations against that of a server cluster in two Intel Xeon Scalable two-processor configurations. In each of our three test scenarios, the server cluster backed by AMD EPYC processors outperformed the server cluster backed by Intel Xeon Scalable processors. In addition, one of the AMD EPYC processor-based clusters consumed 20 percent less power than its Intel Xeon Scalable processor-based counterpart. Combining these measurements gave us power efficiency metrics that demonstrate how valuable AMD EPYC processor-based servers could be—you could see better performance per watt with these AMD EPYC processor-based server clusters and potentially get more from your Redis or other data intensive applications and workloads while reducing data center power costs.
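The performance-per-watt comparison combines the two measurements into one metric. With hypothetical figures chosen only to mirror the headline claim (2.1x the throughput at 20 percent less power), the relative efficiency falls out directly:

```python
def perf_per_watt(requests_per_sec, watts):
    """Power-efficiency metric: throughput delivered per watt consumed."""
    return requests_per_sec / watts

# Hypothetical figures mirroring the headline claim: 2.1x the throughput
# at 20 percent less power for the AMD EPYC-based cluster.
amd = perf_per_watt(2_100_000, 800)
intel = perf_per_watt(1_000_000, 1000)
print(amd / intel)  # 2.625 = 2.1 / 0.8: relative performance per watt
```

The takeaway is that the efficiency ratio multiplies the throughput gain by the power saving, so even a modest power reduction amplifies a performance lead.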
Boost PC performance: How more available memory can improve productivity – Principled Technologies
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience
Conclusion
When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg... – Principled Technologies
A Principled Technologies deployment guide
Conclusion
Deploying VMware Cloud Foundation 5.1 on next gen Dell PowerEdge servers brings together critical virtualization capabilities and high-performing hardware infrastructure. Relying on our hands-on experience, this deployment guide offers a comprehensive roadmap that can guide your organization through the seamless integration of advanced VMware cloud solutions with the performance and reliability of Dell PowerEdge servers. In addition to the deployment efficiency, the Cloud Foundation 5.1 and PowerEdge solution delivered strong performance while running a MySQL database workload. By leveraging VMware Cloud Foundation 5.1 and PowerEdge servers, you could help your organization embrace cloud computing with confidence, potentially unlocking a new level of agility, scalability, and efficiency in your data center operations.
Based on our research using publicly available materials, it appears that Dell supports nine of the ten PC security features we investigated, HP supports six of them, and Lenovo supports three features.
Increase security, sustainability, and efficiency with robust Dell server man... – Principled Technologies
Compared to the Supermicro management portfolio
Conclusion
Choosing a vendor for server purchases is about more than just the hardware platform. Decision-makers must also consider more long-term concerns, including system/data security, energy efficiency, and ease of management. These concerns make the systems management tools a vendor offers as important as the hardware.
We investigated the features and capabilities of server management tools from Dell and Supermicro, comparing Dell iDRAC9 against Supermicro IPMI for embedded server management and Dell OpenManage Enterprise and CloudIQ against Supermicro Server Manager for one-to-many device and console management and monitoring. We found that the Dell management tools provided more comprehensive security, sustainability, and management/monitoring features and capabilities than Supermicro servers did. In addition, Dell tools automated more tasks to ease server management, resulting in significant time savings for administrators versus having to do the same tasks manually with Supermicro tools.
When making a server purchase, a vendor’s associated management products are critical to protect data, support a more sustainable environment, and to ease the maintenance of systems. Our tests and research showed that the Dell management portfolio for PowerEdge servers offered more features to help organizations meet these goals than the comparable Supermicro management products.
Scale up your storage with higher-performing Dell APEX Block Storage for AWS ...Principled Technologies
In our tests, Dell APEX Block Storage for AWS outperformed similarly configured solutions from Vendor A, achieving more IOPS, better throughput, and more consistent performance on both NVMe-supported configurations and configurations backed by Elastic Block Store (EBS) alone.
Dell APEX Block Storage for AWS supports a fully NVMe-backed configuration, but Vendor A doesn't—its solution uses EBS for storage capacity and NVMe as an extended read cache—which means APEX Block Storage for AWS can deliver faster storage performance.
Scale up your storage with higher-performing Dell APEX Block Storage for AWSPrincipled Technologies
Dell APEX Block Storage for AWS offered stronger and more consistent storage performance for better business agility than a Vendor A solution
Conclusion
Enterprises desiring the flexibility and convenience of the cloud for their block storage workloads can find fast-performing solutions with the enterprise storage features they’re used to in on-premises infrastructure by selecting Dell APEX Block Storage for AWS.
Our hands-on tests showed that compared to the Vendor A solution, Dell APEX Block Storage for AWS offered stronger, more consistent storage performance in both NVMe-supported and EBS-backed configurations. Using NVMe-supported configurations, Dell APEX Block Storage for AWS achieved 4.7x the random read IOPS and 5.1x the throughput on sequential read operations per node vs. Vendor A. In our EBS-backed comparison, Dell APEX Block Storage for AWS offered 2.2x the throughput per node on sequential read operations vs. Vendor A.
Plus, the ability to scale beyond three nodes—up to 512 storage nodes with capacity of up to 8 PB—enables Dell APEX Block Storage for AWS to help ensure performance and capacity as your team plans for the future.
Get in and stay in the productivity zone with the HP Z2 G9 Tower WorkstationPrincipled Technologies
We compared CPU performance and noise output of an HP Z2 G9 Tower Workstation in High Performance Mode to Dell Precision 3660 and 5860 tower workstations in optimized performance modes
Conclusion
HP Z2 G9 Tower Workstation users can change the BIOS settings to dial in the performance mode that best suits their needs: High Performance Mode, Performance Mode, or Quiet Mode. In good news for both creative and technical professionals, we found that an Intel Core i9-13900 processor-powered HP Z2 G9 Tower Workstation set to High Performance Mode received higher CPU-based benchmark scores than both a similarly configured Dell Precision 3660 and a Dell Precision 5860 equipped with an Intel Xeon w5-2455X processor. Plus, the HP Z2 G9 Tower Workstation was quieter while running CPU-intensive Cinebench 2024 and SPECapc for Solidworks 2022 workloads than both Dell Precision tower workstations. This means HP Z2 G9 Tower Workstation users who prize performance over everything else can get it without sacrificing a quiet workspace.
Open up new possibilities with higher transactional database performance from...Principled Technologies
In our PostgreSQL tests, R7i instances boosted performance over R6i instances with previous-gen processors
If you use the open-source PostgreSQL database to run your critical business operations, you have many cloud options from which to choose. While many of these instances can do the job, some can deliver stronger performance, which can mean getting a greater return on your cloud investment.
We conducted hands-on testing with the HammerDB TPROC-C benchmark to see how the PostgreSQL performance of Amazon EC2 R7i instances, enabled by 4th Gen Intel Xeon Scalable processors, stacked up to that of R6i instances with previous-generation processors. We learned that small, medium-sized, and large R7i instances with the newer processors delivered better OLTP performance, with improvements as high as 13.8 percent. By choosing the R7i instances, your organization has the potential to support more users, deliver a better experience to those users, and even lower your cloud operating expenditures by requiring fewer instances to get the job done.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
HCL Notes and Domino license cost reduction in the world of DLAUpanagenda
Webinar recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary spending, such as using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and know-how to keep track of everything, so you can reduce your costs through an optimized Domino configuration and keep them low going forward.
Topics covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect their personal devices and information.
National Security Agency - NSA mobile device best practices
Accelerating virtualized Oracle 12c performance with vSphere 5.5 advanced features Flash Read Cache and vMotion
MARCH 2014
A PRINCIPLED TECHNOLOGIES TEST REPORT
Commissioned by VMware, Inc.
ACCELERATING VIRTUALIZED ORACLE 12C PERFORMANCE WITH
VSPHERE 5.5 ADVANCED FEATURES FLASH READ CACHE AND VMOTION
IT administrators are always looking for ways to improve and fully utilize their
hardware resources. Virtualizing IT infrastructure for critical applications and databases,
such as Oracle Database 12c, has become the IT industry trend, providing the ability to
condense multiple workloads on a single server. VMware vSphere is a purpose-built
hypervisor designed to provide the performance, reliability, and flexibility that these
mission-critical applications require. With new features such as vSphere Flash Read
Cache™ (vFRC) in vSphere 5.5, VMware can improve Oracle Database 12c performance
while maintaining the reliability features you have come to expect from the platform,
including VMware vMotion.
In the Principled Technologies labs, we set up a four-node VMware vSphere 5.5
cluster using Cisco UCS B200 M3 blade servers and EMC VMAX 10K storage. We ran 10
virtual machines (VMs) on the cluster, each with its own Oracle Database 12c
application, and ran a mix of database workloads simultaneously to gather baseline
performance data. Then, we enabled the new vFRC feature specifically on the OLAP
workloads and ran the tests again. We found that vSphere 5.5 with vSphere Flash Read
Cache-enabled VMs decreased the time it took to run an OLAP workload by up to 14
percent. Additionally, we demonstrated the tried-and-true VMware vMotion
functionality when we enabled the vFRC feature and moved VMs from one server to
another. The vFRC-enabled VMs transitioned seamlessly to the other hosts while
continuing to cache on the destination host.
VSPHERE FLASH READ CACHE BOOSTED ORACLE PERFORMANCE
Benefits on the vFRC-enabled VM
Maximizing the performance of your virtualized critical Oracle Database 12c
applications is crucial to the success of your business. As we found in our tests, vSphere
Flash Read Cache in VMware vSphere 5.5 can improve performance without sacrificing
the tools that have become critical to your infrastructure management, such as VMware
vMotion. VMware vFRC is designed to lower application latency by virtualizing server-
side flash storage to provide a high-performing read cache layer.1
As data on a vFRC-enabled virtual machine disk (VMDK) is read, vFRC copies the data to a flash resource pool comprising one or more high-performance, enterprise flash devices at the individual ESXi host level. As repeated reads occur over time on the vFRC-enabled VMDK, the VM accesses the data from the flash resource pool, bypassing the VMDK and the underlying physical storage. Not only can this offloading of reads from shared storage onto the local server benefit vFRC-enabled VMs, it can also relieve shared storage resources in a mixed-workload environment, indirectly improving storage access performance across the board. See Figure 1 for a detailed diagram of how data is read by vFRC-enabled VMs in vSphere 5.5.
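The read path described above can be sketched as a simple read-through cache. The following is an illustrative Python model, not VMware code; the class name, capacity, and block layout are invented for the example, and real vFRC operates on configurable cache block sizes per VMDK.

```python
# Illustrative model of a read-through cache (not VMware code): first reads
# go to the backing VMDK and populate the flash pool; repeated reads are
# served from the pool, bypassing the underlying physical storage.
class ReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.flash_pool = {}   # block id -> data cached on local flash
        self.hits = 0
        self.misses = 0

    def read(self, block_id, vmdk):
        if block_id in self.flash_pool:      # repeated read: served from flash
            self.hits += 1
            return self.flash_pool[block_id]
        self.misses += 1                     # first read: fetch from the VMDK
        data = vmdk[block_id]
        if len(self.flash_pool) < self.capacity:
            self.flash_pool[block_id] = data # copy into the flash resource pool
        return data

vmdk = {i: f"block-{i}" for i in range(100)}   # toy backing store
cache = ReadCache(capacity_blocks=50)
for _ in range(2):                             # two passes over a hot working set
    for block in range(40):
        cache.read(block, vmdk)
print(cache.hits, cache.misses)                # second pass hits entirely in flash
```

The second pass over the working set never touches the backing store, which is the effect the report measures as reduced OLAP completion time.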
1 To learn more about vSphere Flash Read Cache, visit www.vmware.com/products/vsphere/features-flash.
Figure 1: Detailed view of how vSphere Flash Read Cache
works in vSphere 5.5.
We put the performance of vFRC to the test on a VMware cluster comprised of
Cisco UCS B200 M3 servers with EMC VMAX 10K storage. This combination of VMware
software, Cisco servers, and EMC storage delivered promising results for our virtualized
Oracle Database 12c workloads. See Appendix A for system configuration details, and
Appendix B and Appendix C for detailed testing steps. See the section “vSphere Flash
Read Cache configuration” in Appendix B for details on our vFRC configuration.
As Figure 2 shows, the baseline configuration with vFRC disabled did not
perform as well as the configuration with vFRC enabled. The baseline configuration took
3 hours and 13 minutes (or 193 minutes) to complete the OLAP test while the
configuration with vFRC-enabled VMs took 2 hours and 46 minutes (or 166 minutes).
Enabling vFRC decreased the time it took to complete the TPC-H-like OLAP workload by
14 percent.
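The 14 percent figure follows directly from the two completion times; a quick check of the arithmetic:

```python
# Reproducing the report's arithmetic: 193 minutes baseline vs. 166 minutes
# with vFRC enabled gives roughly a 14 percent reduction in completion time.
baseline_min = 3 * 60 + 13   # 3 hours 13 minutes = 193 minutes
vfrc_min = 2 * 60 + 46       # 2 hours 46 minutes = 166 minutes
reduction = (baseline_min - vfrc_min) / baseline_min
print(f"{reduction:.1%}")    # -> 14.0%
```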
Figure 2: Enabling vSphere
Flash Read Cache decreased
the time it took to complete
the TPC-H-like OLAP workload
by 14 percent.
New feature, same performance during vMotion
Servers need occasional maintenance, and the ability to live migrate important
virtualized database workloads is key in avoiding application downtime. This is where
the flexibility of virtualization with VMware vSphere can benefit businesses. VMware
vMotion allows you to perform these live migrations of VMs from one server in a cluster
to another, without causing your workload performance to take a hit.
You can continue to utilize this familiar functionality of vMotion with vSphere
Flash Read Cache, as it checks destination-host compatibility for cache devices when
choosing to migrate vFRC-enabled VMs. Scheduling migrations via vMotion in vSphere
5.5 is the same as previous versions, with the additional choice to migrate the cache
contents with the VM or allow the VM to re-cache once the migration is complete. We
used vMotion to migrate the VMs off one host containing one vFRC-enabled OLAP VM
and two OLTP VMs that were not vFRC enabled. While there are benefits to migrating
the cache with the VM, we chose to allow the cache to re-warm on the destination
server, illustrating that even in situations where the cache has to re-warm, workloads
still benefit from vFRC. The vFRC configuration migrated and continued caching on the
destination server without issue. Figure 3 shows how we performed vMotions in our
testing.
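The re-warm behavior we chose can be sketched with a toy simulation. This is an illustrative Python model with invented interval counts and working-set sizes, not a measurement: clearing the cache at the migration point briefly drops the hit rate, which then recovers as reads repopulate the destination host's flash pool.

```python
# Toy model (illustrative, not measured data): per-interval hit rate for a
# fixed hot working set, with the cache cleared at the migration interval to
# mimic the "re-cache on the destination host" vMotion option.
def hit_rate_over_time(intervals, working_set, migrate_at):
    flash_pool = set()
    rates = []
    for t in range(intervals):
        if t == migrate_at:
            flash_pool = set()           # re-warm on the destination host
        hits = sum(1 for block in working_set if block in flash_pool)
        flash_pool.update(working_set)   # reads populate the flash pool
        rates.append(hits / len(working_set))
    return rates

print(hit_rate_over_time(6, range(100), migrate_at=3))
# -> [0.0, 1.0, 1.0, 0.0, 1.0, 1.0]: the dip at interval 3 is the re-warm
```

The transient dip and recovery is the same qualitative shape the hit-rate measurements below show around the 90-minute vMotion event.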
[Chart data: time to complete the OLAP workload in minutes (smaller numbers are better). Baseline configuration with vFRC disabled: 193 minutes; configuration with vFRC enabled: 166 minutes.]
In our labs… We achieved max throughput of up to 23 Gb/s during vMotion. Additionally, the vFRC-enabled OLAP VM migration took 2 minutes and 6 seconds to complete.
Figure 3: Moving VMs from one host to another with vMotion.
We measured the vFRC hit rate percent on our original host before the vMotion
event and on the target host after the vMotion event. Hit rate is a good measurement of
how well vFRC is performing. We looked at the hit rate percentage in five-minute
intervals during testing. Figure 4 shows the vFRC hit rate percent during our TPC-H-like
OLAP testing. Our vMotion event took 2 minutes and 6 seconds to complete and started at the 90-minute mark.
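Hit rate here is simply the fraction of reads served from the flash pool rather than from the backing storage; the example counts below are hypothetical, not values from our testing:

```python
# Hit rate = reads served from flash / total reads in the interval.
def vfrc_hit_rate(flash_hits, total_reads):
    return flash_hits / total_reads if total_reads else 0.0

# e.g., if 600 of 800 reads in a five-minute interval came from flash:
print(f"{vfrc_hit_rate(600, 800):.0%}")  # -> 75%
```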
Figure 4: The vFRC hit rate
percentage for the original host
before the vMotion event and
for the target host after the
vMotion event.
During the same test, we also measured the total GB of vFRC used by the VMs.
Although similar to the hit rate results, the number of GB used by the VMs presents raw,
quantifiable data that would be subject to scaling in your datacenter. We looked at the
total GB of vFRC in five-minute intervals during testing and at the same times when we
monitored the hit rate. Figure 5 shows the amount of GB of vFRC used by the VMs
during our TPC-H-like OLAP testing. The length and start time of our vMotion event are the same as in Figure 4.
Figure 5: The total GB of vFRC
used by the VMs for the
original host before the
vMotion event and for the
target host after the vMotion
event.
[Chart: vFRC hit rate (percent) during OLAP testing, plotted at five-minute intervals from 0 to 190 minutes for the original host and the target host, with the vMotion event marked at the 90-minute point.]
[Chart: total vFRC used (GB) during OLAP testing, plotted at five-minute intervals from 0 to 190 minutes for the original host and the target host.]
WHAT THIS MEANS FOR YOU
Administrators can view the benefits of vFRC-enabled VMs as they apply to their
specialties or focus for their infrastructure. In the following sections, we illustrate the
impact of our results tailored to those specialties or focuses.
What this means for VMware vSphere admins
To keep current with new technology and improvements, VMware looks to
bring increased functionality to vSphere with every release. A major change in vSphere
5.5 comes in the addition of vFRC. With VMware vSphere 5.5, you now have access to
more performance-enhancing features, like vFRC, while using the same interface and
management tools you already use in your virtualized environment. You can upgrade
your infrastructure to version 5.5 from 5.1 and start realizing the benefits of these new
features immediately. VMware vSphere continues to be VMware's flagship product and remains a preferred virtualization platform in datacenters of all sizes.
The new vSphere Flash Read Cache feature we tested enables pooling of flash-
based devices into a single vSphere Flash Resource to speed up performance of read-
intensive workloads. As our tests results show, VMware vSphere 5.5 can provide a
powerful platform for critical Oracle Database 12c applications and improve
performance with its new features, while still being able to depend on the reliability and
speed of vMotion.
What this means for Oracle Database 12c admins
Administrators dealing with Oracle Database applications have two main
concerns: performance and reliability. The new vSphere Flash Read Cache feature helps
address the performance concern. For read-heavy OLAP workloads, vSphere 5.5 with
vFRC enabled can increase the performance of your decision support systems. Any
performance increase can translate to getting more from your hardware, which can mean delaying upgrades or new hardware purchases and avoiding the associated costs, such as Oracle licensing.
Business-critical databases, whether virtualized or on bare metal, cannot go down
without work grinding to a halt. With reliability features such as VMware vMotion, which
lets you move VMs from one server to another for maintenance events, and
performance-enhancing features like vFRC, Oracle Database applications can keep
running, and even improving, while your hardware undergoes maintenance.
What this means for Cisco server admins
Enterprise-class servers, such as the Cisco UCS B200 M3 blade servers we used
in our study, can help deliver high levels of performance and density for virtualized
Oracle Database 12c workloads running on VMware vSphere 5.5 and vFRC. The Cisco
UCS chassis we used in testing is capable of providing up to 80 Gbps of bandwidth
8. A Principled Technologies test report 8Accelerating virtualized Oracle 12c performance with vSphere 5.5
advanced features Flash Read Cache and vMotion
to any blade without an external switch in the vMotion path. The VIC 1240 adapter that
we used for the UCS infrastructure enables 40 Gbps to each blade by default. With only a
single FCoE connection to the Fabric Interconnects, the VIC 1240 can burst to 10 Gbps
per Fabric Interconnect, or 20 Gbps per blade. This gives you the flexibility you need
when using vFRC or performing vMotion; with our configuration, we were able to push
vMotion throughput up to 23 Gbps. To see how we set up and used our Cisco
components, see the section “Hardware and Software” in Appendix B.
The Cisco UCS infrastructure features a converged fabric where all systems
management and configuration originates from a pair of redundant Fabric Interconnects
(FI) to allow management of large-scale deployments and migrations from a single
location, easing the job of server admins.
For our flash device, we used an LSI 400GB SLC WarpDrive mezzanine card, sold
by Cisco for their UCS blade servers. By adding the LSI WarpDrive to our flash resource
pool, we not only gained added capacity for read cache, but we also gained the
reliability and durability of SLC solid-state technology, ensuring great underlying
hardware performance for vFRC.
By pairing this Cisco architecture with the performance-enhancing and reliability
features of VMware vSphere 5.5 and vFRC, you can ensure you get the most out of
mission-critical workloads.
What this means for EMC VMAX storage admins
The storage solutions that storage administrators choose can greatly affect the
performance of critical Oracle Database 12c workloads running on vFRC-enabled VMs.
The EMC VMAX 10K we used in our tests is an enterprise-class storage array that is
tiered with EFD, FC, and SATA disks leveraging FAST technology. This ability to choose
from various tiered I/O performance levels ensures you get the storage I/O needed to
run a virtualized mixed Oracle Database 12c environment, regardless of the specific
storage demands for each virtual workload. In a virtualized environment like a typical
VMware vSphere 5.5 cluster, the ability to provide different levels of performance
capabilities to each workload while still carefully managing storage resources is crucial.
VMAX 10K features like QoS service levels, dynamic host I/O limits, and storage tiering
make it easy to ensure hypervisor hosts and virtual machines that need more I/O get the
storage resources they need, while still providing reliable service to other hosts in the
environment.
With the read caching capabilities of vSphere Flash Read Cache, frequently read
data moves from external storage to a local flash device on the server. As vFRC-enabled
workloads service more of their reads from that local flash device, data locality
improves and storage I/O on the array is freed up. This additional
headroom lets storage administrators see potential increases in storage
performance for other workloads in the environment during times of peak utilization. To
see how we configured our storage layout for the VMAX 10K, see the section titled
“Storage Layout” in Appendix B.
IN CONCLUSION
Strong Oracle Database 12c performance is vital to the state of your business.
Virtualizing such important workloads requires a reliable and high-performing
virtualization platform, along with the right servers and storage. EMC, Cisco, and
VMware offer proven technologies to meet this need. In addition, newer technologies
like vFRC can have a positive impact on database performance by offloading some of the
storage I/O onto the local server. This can be beneficial to the intended application and
has the potential to improve all applications in a mixed workload environment over time
by relieving pressure on shared storage resources.
In our tests, we found that the new release of VMware vSphere 5.5 provided a
new feature, vSphere Flash Read Cache, that decreased TPC-H-like OLAP workload
processing time by 14 percent. We also found that running these workloads on Oracle
Database 12c with the new feature didn’t affect the ability of administrators to
complete routine vMotion tasks; with vSphere Flash Read Cache enabled during a
vMotion, the migration went smoothly and vFRC continued to cache after the migration
completed. This means that the combination of the VMware vSphere 5.5 platform, Cisco
UCS B200 M3 servers, and EMC VMAX 10K storage was able to provide improved Oracle
Database 12c performance using the new vSphere Flash Read Cache feature, which
improves the reliability and database response times you deliver for customers and
employees alike.
APPENDIX A – SYSTEM CONFIGURATION INFORMATION
Figure 6 provides detailed configuration information for the test systems.
System 4x Cisco UCS B200 M3 server
General
Number of processor packages 2
Number of cores per processor 8
Number of hardware threads per core 2
System power management policy Default
CPU
Vendor Intel®
Name Xeon®
Model number E5-2680
Stepping 7
Socket type LGA2011
Core frequency (GHz) 2.7
Bus frequency 8.00 GT/s
L1 cache 32KB + 32KB
L2 cache 256KB per core
L3 cache 20MB
Platform
Vendor and model number Cisco UCS B200 M3
Motherboard model number Cisco FCH1607GV4
BIOS name and version Cisco B200M3.2.1.1a.0.121720121447
BIOS settings Default
Memory modules
Total RAM in system (GB) 320
Vendor and model number 16x Cisco UCS-MR-1X162RY-A16, 8x Cisco UCS-MR-1X082RY-A
Type PC3-12800
Speed (MHz) 1,600
Speed running in the system (MHz) 1,333
Size (GB) (16x) 16, (8x) 8
Number of RAM module(s) 24 (16 + 8)
Chip organization Double-sided
Rank Dual
Hypervisor
Name VMware vSphere 5.5.0
Build number 1331820
Language English
RAID controller
Vendor and model number LSI® MegaRAID SAS 2004
Firmware version 20.10.1-0100
System 4x Cisco UCS B200 M3 server
Hard drives
Vendor and model number Seagate A03-D146GC2
Number of drives 2
Size (GB) 146
RPM 15,000
Type SAS
SSD cache drive
Vendor and model number LSI UCSB-F-LSI-400S SLC WarpDrive®
Number of drives 1
Size (GB) 400
RPM n/a
Type PCI-E
Converged I/O adapters
Vendor and model number Cisco UCSB-MLOM-40G-01, Cisco UCS-VIC-M82-8P
Type mLOM, Mezzanine
Virtual machine operating system
Name Oracle Enterprise Linux Release 6.5
Kernel 3.8.13-26.2.1.el6uek.x86_64
Language English
Database software Oracle Database 12c Build 12.1.0.1
Database benchmarks
Benchmark 1 HammerDB v2.15
Benchmark 2 DVD Store 2.1
Figure 6: Configuration information for the systems used in our tests.
Figure 7 provides the firmware information for the Cisco hardware we used in our tests.
UCS 5108 chassis Firmware version
UCS Manager 2.2(1b)
UCS 2208XP IO Module 1 & 2 2.2(1b)
UCS 6248UP Fabric Interconnect 1 & 2 5.2(3)N2(2.21b)
UCS B200 M3 blades Firmware version
BIOS B200M3.2.2.1a.0.111220131105
CIMC Controller 2.2(1b)
UCS VIC 1240 2.2(1b)
Figure 7: Firmware information for the Cisco hardware used in our tests.
APPENDIX B – WHAT WE TESTED
Hardware and software
In our test bed, we configured the Cisco UCS 5108 Blade Server chassis with four cables coming from each FEX
(eight in total), going into two UCS 6248UP Fabric Interconnects (FIs). We then cabled each FI via four 10Gb Ethernet
ports and one 8Gb FC port to two Cisco Nexus™ 5548UP switches, with two Ethernet links from each FI connected to each
switch, resulting in a fully redundant infrastructure. We aggregated each set of four Ethernet ports into a port channel to
ensure maximum bandwidth. Figure 8 illustrates our test bed.
Figure 8: The test bed used in testing and how components were connected.
We then configured four Cisco UCS B200 M3 blade servers with VMware vSphere 5.5. On a separate rack server,
we configured a VMware vCenter server, connected it to the UCS chassis network, and created a cluster in the VMware
vSphere Web Console using the four blade servers. We created two vSphere vSwitches on each blade server: the first for
VM management and Oracle Application connections and the second for vMotion traffic. Each vMotion vSwitch had a
maximum MTU of 9,000 bytes and used four uplink ports (physical NICs). The management and Oracle Application
vSwitches used two of those uplink ports.
For the vMotion network specifically, we set up multi-NIC vMotion on the vSphere hosts according to VMware
best practices.2
In the Networking inventory area of the vSphere client, we created two VMkernels on the vMotion
vSwitch. We assigned two ports on the vMotion subnet to each, and configured teaming and failover. For the first
VMkernel, we assigned the first uplink as active and the second as standby. On the second, we assigned the first uplink
as standby and the second as active. The vMotion network used Jumbo Frames (MTU 9000).
Virtual machine
We had ten VMs total: two OLAP VMs to run the TPC-H-like workload and eight OLTP VMs to run the TPC-C-like
workload. The two OLAP VMs had eight vCPUs and 200GB of RAM each. The eight OLTP VMs each had two vCPUs and
48GB of RAM. The OLAP VMs had one OS VMDK, four data VMDKs, four temp VMDKs, and four redo log VMDKs. We
split the four VMDKs in each group across four LUNs on our storage. The OLTP VMs each had one OS VMDK, one data
VMDK, and one redo log VMDK. Our VM virtual NICs used the VMXNET3 type, as it offers the latest paravirtualized
benefits for VM NICs.
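To confirm from inside a guest that a NIC is backed by the paravirtualized adapter, you can query the driver with ethtool. This is a hedged sketch; the interface name eth0 is an assumption and may differ in your guest.

```shell
# Show which driver backs the first NIC inside the guest (run as root).
# "eth0" is an assumed interface name; substitute yours.
ethtool -i eth0 | grep '^driver'
# A VMXNET3 virtual NIC reports the vmxnet3 driver.
```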
vSphere Flash Read Cache configuration
For our vSphere Flash Read Cache configuration, we configured a flash resource pool on each host, comprised of
a single LSI 400GB SLC WarpDrive installed in each Cisco UCS B200 M3 server. At the VM level, we configured the four
data VMDKs on each OLAP workload VM to use vFRC. We divided the available capacity in the flash resource pool by
four (92 GB) and configured each vFRC-enabled data VMDK to use that much cache capacity. For our cache block size,
we set each VMDK cache to a 32KB block size to match the configured Oracle database block size of our OLAP workload.
We did not configure vFRC for our OLTP workload VMs. See Figure 9 for a detailed look at our OLAP vFRC configuration.
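The cache-sizing arithmetic above can be sketched as a quick shell calculation. This is a minimal sketch; the 368 GB usable-pool figure is an assumption inferred from the 92 GB per-VMDK result, since a 400 GB device exposes somewhat less usable capacity than its nominal size.

```shell
# Split the usable vFRC pool capacity evenly across the vFRC-enabled VMDKs.
POOL_GB=368     # assumed usable capacity of the 400GB WarpDrive pool
VMDKS=4         # vFRC-enabled data VMDKs per OLAP VM
PER_VMDK_GB=$(( POOL_GB / VMDKS ))
echo "vFRC cache per data VMDK: ${PER_VMDK_GB} GB"   # 92 GB, as configured in our tests
```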
2
kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467
Figure 9: The vFRC configuration for the OLAP VMs used in our testing.
Test tools
HammerDB is an open-source benchmark tool that tests the database performance of many leading databases,
including Oracle Database, Microsoft® SQL Server®, PostgreSQL, MySQL™, and more. The benchmark includes two built-
in workloads derived from industry-standard benchmarks: a transactional (TPC-C-like) workload and a data warehouse
(TPC-H-like) workload. For this study, we used the data warehouse workload. Our tests were not official TPC results and
are not comparable in any manner. For more information about HammerDB, visit hammerora.sourceforge.net.
We used the DVD Store Version 2.1 (DS2) benchmarking tool for our TPC-C like workload. DS2 models an online
DVD store, where customers log in, search for movies, and make purchases. DS2 reports these actions in orders per
minute (OPM) that the system could handle, to show what kind of performance you could expect for your customers.
The DS2 workload also performs other actions, such as adding new customers. For more information about the DS2 tool,
see www.delltechcenter.com/page/DVD+Store.
Storage layout
Physical and virtual storage
In our labs, we configured the EMC VMAX 10K array according to best practices. We coordinated with EMC
VMAX 10K engineers on management access, cabling, tiering, and monitoring. Figure 10 shows how we connected the
components.
Figure 10: Diagram of the VMAX 10K
We followed EMC best practices for allocating high-performance storage volumes for the vSphere VMDKs
associated with the Oracle data, temporary tablespaces, and redo logs. Though we had three configured tiers on our
VMAX 10K, we used two tiers of storage in our tests: a top performance tier comprised of EFD and FC disks, and a middle
performance tier of EFD, FC, and SATA disks. From the VMAX 10K management console, we created four 512GB volumes
in the top tier and four 4TB volumes in the middle tier. The volumes in the top tier held the virtual disks containing the
Oracle OLTP and OLAP databases as well as the OLTP redo logs. The middle tier volumes held virtual disks containing
operating system data and the remaining Oracle disks (system files, OLAP temp tablespace, and OLAP redo logs). See
Figure 11 for the storage layout of OLAP and OLTP VMs.
Figure 11: Logical layout of Oracle storage on the EMC VMAX 10K.
Performance test scenarios and workflow
Baseline
To determine the impact vFRC and vMotion have on performance, we needed to develop a baseline score. To do
so, we ran our TPC-H-like and TPC-C-like workloads against our ten VMs without any other factor involved. We gathered
esxtop data, the benchmark scores, and the time to run the TPC-H-like workload to establish our baseline results.
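One way to gather esxtop data for later analysis is batch (CSV) mode on each host. The sketch below is illustrative; the sampling interval, sample count, and output path are assumptions, not the exact settings we used.

```shell
# Capture esxtop counters in batch (CSV) mode on an ESXi host.
# -b = batch mode, -d = sampling interval in seconds, -n = number of samples.
# 720 samples at 5-second intervals covers roughly one hour (assumed values);
# the datastore path is an example.
esxtop -b -d 5 -n 720 > /vmfs/volumes/datastore1/baseline-esxtop.csv
```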
vFRC enabled
Once we had our baseline scores, we wanted to determine the impact of vFRC on performance scores. We reset
everything to default states and enabled vFRC. We split the full amount of the available cache equally across the four
data VMDKs on the two OLAP VMs. We then ran our workloads against the ten VMs as we did in the baseline run. We
gathered esxtop data, the benchmark scores, the time to run the TPC-H-like workload, and the cache data to determine
our new results.
vFRC and vMotion event
Lastly, we wanted to gauge the effect of a vMotion event on the performance with vFRC enabled, and prove that
vFRC and vMotion can work together. After resetting our entire test bed to its default state, we again set up and ran the
same vFRC test as in our previous run. Using the time to complete the TPC-H-like workload in our previous run, we marked
the halfway point in our run, and performed a vMotion event on one OLAP and two OLTP VMs. We then let the
workloads finish on their new hosts. We gathered esxtop data, benchmark scores, time to run the TPC-H-like OLAP
workload, cache data from both the source and destination hosts, and vMotion network data to determine our new
results.
APPENDIX C – HOW WE TESTED
Installing VMware vSphere 5.5 (ESXi) on the Cisco UCS B200 M3 blades
1. Insert the disk, and boot from disk.
2. On the Welcome screen, press Enter.
3. On the End User License Agreement (EULA) screen, press F11.
4. On the Select a Disk to Install or Upgrade Screen, select the relevant volume on which to install ESXi, and press
Enter.
5. On the Please Select a Keyboard Layout screen, press Enter.
6. On the Enter a Root Password Screen, assign a root password and confirm it by entering it again. Press Enter to
continue.
7. On the Confirm Install Screen, press F11 to install.
8. On the Installation complete screen, press Enter to reboot.
9. Repeat steps 1-8 for each B200 M3 blade.
Configuring ESXi after Installation
1. On the ESXi 5.5 screen, press F2, enter the root password, and press Enter.
2. On the System Customization screen, select Troubleshooting Options, and press Enter.
3. On the Troubleshooting Mode Options screen, select Enable ESXi Shell, and press Enter.
4. Select Enable SSH, press Enter, and press ESC.
5. On the System Customization screen, select Configure Management Network.
6. On the Configure Management Network screen, select IP Configuration.
7. On the IP Configuration screen, select set static IP, enter an IP address, subnet mask, and default gateway, and press
Enter.
8. On the Configure Management Network screen, press Esc. When asked to apply the changes, type Y.
9. Repeat steps 1-8 for each B200 M3 blade.
For our vCenter management server, we deployed the vCenter Appliance on a separate ESXi host and configured
it for our environment.
Configuring VM networking on ESXi
1. Log into the vSphere Web Client with the administrator credentials and navigate to the B200 M3 host.
2. In the Manage tab, click on Networking, and in the Virtual switches pane, click the Add host networking button.
3. Select Physical Network Adapter, and click Next.
4. Select New standard switch, and assign to it two of the B200 M3’s ports connected to the physical test network.
5. Click Next.
6. Provide a label for the switch, and click Next.
7. Click Next.
8. Click Finish.
9. Select VMkernel adapters in the pane on the left.
10. Click Add Networking.
11. Choose Virtual machine Port Group for a Standard Switch, and click Next.
12. Select the switch you just created, and click Next.
13. Label the network and assign the appropriate VLAN ID, and click Next.
14. Click Finish.
15. Click Add Networking again, and choose VMkernel Network Adapter.
16. Choose the switch you created earlier, and click Next.
17. Label the network, choose the appropriate VLAN ID, check Management traffic, and click Next.
18. Use static IPv4 settings, and set the appropriate address. Click Next.
19. Click Finish.
20. Repeat steps 1 – 8 to create a second vSwitch assigning 4 physical ports instead of 2.
21. Click on the new vSwitch, and click Edit.
22. Change the MTU to 9000 Bytes, and click OK.
23. Repeat steps 15 – 19 to create four vMotion VMkernels (check vMotion traffic instead of Management).
24. For each vMotion VMkernel, click Edit, and change the MTU to 9000.
25. Click on Virtual switches in the left hand pane.
26. Click on the vMotion vSwitch, and highlight the first vMotion VMkernel.
27. Click Edit.
28. Go to Teaming and failover, and check Override.
29. Choose one vmnic to be active, and set the rest as Standby.
30. Click OK.
31. Repeat steps 27 through 30 for each of the other three VMkernels, each time choosing a different vmnic to remain
active, giving each VMkernel its own vmnic.
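The vSwitch and failover steps above can also be scripted from the ESXi shell with esxcli. This is a hedged sketch under assumed names (vSwitch1, port group vMotion-1, uplinks vmnic2 through vmnic5); we performed these steps in the vSphere Web Client.

```shell
# Raise the MTU on the vMotion vSwitch to support jumbo frames.
# "vSwitch1" is an assumed name for the vMotion vSwitch.
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Pin one active uplink per vMotion port group, leaving the rest as standby.
# Port group and vmnic names are assumptions; repeat per VMkernel port group,
# rotating which vmnic is active.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion-1 \
    --active-uplinks=vmnic2 \
    --standby-uplinks=vmnic3,vmnic4,vmnic5
```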
Creating the first VM
1. From the B200 M3 host in the vSphere Web Client, navigate to the Virtual Machines page.
2. Click the New Virtual Machine button.
3. At the Select a creation type screen, select Create a new virtual machine, and click Next.
4. Assign a name to the virtual machine, and click Next.
5. Select the B200 M3 host to run the virtual machine, and click Next.
6. Select the first assigned OS Datastore on the external storage, and click Next.
7. At the Select compatibility screen, select ESXi 5.5 and later, and click Next.
8. Choose Linux, and choose Oracle Linux 4/5/6 (64-bit), and click Next.
9. If this is an OLAP VM, choose four CPUs; for an OLTP VM, choose two CPUs.
10. If this is an OLAP VM, provide 200GB RAM; for an OLTP VM, provide 48GB RAM.
11. For New Network, select the switch made previously, and choose the VMXNET3 adapter type.
12. For the first hard disk, assign 60GB for the OLAP VMs or 40GB for the OLTP VMs, thick provision eager zeroed,
and the virtual device node SCSI(0:0).
13. From the New device: drop-down menu, select SCSI Controller.
14. Change the new SCSI controller’s type to VMware Paravirtual.
15. Add VHDs to the OLTP VMs
a. Add a new hard disk, assign 120 GB, thick provision eager zeroed, and the virtual device node SCSI(1:0). Also
change its location to an appropriate datastore. This will be the Oracle data volume.
b. Add a new hard disk, assign 10 GB, thick provision eager zeroed, and the virtual device node SCSI(1:1). Also
change its location to an appropriate datastore. This will be the Oracle redo logs volume.
16. Add VHDs to the OLAP VMs
a. Add four new 120GB VHDs, Thick provision lazy zeroed, assigned to an appropriate datastore. These will hold
the Oracle data.
b. Add four new 10GB VHDs, Thick provision lazy zeroed, assigned to an appropriate datastore. These will hold the
Oracle redo logs.
c. Add four new 30GB VHDs, Thick provision lazy zeroed, assigned to an appropriate datastore. These will hold the
Oracle temp tablespace.
17. Click Next.
18. Click Finish.
19. Start the VM.
20. Attach the Oracle Enterprise Linux 6.5 ISO image to the VM, and install Oracle Enterprise Linux 6.5 on your VM.
Oracle Linux 6.5 and Oracle Database 12c
We configured each VM with Oracle Linux 6.5 and Oracle Database 12c. For our OLTP VMs, we used a basic
install of Oracle Database 12c and used the local file system for database files: one disk for data; a second disk for redo
logs. For our OLAP VMs, we first installed Oracle Grid Infrastructure and utilized Oracle Automatic Storage Management
to create three four-disk ASM groups for data, redo logs, and temp table spaces. We then installed and configured
Oracle Database 12c utilizing these ASM disk groups for our database files.
Installing Oracle Linux 6.5
1. Insert the Oracle Linux 6.5 DVD into the server, and boot to it.
2. Select Install or upgrade an existing system.
3. If you are unsure of the fidelity of the installation disk, select OK to test the installation media; otherwise, select
Skip.
4. In the opening splash screen, select Next.
5. Choose the language you wish to use, and click Next.
6. Select the keyboard layout, and click Next.
7. Select Basic Storage Devices, and click Next.
8. Select Fresh Installation, and click Next.
9. Insert the hostname, and select Configure Network.
10. In the Network Connections menu, configure the network connections, and then click Close.
11. Click Next.
12. Select the nearest city in your time zone, and click Next.
13. Enter the root password, and click Next.
14. Select Use All Space, and click Next.
15. When the installation prompts you to confirm that you are writing changes to the disk, select Write changes to disk.
16. Select the Basic Server software set, and click Next. Oracle Linux installation begins.
17. When the installation completes, select Reboot to restart the server.
Installing VMware Tools
1. Right-click the VM in the Web Client, and select Install/Upgrade VMware Tools.
2. Log onto the guest as root.
3. Mount the CD-ROM device:
# mount -o ro /dev/cdrom /mnt
4. Untar VMware Tools into a temporary directory:
# tar -C /tmp -zxf /mnt/VMwareTools-9.4.0-1280544.tar.gz
5. Run the install script and accept the defaults:
# /tmp/vmware-tools-distrib/vmware-install.pl
6. Follow the prompts to configure and install VMware Tools. The installer automatically loads the NIC drivers, creates a new initrd, and unmounts the CD.
7. Reboot the VM.
Initial configuration tasks
Complete the following steps to provide the functionality that Oracle Database requires. We performed all of
these tasks as root.
1. Disable firewall services. In the command line (as root), type:
# service iptables stop
# chkconfig iptables off
# service ip6tables stop
# chkconfig ip6tables off
2. Set SELinux:
# vi /etc/selinux/config
SELINUX=permissive
3. Modify /etc/hosts to include the IP address of the internal IP and the hostname.
4. Edit 90-nproc.conf:
# vim /etc/security/limits.d/90-nproc.conf
Change this:
* soft nproc 1024
To this:
* - nproc 16384
5. (OLAP VM only) Enable huge pages by adding these lines to /etc/sysctl.conf:
vm.nr_hugepages=61440
vm.hugetlb_shm_group=54321
6. Install 12c RPM packages, resolve package dependencies and modify kernel parameters:
# yum install oracle-rdbms-server-12cR1-preinstall -y
7. Install automatic system tuning for database storage through yum:
# yum install tuned
# chkconfig tuned on
# tuned-adm profile enterprise-storage
8. Using yum, install the following prerequisite packages for Oracle Database:
# yum install elfutils-libelf-devel
# yum install xhost
# yum install unixODBC
# yum install unixODBC-devel
# yum install oracleasm-support oracleasmlib oracleasm
9. Create the oracle user account and groups and password:
# groupadd -g 1003 oper
# groupadd -g 1004 asmadmin
# groupadd -g 1005 asmdba
# groupadd -g 1006 asmoper
# usermod -G dba,oper,asmadmin,asmdba,asmoper oracle
# passwd oracle
10. Create the /u01 directory for the Oracle inventory and software, and assign ownership to the oracle user:
# mkdir -p /u01/app/oracle/product/12.1.0/grid
# mkdir -p /u01/app/oracle/product/12.1.0/dbhome_1
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
11. Edit bash profiles to set up user environments:
# vim /home/oracle/.bash_profile
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=orcl.localdomain
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=$ORACLE_BASE/product/12.1.0/grid
export DB_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=orcl
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'
# vim /home/oracle/grid_env
export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
# vim /home/oracle/db_env
export ORACLE_SID=orcl
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
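The vm.nr_hugepages value in step 5 follows directly from the Oracle SGA size: with the default 2 MB huge page size on x86_64 Linux, 61,440 pages back a 120 GB SGA. A minimal sketch of that arithmetic, assuming a 120 GB SGA target for the OLAP VMs:

```shell
# Derive vm.nr_hugepages from the SGA size.
SGA_MB=122880        # 120 GB SGA target for the OLAP VM (assumption)
HUGEPAGE_MB=2        # default huge page size on x86_64 Linux
NR_HUGEPAGES=$(( SGA_MB / HUGEPAGE_MB ))
echo "vm.nr_hugepages=${NR_HUGEPAGES}"   # 61440, matching /etc/sysctl.conf in step 5
```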
Installing Oracle Grid Infrastructure (OLAP VMs only)
1. Unzip the Oracle Grid Infrastructure installation files.
2. Open a terminal to the unzipped database directory.
3. Set the Oracle grid environment.
4. To start the installer, type ./runInstaller
5. At the software Updates screen, select Skip updates.
6. In the Select Installation Option screen, select Install and Configure Grid Infrastructure for a Standalone Server, and
click Next.
7. Choose the language, and click Next.
8. In the Create ASM Disk Group screen, choose the disk group name, and change the redundancy to External.
9. Select the four disks that you are planning to use for the database, and click Next.
10. In the Specify ASM Password screen, write the passwords for the ASM users, and click Next.
11. Leave the default Operating System Groups, and click Next.
12. Leave the default installation, and click Next.
a. Leave the default inventory location, and click Next.
b. Under Root script execution select Automatically run configuration scripts and enter root credentials.
c. In the Prerequisite Checks screen, make sure that there are no errors.
d. In the Summary screen, verify that everything is correct, and click Finish to install Oracle Grid Infrastructure.
e. At one point during the installation, the installation prompts you to execute two configuration scripts as root.
Follow the instructions to run the scripts.
f. At the Finish screen, click Close.
13. To run the ASM Configuration Assistant, type asmca.
14. In the ASM Configuration Assistant, click Create.
15. In the Create Disk Group window, name the new disk group log, choose External (None) redundancy, select the four
disks for redo logs, and click OK.
16. In the ASM Configuration Assistant, click Create.
17. In the Create Disk Group window, name the new disk group temp, choose External (None) redundancy, select the
four disks for the temp tablespace, and click OK.
18. Exit the ASM Configuration Assistant.
Creating the ASM disk groups (OLAP only)
1. Log in to sqlplus:
# sqlplus / as sysasm
2. Run the following sqlplus commands:
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1';
SQL> CREATE DISKGROUP REDO external REDUNDANCY disk
'/dev/oracleasm/redo01',
'/dev/oracleasm/redo02',
'/dev/oracleasm/redo03',
'/dev/oracleasm/redo04'
ATTRIBUTE
'au_size'='1M',
'compatible.asm' = '12.1',
'compatible.rdbms' = '12.1';
SQL> CREATE DISKGROUP TEMP external REDUNDANCY disk
'/dev/oracleasm/temp01',
'/dev/oracleasm/temp02',
'/dev/oracleasm/temp03',
'/dev/oracleasm/temp04'
ATTRIBUTE
'au_size'='4M',
'compatible.asm' = '12.1',
'compatible.rdbms' = '12.1';
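After creating the disk groups, you can confirm that all three are mounted and sized as expected by querying ASM. This is a hedged verification sketch run from the grid environment; it was not part of our documented procedure.

```shell
# List the state and capacity of each ASM disk group (run as the grid owner
# with the grid environment set, e.g. via the grid_env alias above).
sqlplus -S / as sysasm <<'SQL'
SET LINESIZE 120
COLUMN name FORMAT A12
SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;
EXIT
SQL
```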
Installing Oracle Database 12c
1. Unzip linuxamd64_12c_database_1_of_2.zip and linuxamd64_12c_database_2_of_2.zip.
2. Open a terminal to the unzipped database directory.
3. Set the Oracle database environment.
4. Run ./runInstaller.
5. Wait for the GUI installer to load.
6. On the Configure Security Updates screen, enter the credentials for My Oracle Support. If you do not have an
account, uncheck the box I wish to receive security updates via My Oracle Support, and click Next.
7. At the warning, click Yes.
8. On the Download Software Updates screen, enter the desired update option, and click Next.
9. On the Select Installation Option screen, select Install database software only, and click Next.
10. On the Grid Installation Options screen, select Single instance database installation, and click Next.
11. On the Select Product Languages screen, leave the default setting of English, and click Next.
12. On the Select Database Edition screen, select Enterprise Edition, and click Next.
13. On the Specify Installation Location, leave the defaults, and click Next.
14. On the Create Inventory screen, leave the default settings, and click Next.
15. On the Privileged Operating System groups screen, keep the defaults, and click Next.
16. Allow the prerequisite checker to complete.
17. On the Summary screen, click Install.
18. Once the Execute Configuration scripts prompt appears, ssh into the server as root, and run the following
commands:
# /home/oracle/app/oraInventory/orainstRoot.sh
# /home/oracle/app/oracle/product/12.1.0/dbhome_1/root.sh
19. Return to the prompt, and click OK.
20. Once the installer completes, click Close.
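The environment setup in step 3 can be sketched as follows. The ORACLE_BASE and ORACLE_HOME paths match the root.sh locations shown in step 18; the SID value is an assumption for a single-instance install and should be adjusted to your environment.

```shell
# Set the Oracle environment before launching runInstaller.
# ORACLE_BASE/ORACLE_HOME follow the install paths used in step 18;
# ORACLE_SID=orcl is an assumed SID for this sketch.
export ORACLE_BASE=/home/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
echo "$ORACLE_HOME"
```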
Create the HammerDB database (OLAP only)
We used the following script to create our HammerDB database:
#!/bin/ksh
#############################
# create tpch database
#############################
echo 300GB database creation started at `date`
sqlplus /NOLOG <<!
connect / as sysdba
set echo on
set timing on
shutdown abort;
startup pfile=?/dbs/inittpch.ora nomount;
create database
controlfile reuse
set default bigfile tablespace
logfile group 1 ('+REDO/redo01.log') size 10g reuse,
group 2 ('+REDO/redo02.log') size 10g reuse,
group 3 ('+REDO/redo03.log') size 10g reuse
datafile '+DATA/system.dbf' size 2g reuse
sysaux datafile '+DATA/sysaux.dbf' size 4g reuse
smallfile undo tablespace ts_undo
datafile '+DATA/ts_undo01.dbf' size 15g reuse
default temporary tablespace temp
tempfile '+TEMP/temp.dbf'
size 100000m reuse
extent management local uniform size 10m
maxdatafiles 2000
maxinstances 1;
!echo 300GB Database created
!echo 300GB Creating dictionary
set termout off
set echo off
spool /tmp/cat
@?/rdbms/admin/catalog.sql;
@?/rdbms/admin/catparr.sql;
@?/rdbms/admin/catproc.sql;
connect system/manager
@?/rdbms/admin/utlxplan.sql;
@?/sqlplus/admin/pupbld.sql;
spool off
exit;
!
echo End Database Creation at `date`
******End Script******
Add additional tablespace for data:
SQL> create tablespace tpchtab datafile '+DATA/tpchtab.dbf' size 450g reuse
extent management local autoallocate ;
Creating the DVDstore database (OLTP only)
1. Type dbca, and press Enter to open the Database Configuration Assistant.
2. At the Database Operation screen, select Create Database, and click Next.
3. Under Creation Mode, select Advanced Mode, and click Next.
4. At the Select Template screen, select General Purpose or Transaction Processing, and click Next.
5. Enter a Global database name and the appropriate SID, and click Next.
6. At the Management Options screen, select Configure Enterprise Manager (EM) Database Express, and click Next.
7. At the Database Credentials screen, select Use the Same Administrative Password for All Accounts. Enter a password,
and click Next.
8. At the Network Configuration screen, click Next.
9. At the Storage Configuration screen, select File System, and specify the database location.
10. At the Database Options screen, click Next.
11. At the Initialization Parameters screen, select Use Automatic Memory Management, and click Next.
12. At the Creation Options screen, select Create Database, and click Next.
13. At the Summary screen, click Finish.
14. When database creation completes, close the Database Configuration Assistant.
Generate HammerDB data (OLAP only)
We generated the data using two Windows Server 2008 VM clients with HammerDB installed.
1. Download the HammerDB installer from hammerora.sourceforge.net/download.html.
2. Double-click the executable to install HammerDB on the client.
3. Click Run.
4. Choose English, and click OK.
5. Click Yes on the install prompt.
6. Click Next.
7. Leave the default installation destination, and click Next.
8. Click Next.
9. Check Launch HammerDB, and click Finish.
10. In the HammerDB UI, click Options → Benchmark, and check Oracle and TPC-H. Click OK.
11. Click OK again to confirm the benchmark choice.
12. Expand TPC-H and Schema Build.
13. Double-click Options to open the Build Options menu.
14. For Oracle Service Name, type <IP_of_TPC-H_Server>:1521/<name_of_database>
15. Leave the rest of the fields as default.
16. Choose 300 for the Scale Factor, and click OK.
17. Open the Driver Script Options, and set the Degree of Parallelism to 2. Click OK.
18. In the Virtual User Options, check Show Output, Log Output to Temp, and Use Unique Log Name. Click OK.
19. To start the database generation, double-click Build.
Configuring the DVD Store database (OLTP only)
Data generation overview
We generated the data using the Install.pl script included with DVD Store version 2.1 (DS2), providing the
parameters for our 30GB database size and the database platform on which we ran: Oracle Database. We ran the
Install.pl script on each VM. The database schema was also generated by the Install.pl script.
We created VMware snapshots of each VM after we finished creating the databases. Between runs, we restored
each VM to the most recent snapshot.
We created additional indexes to improve lookup performance and reduce table scans. Additionally, we
modified the login stored procedure to bypass using the temporary tablespace. Finally, we also modified the data
generation scripts (see DVD store modifications section below). Specifically, we followed the steps below:
1. We generated the data and created the database and file structure using database creation scripts in the DS2
download. We made size modifications specific to our 30GB database.
2. We created database tables, stored procedures, and objects using the provided DVD Store scripts.
3. We loaded the data we generated into the database using sqlldr and the provided DVD Store load scripts.
4. We created indices, full-text catalogs, primary keys, and foreign keys using our modified database-creation scripts.
5. We created a database user, and mapped this user to the Oracle Database login.
6. We then took a VM snapshot to use as a restore point for resetting the test.
DVD Store modifications
We made a few modifications to the database creation scripts for DVD store to increase the databases’
performance. We modified the files in the following ways:
1. oracleds2_create_ind.sql
Created a new index on the CUST_HIST table that included the CUSTOMERID and PROD_ID columns.
Changed the PK_ORDERS index to a reverse index.
Created a new index on the ORDERS table that included the CUSTOMERID column.
Changed the PK_ORDERLINES index to a reverse index.
Created a new index on the PRODUCTS table that included the PROD_ID and COMMON_PROD_ID columns.
Created a new index on the PRODUCTS table that included the SPECIAL, CATEGORY, and PROD_ID columns.
Changed the IX_INV_PROD_ID index to a unique index and made it the primary key.
Created a new index on the REORDER table that included the PROD_ID column.
2. oracleds2_create_sp.sql
Removed the “derivedtable” global temporary table creation.
Modified the LOGIN procedure to not use the “derivedtable” global temporary table.
3. oracleds2_create_tablespaces_30GB.sql
Used five SMALLFILE data files with unlimited maximum size for the CUSTTBS tablespace.
Used five SMALLFILE data files with unlimited maximum size for the INDXTBS tablespace.
Used five SMALLFILE data files with unlimited maximum size for the ORDERTBS tablespace.
Used one SMALLFILE data file with unlimited maximum size for the DS_MISC tablespace.
Setting up the clients
We ran eight DVD Store clients and two HammerDB clients, each installed with Windows Server 2008, with one
client targeting each VM of the appropriate type. Each DVD Store client contained the ds2oracledriver executable and a
script that started the executable with the necessary parameters for the run:
c:\ds2\ds2oracledriver.exe --target=10.41.5.223:1521/orcl --db_size=30GB --run_time=1000 --n_threads=6 --think_time=.5 --warmup_time=1 --detailed_view=Y --csv_output=c:\ds2\output\DS2_client1.csv
Each HammerDB client ran HammerDB with the parameters outlined above in the HammerDB data creation
section. To run HammerDB, we used RDP to connect to the two clients, loaded the Driver Script into the script editor,
clicked on the Create Users button, and started the run.
Running the tests
We positioned our VMs in the following manner across the four Cisco B200 M3 hosts:
Host 1
o 1 x OLAP VM
o 2 x OLTP VMs
Host 2
o 2 x OLTP VMs
Host 3
o 1 x OLAP VM
o 2 x OLTP VMs
Host 4
o 2 x OLTP VMs
With the use of various automation scripts, we performed three runs: a baseline run without vFRC enabled and
without performing a vMotion task; a run with vFRC enabled, but with no vMotion task; and a run with vFRC enabled
and performing a vMotion task roughly halfway through a TPC-H run.
1. Clean up prior outputs from the host system and all client driver systems.
2. Reboot the Cisco blades and all client systems.
3. Reset all server VMs to the latest snapshots.
4. Restart the Oracle instance.
5. Start the esxtop gathering script.
6. If using vFRC, start the vFRC data gathering scripts.
7. Start the DVD Store script.
8. Start the two HammerDB TPC-H runs.
9. If performing a vMotion task, migrate the VMs on Host 1 about 1 hour, 23 minutes (the halfway mark in first vFRC
run) into the TPC-H run.
10. When the TPC-H tasks finish on both VMs, stop the esxtop, DVD Store, and cache scripts.
11. Gather all the output files from esxtop, esxcli cache, DVD Store clients, and TPC-H clients.
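The run sequence above can be sketched as a wrapper script. Each stub function below simply echoes the step it represents; in practice, each would invoke the corresponding tool or script from Appendix D, and the vFRC and vMotion steps are gated by flags matching the three run configurations.

```shell
#!/bin/sh
# Orchestration sketch for one run. Each stub echoes the step it
# represents; replace the echo commands with the real scripts from
# Appendix D.
USE_VFRC=1      # set to 0 for the baseline run
DO_VMOTION=1    # set to 0 for runs without a vMotion task

cleanup_outputs()  { echo "steps 1-3: clean outputs, reboot, restore snapshots"; }
start_oracle()     { echo "step 4: restart the Oracle instance"; }
start_esxtop()     { echo "step 5: start esxtop gathering"; }
start_vfrc_stats() { echo "step 6: start vFRC data gathering"; }
start_workloads()  { echo "steps 7-8: start DVD Store and TPC-H"; }
do_vmotion()       { echo "step 9: migrate Host 1 VMs at the halfway mark"; }

cleanup_outputs
start_oracle
start_esxtop
[ "$USE_VFRC" -eq 1 ] && start_vfrc_stats
start_workloads
[ "$DO_VMOTION" -eq 1 ] && do_vmotion
```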
APPENDIX D – SCRIPTS WE USED
We used the following scripts to help automate the run process.
DVD Store Scripts
Client Script
c:\ds2\ds2oracledriver.exe --target=10.41.5.223:1521/orcl --db_size=30GB --run_time=1000 --n_threads=6 --think_time=.5 --warmup_time=1 --detailed_view=Y --csv_output=c:\ds2\output\DS2_client1.csv
Controller start DVD Store script
start psexec \\10.41.5.203 -u Administrator -p Password1 C:\ds2\runds2.bat
start psexec \\10.41.5.204 -u Administrator -p Password1 C:\ds2\runds2.bat
start psexec \\10.41.5.205 -u Administrator -p Password1 C:\ds2\runds2.bat
start psexec \\10.41.5.206 -u Administrator -p Password1 C:\ds2\runds2.bat
start psexec \\10.41.5.207 -u Administrator -p Password1 C:\ds2\runds2.bat
start psexec \\10.41.5.208 -u Administrator -p Password1 C:\ds2\runds2.bat
start psexec \\10.41.5.209 -u Administrator -p Password1 C:\ds2\runds2.bat
start psexec \\10.41.5.210 -u Administrator -p Password1 C:\ds2\runds2.bat
Controller end DVD Store script
taskkill /s 10.41.5.203 /u Administrator /p Password1 /im ds2oracledriver.exe /f
taskkill /s 10.41.5.204 /u Administrator /p Password1 /im ds2oracledriver.exe /f
taskkill /s 10.41.5.205 /u Administrator /p Password1 /im ds2oracledriver.exe /f
taskkill /s 10.41.5.206 /u Administrator /p Password1 /im ds2oracledriver.exe /f
taskkill /s 10.41.5.207 /u Administrator /p Password1 /im ds2oracledriver.exe /f
taskkill /s 10.41.5.208 /u Administrator /p Password1 /im ds2oracledriver.exe /f
taskkill /s 10.41.5.209 /u Administrator /p Password1 /im ds2oracledriver.exe /f
taskkill /s 10.41.5.210 /u Administrator /p Password1 /im ds2oracledriver.exe /f
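The eight near-identical start and kill lines could also be generated with a loop over the client addresses. In this sketch, echo prints each command for review rather than executing it:

```shell
# Generate the start and kill commands for clients .203 through .210.
# 'echo' prints each command; drop it to execute them instead.
for i in $(seq 203 210); do
    echo "start psexec \\\\10.41.5.$i -u Administrator -p Password1 C:\\ds2\\runds2.bat"
    echo "taskkill /s 10.41.5.$i /u Administrator /p Password1 /im ds2oracledriver.exe /f"
done
```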
Esxtop data gathering scripts
We created one script for each host, and used command prompts to start each script. The locations used in the
scripts are specific to our testing and will be different when run in other tests.
plink.exe 10.41.5.241 -l root -pw Password1 esxtop -b -n $1 -d 5 >
C:\Valenti_Results\esxtop10.41.5.241.csv &
plink.exe 10.41.5.242 -l root -pw Password1 esxtop -b -n $1 -d 5 >
C:\Valenti_Results\esxtop10.41.5.242.csv &
plink.exe 10.41.5.243 -l root -pw Password1 esxtop -b -n $1 -d 5 >
C:\Valenti_Results\esxtop10.41.5.243.csv &
plink.exe 10.41.5.244 -l root -pw Password1 esxtop -b -n $1 -d 5 >
C:\Valenti_Results\esxtop10.41.5.244.csv &
Cache data gathering scripts
We stored these scripts on the storage LUNs that were not being used to hold the data VMDKs, and ran them
with command lines on each host that held cache. With each run, we had to use the esxcli storage vflash cache list
command to determine the cache names on each host. We also used the command after the vMotion task to determine
the cache names on the target host once the OLAP VM moved. We edited the scripts with the correct task names, ran
the reset script, then started the gathering scripts that provided cache data every five minutes.
Cache gathering script
for i in $(seq 1 1 300)
do
esxcli storage vflash cache stats get -c vfc-1457171792-OLAP-002_1-000003 >> /vmfs/volumes/53136407-e8f3c456-b35c-0025b510002f/cache_results3/OLAP-002_cache1.log
esxcli storage vflash cache stats get -c vfc-1457171792-OLAP-002_2-000003 >> /vmfs/volumes/53136407-e8f3c456-b35c-0025b510002f/cache_results3/OLAP-002_cache2.log
esxcli storage vflash cache stats get -c vfc-1457171792-OLAP-002_3-000003 >> /vmfs/volumes/53136407-e8f3c456-b35c-0025b510002f/cache_results3/OLAP-002_cache3.log
esxcli storage vflash cache stats get -c vfc-1457171792-OLAP-002_4-000003 >> /vmfs/volumes/53136407-e8f3c456-b35c-0025b510002f/cache_results3/OLAP-002_cache4.log
sleep 300
done
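One refinement worth considering for the gathering loop is timestamping each sample so that log entries can be aligned with the workload timeline. In this sketch, echo stands in for the esxcli call so the loop can run anywhere; the cache name is the one from the first command above, and the iteration count and sleep are shortened for demonstration.

```shell
# Timestamped variant of the gathering loop. 'echo' stands in for the
# esxcli call so the sketch runs without an ESXi host; three iterations
# with no sleep keep the demonstration short.
LOG=/tmp/OLAP-002_cache1.log
: > "$LOG"
for i in 1 2 3; do
    date >> "$LOG"
    echo "esxcli storage vflash cache stats get -c vfc-1457171792-OLAP-002_1-000003" >> "$LOG"
done
wc -l < "$LOG"
```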
Cache reset script (the reset command does not generate any output, so logging was not required)
for i in $(seq 1 1 300)
do
esxcli storage vflash cache stats reset -c vfc-1457171792-OLAP-002_1-000003
esxcli storage vflash cache stats reset -c vfc-1457171792-OLAP-002_2-000003
esxcli storage vflash cache stats reset -c vfc-1457171792-OLAP-002_3-000003
esxcli storage vflash cache stats reset -c vfc-1457171792-OLAP-002_4-000003
sleep 300
done
ABOUT PRINCIPLED TECHNOLOGIES
Principled Technologies, Inc.
1007 Slater Road, Suite 300
Durham, NC, 27703
www.principledtechnologies.com
We provide industry-leading technology assessment and fact-based
marketing services. We bring to every assignment extensive experience
with and expertise in all aspects of technology testing and analysis, from
researching new technologies, to developing new methodologies, to
testing with existing and new tools.
When the assessment is complete, we know how to present the results to
a broad range of target audiences. We provide our clients with the
materials they need, from market-focused data to use in their own
collateral to custom sales aids, such as test reports, performance
assessments, and white papers. Every document reflects the results of
our trusted independent analysis.
We provide customized services that focus on our clients’ individual
requirements. Whether the technology involves hardware, software, Web
sites, or services, we offer the experience, expertise, and tools to help our
clients assess how it will fare against its competition, its performance, its
market readiness, and its quality and reliability.
Our founders, Mark L. Van Name and Bill Catchings, have worked
together in technology assessment for over 20 years. As journalists, they
published over a thousand articles on a wide array of technology subjects.
They created and led the Ziff-Davis Benchmark Operation, which
developed such industry-standard benchmarks as Ziff Davis Media’s
Winstone and WebBench. They founded and led eTesting Labs, and after
the acquisition of that company by Lionbridge Technologies were the
head and CTO of VeriTest.
Principled Technologies is a registered trademark of Principled Technologies, Inc.
All other product names are the trademarks of their respective owners.
Disclaimer of Warranties; Limitation of Liability:
PRINCIPLED TECHNOLOGIES, INC. HAS MADE REASONABLE EFFORTS TO ENSURE THE ACCURACY AND VALIDITY OF ITS TESTING, HOWEVER,
PRINCIPLED TECHNOLOGIES, INC. SPECIFICALLY DISCLAIMS ANY WARRANTY, EXPRESSED OR IMPLIED, RELATING TO THE TEST RESULTS AND
ANALYSIS, THEIR ACCURACY, COMPLETENESS OR QUALITY, INCLUDING ANY IMPLIED WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE.
ALL PERSONS OR ENTITIES RELYING ON THE RESULTS OF ANY TESTING DO SO AT THEIR OWN RISK, AND AGREE THAT PRINCIPLED
TECHNOLOGIES, INC., ITS EMPLOYEES AND ITS SUBCONTRACTORS SHALL HAVE NO LIABILITY WHATSOEVER FROM ANY CLAIM OF LOSS OR
DAMAGE ON ACCOUNT OF ANY ALLEGED ERROR OR DEFECT IN ANY TESTING PROCEDURE OR RESULT.
IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC. BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH ITS TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES,
INC.’S LIABILITY, INCLUDING FOR DIRECT DAMAGES, EXCEED THE AMOUNTS PAID IN CONNECTION WITH PRINCIPLED TECHNOLOGIES, INC.’S
TESTING. CUSTOMER’S SOLE AND EXCLUSIVE REMEDIES ARE AS SET FORTH HEREIN.