This document outlines the steps to install AIX on Power servers using a NIM server. It describes powering on the HMC and configuring its IP addresses. LPARs are then created on the Power servers and connected to the HMC. AIX is installed on the LPARs by booting from media and navigating the installation steps. SSH is enabled and the NIM master server is configured by installing filesets and defining the NIM environment. A standalone NIM client is added and connectivity is tested between the client and NIM server using SMS ping.
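The NIM workflow described above can be sketched with the standard AIX commands; network, resource, and client names below are placeholders, and the lpp_source and SPOT resources are assumed to be defined already.

```
# Initialize the NIM master (interface and network name are illustrative)
nimconfig -a netname=ent_net -a pif_name=en0 -a master_port=1058 -a platform=chrp

# Define a standalone NIM client (aix_lpar1 is a placeholder hostname)
nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
    -a if1="ent_net aix_lpar1 0" aix_lpar1

# Start a BOS install of the client from the lpp_source and SPOT resources
nim -o bos_inst -a source=rte -a lpp_source=lpp_source_71 -a spot=spot_71 aix_lpar1
```

After the bos_inst operation is enabled, the client is booted into SMS, where the ping test confirms connectivity to the NIM master before the network boot.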
AIX 6.1 introduces several new security features including role-based access control (RBAC) which allows privileged tasks to be delegated to non-privileged users. It also includes an encrypted filesystem that encrypts data for protection and an updated security tool called AIX Security Expert for centralized security management. The document discusses these features and others such as the new secure by default installation option and systems director console.
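As a minimal sketch of the RBAC delegation described above, the following AIX commands create a role carrying the shutdown authorization and hand it to an ordinary user; the role and user names are illustrative.

```
# Create a role that carries the system shutdown authorization
mkrole authorizations=aix.system.boot.shutdown dfltmsg="Shutdown operator" ops_shutdown

# Rebuild the kernel security tables so the new role takes effect
setkst

# Assign the role to a non-privileged user
chuser roles=ops_shutdown jdoe

# As jdoe: activate the role, then run the privileged command
swrole ops_shutdown
```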
WebSphere App Server vs JBoss vs WebLogic vs Tomcat (WASdev Community)
This document provides a competitive comparison of WebSphere Application Server and Liberty Profile versus Tomcat, JBoss, and WebLogic. It notes that WebSphere leverages over 100 open source software packages, contributes to over 350 open source projects, and has over 3,000 developers involved in open source. Charts from Gartner show that IBM holds the number one position in middleware software for the past 12 years according to their analysis. Additional charts and graphs show performance comparisons between WebSphere and other application servers on different hardware architectures and over time.
This document discusses best practices for backup and recovery planning. It covers common backup and recovery topics like different backup methods and topologies, the backup process, and managing backups. It also provides an overview of a typical backup application and the importance of backup reports and catalogs. The document is made up of multiple lessons intended to describe backup and recovery concepts and considerations.
This document discusses a presentation about WebLogic 12c and the WebLogic Management Pack. The presentation agenda covers Fusion Middleware, WebLogic Server, which is supported until 09/30/2017, and the WebLogic Management Pack, which is supported until 12/31/2017. The document also includes questions for the audience about their use of WebLogic.
This document provides a reference architecture for deploying Pivotal Cloud Foundry (PCF) on Dell EMC VxRail Appliances. PCF is a cloud platform that allows developers to deploy and scale applications with no downtime. VxRail Appliances integrate compute and storage on a single appliance in a hyper-converged infrastructure. This solution provides developers with a ready-to-use environment to build and deploy cloud-native applications, while leveraging existing VMware infrastructure investments. The document describes the key technologies involved, including PCF, VxRail, and VMware vSphere and vSAN. It also provides guidance on hardware and software requirements, management, storage configuration, networking, and sizing considerations for this solution.
Ibm spectrum scale_backup_n_archive_v03_ash (Ashutosh Mate)
IBM Spectrum Scale can be used as both the source and destination for backup and archiving. As a source, Spectrum Scale data can be backed up to products like Spectrum Protect, Spectrum Archive, and third-party backup software. As a destination, Spectrum Protect can use Spectrum Scale and ESS storage for storing backed up or archived data, providing scalability, performance, and cost benefits over other solutions. Case studies demonstrate how large enterprises and regional hospital networks have consolidated backup infrastructure and improved availability, capacity, and backup/restore speeds by combining Spectrum Scale and Spectrum Protect.
The document discusses the benefits of using Veritas Cluster Server (VCS) 5 for VMware ESX Server. VCS 5 provides high availability and disaster recovery for virtual machines and applications. It protects against failures at all levels from physical servers to individual applications. VCS 5 also provides granular management of virtual environments similar to physical servers and allows configurations such as M+N clusters across multiple data centers for disaster recovery.
New Tools and Interfaces for Managing IBM MQ (Matt Leming)
The document provides an overview of new tools and interfaces for managing IBM MQ, including the mqweb server, MQ REST API, and MQ Console. The mqweb server runs the MQ Console and REST API applications using WebSphere Liberty Profile. The MQ REST API allows administering MQ via REST and JSON, providing alternatives to PCF. The MQ Console is a browser-based graphical tool for administering and managing MQ queues, queue managers, and other objects.
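A minimal interaction with the administrative REST API might look like the following; the host, port, and credentials are illustrative, and the mqweb server is assumed to be started.

```
# Query the queue managers known to the installation via the admin REST API
curl -k -u mqadmin:mqadmin https://localhost:9443/ibmmq/rest/v1/admin/qmgr

# Check the status of the mqweb server hosting the Console and REST API
dspmqweb status
```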
This document discusses object storage and EMC's object storage solutions. It begins with an overview of how traditional storage is becoming inadequate to handle growing unstructured data and the advantages of object storage. Key characteristics of object storage like scalability, geo-distribution and support for large files are described. Example use cases that can benefit from object storage like global content repositories and IoT data collection are provided. The document then discusses EMC's object storage offerings like ECS and how they address the needs of these use cases through scalability, various access protocols and geo-distribution. It also covers EMC's HDFS data service and how it can address limitations of traditional HDFS.
The document provides an overview of Oracle Enterprise Manager 12c (OEM12c) with the following key points:
1. It introduces OEM12c and its capabilities for complete cloud lifecycle management including planning, building, testing, deploying, monitoring cloud services.
2. It discusses how to install OEM12c including checking requirements, using the bundle patch, and setting the correct hostname during installation.
3. It covers some common troubleshooting steps like resolving issues with configuration requirements and changing the hostname or IP address.
4. It provides some tips for OEM12c like creating scripts for starting, stopping and checking status, and backing up the admin server configuration.
This document contains a summary of Assan Samba's career experience and qualifications. He has over 15 years of experience in IT networking roles, including network engineering, administration, and support. He has extensive expertise in Cisco routers, switches, firewalls, and wireless technologies. Currently, he works as a network engineer managing network infrastructure for multiple sites.
This document provides an overview of setting up a mail server on Linux. It discusses what Linux is and its features. It then describes the key components needed for a mail server, including Bind for DNS, Httpd for a web server, Dovecot for protocols, Postfix for accepting connections, and Squirrelmail for accessing the IMAP server. Instructions are provided on installing and configuring the necessary software packages to establish a functional mail server on a Linux system.
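A minimal Postfix configuration fragment for such a setup might look like the following; the hostnames and domain are placeholders for a real deployment.

```
# /etc/postfix/main.cf (illustrative values)
myhostname = mail.example.com
mydomain = example.com
myorigin = $mydomain
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
home_mailbox = Maildir/
```

Dovecot would then be pointed at the same Maildir location so that IMAP clients, including Squirrelmail, can read the delivered mail.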
Metro Cluster High Availability or SRM Disaster Recovery? (David Pasek)
The presentation explains the difference between multi-site high availability (also known as a metro cluster) and disaster recovery. The general concepts apply to any product, but the presentation is tailored to VMware technologies.
The document provides details about installation, upgrade, hardware requirements, supported operating systems and databases for VMware ESX Server 3.0.1 and Virtual Center 2.0.1. It discusses the major components, minimum hardware requirements for VirtualCenter Server and Virtual Infrastructure Client. It also lists the supported databases, file extensions, differences between ESX and GSX, current ESX hardware version and various virtualization products.
This document provides an overview and syllabus for an AIX System Administration class that will take place over 5 days from 9:30am to 5:30pm. The class will cover topics such as Unix and AIX overviews, IBM POWER servers, installing the AIX operating system, and logging into the system. Hands-on experience will be provided through virtualized AIX systems on IBM POWER7 blades in the classroom lab network.
This document provides an overview and introduction to VMware Virtual SAN (VSAN). It discusses the VSAN architecture which uses SSDs for caching and HDDs for storage. It also covers how VSAN can be configured through storage policies assigned at the VM level. The document outlines how VSAN provides a software-defined storage solution that is hardware agnostic and can elastically scale storage performance and capacity by adding servers and disks.
Complete Training on YouTube with all topics - FREE
http://www.youtube.com/playlist?list=PLeHUvPtMTsdeaE4YBiPPZlMYVaDfKt_DH
WebLogic Application Server overview and concepts
WebLogic integration with Apache, and security hardening with multi-user realms and SSL
JMS overview with queues/topics and JMS bridges
JDBC overview with failover and HA modes
WLST and Node Manager commands and setup
WebLogic deployment concepts
Offline and online backup and recovery concepts
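As a small illustration of the WLST topic above, a typical interactive session connects to the admin server and queries a managed server's state; the credentials, URL, and server name are placeholders.

```
# Start WLST (assumes a WebLogic install with setWLSEnv.sh sourced)
java weblogic.WLST

# Inside WLST: connect to the admin server and check a managed server
connect('weblogic', 'welcome1', 't3://localhost:7001')
state('ManagedServer_1', 'Server')
disconnect()
exit()
```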
This is the planning part of a two-part session for system programmers and their managers who are planning to upgrade to z/OS V2.4. Part one focuses on preparing your current system for the upgrade to either release: the system requirements and how to get your system ready are discussed. Part two covers the upgrade details specific to moving to z/OS V2.4 from either V2.2 or V2.3. It is strongly recommended that you attend both sessions for a complete upgrade picture for z/OS V2.4.
z/OS V2.4 became generally available on September 30, 2019.
Exadata Deployment Bare Metal vs Virtualized (Umair Mansoob)
This document compares bare metal and virtualized Exadata deployments. It discusses the layout and considerations of bare metal vs Oracle VM environments. Some key differences covered include licensing benefits, workload isolation, database consolidation use cases, and maintenance processes like patching. The document also outlines pros and cons of virtualization such as improved isolation but reduced efficiency. It provides guidance on migrating physical databases to a virtualized environment with minimum downtime.
There are 5 types of FSMO roles that each serve specific purposes in Active Directory: the Schema Master updates the directory schema; the Domain Naming Master manages domain names; the RID Master assigns SIDs to new objects; the PDC Emulator synchronizes time and processes password changes; and the Infrastructure Master updates object references across domains. These roles can be seized by another domain controller if the current owner becomes permanently unavailable using the Ntdsutil tool.
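The seizure procedure mentioned above follows a menu-driven Ntdsutil session; a sketch of the transcript for seizing the Schema Master looks like the following, with DC02 as a placeholder for the target domain controller.

```
C:\> ntdsutil
ntdsutil: roles
fsmo maintenance: connections
server connections: connect to server DC02
server connections: quit
fsmo maintenance: seize schema master
fsmo maintenance: quit
ntdsutil: quit
```

Seizure should be a last resort for a permanently failed role holder; if the original owner is still reachable, a graceful transfer is the safer operation.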
Barry Hesk: Cisco Unified Communications Manager training deck 1
Cisco Unified Communications Manager (CUCM) training will cover CUCM basics and advanced configurations over three days. The instructor will use a demo environment including two CUCM servers and a Cisco Unity Connection voicemail server. Topics will include CUCM architecture, installation, upgrades, backups/restores, protocols, phones, gateways, and more. The goal is to explain how CUCM works from the perspective of a customer.
This document discusses continuous availability and data mobility solutions from EMC, including VPLEX and RecoverPoint. It provides an overview of these solutions, describing how they enable active-active configurations across data centers for always-on application access, automated disaster recovery without downtime, and non-disruptive data migration. It also shares statistics on VPLEX and RecoverPoint deployments and discusses how these solutions provide benefits like zero RPO/RTO recovery and removing restrictions of data centers and storage arrays.
The document provides an overview of the Oracle Exadata X10M Database Machine. Key points include:
- It features the latest 96-core AMD EPYC CPUs, up to 3TB of memory per database server, and 100Gb RDMA networking.
- Storage options include High Capacity servers with 264TB disk and 27.2TB flash, Extreme Flash servers with 122.88TB flash storage, and Extended servers with 264TB disk.
- The machines deliver extreme performance and scalability for all database workloads through automated management and database-optimized hardware and software.
Clone Oracle Databases In Minutes Without Risk Using Enterprise Manager 13c (Alfredo Krieg)
1) Oracle Enterprise Manager allows users to clone Oracle databases in minutes without risk by using its snap clone functionality.
2) Snap clones provide rapid, space efficient cloning of databases across storage systems. They also enable integrated database lifecycle management.
3) Enterprise Manager provides both administrator-driven and self-service user workflows for creating snap clones of databases for testing and development.
This document discusses IBM's virtualization technology for System p servers. It provides an overview of logical partitioning (LPAR), micro-partitioning, and dynamic LPAR capabilities. It also describes virtual I/O technologies like virtual SCSI and virtual Ethernet that allow sharing of network and storage devices between partitions. The document notes that these virtualization features help optimize resource utilization and reduce hardware costs through server consolidation.
Virtualization technologies allow servers to be consolidated onto fewer physical servers for improved efficiency. IBM's PowerVM allows one physical server to be divided into multiple logical partitions (LPARs), with each LPAR able to run its own operating system. Key PowerVM technologies include micro-partitioning which divides physical CPUs among LPARs, dynamic LPARs which moves resources between active partitions, and virtual I/O servers which allow partitions to share physical network and storage adapters. These technologies improve utilization, flexibility, and availability compared to using separate physical servers.
The document provides an introduction to operating systems, covering their basic functions and components. It discusses how operating systems manage hardware resources and provide abstraction for applications. The key components described include the kernel, drivers, utilities, and applications/processes. It also covers process scheduling, file systems, APIs/system calls, memory management, and popular operating systems like IBM z/OS, IBM i, and OpenVMS.
The document provides an overview of operating systems, including processes, threads, interprocess communication, deadlocks, and scheduling. It discusses the evolution of operating systems from first to fourth generation. Key concepts covered include processes, files, system calls, command interpreters, and signals. Operating system structures like monolithic, layered, and client-server models are summarized. Common interprocess communication problems like the bounded buffer, readers-writers, and dining philosophers problems are also briefly outlined. Finally, it discusses process scheduling algorithms, deadlock conditions and strategies to handle deadlocks.
Future of Power: AIX in Future - Jan Kristian Nielsen (IBM Danmark)
The document discusses upcoming features and enhancements for IBM Power Systems and AIX. Key points discussed include:
- Exploiting Power8 systems while maintaining support in Power7/Power6/Power6+ compatible modes.
- Enhancements to AIX such as live kernel updates, WPAR lifecycle management, and mksysb backups while systems are running.
- Trends in computing like more parallelization being required and technologies like transactional memory and accelerators.
- Simplification and automation features for manageability including system templates and REST APIs.
The document discusses IBM's PowerVM virtualization technology. It describes how PowerVM allows a single physical server to be divided into multiple logical partitions (LPARs), each with dedicated or shared CPUs and memory. It also discusses micro-partitioning which divides physical CPUs into smaller virtual CPUs that can be allocated in fractions to LPARs. The document further explains how LPARs can share physical I/O adapters through virtual I/O servers and how live partition mobility allows moving running LPARs between physical servers without downtime.
Operating Systems 1 (3/12) - Architectures (Peter Tröger)
The document discusses operating system architectures and provides examples of different architectures. It begins by explaining that operating systems provide a layered architecture that hides the complexity of underlying hardware and manages system resources. It then provides examples of monolithic, layered, and microkernel architectures. Specifically, it summarizes Linux as having a layered kernel and user mode, VMS and Windows as having layered designs with executive and kernel layers, and microkernel designs placing most functionality in user-mode servers.
Student guide power systems for aix - virtualization i implementing virtual... (solarisyougood)
The document describes key concepts of logical partitioning for IBM Power Systems, including:
1) Partitions allocate system resources to create logically separate systems within the same physical footprint managed by the PowerVM Hypervisor.
2) Resources like processors, memory, and I/O can be dynamically allocated to partitions using DLPAR.
3) Advanced features allow shared processor pools, virtual I/O, live partition mobility, and capacity on demand.
4) The Hardware Management Console (HMC) manages partition configuration and resources.
This document discusses operating system structures. It covers operating system services, user interfaces, system calls, system programs, design and implementation, structure, debugging, generation, and booting. The key points are:
- Operating systems provide services like process and resource management, file systems, I/O, and protection. They have user interfaces like command lines and GUIs.
- System calls are the programming interface to OS services. They are implemented via system call tables. Common types include process, file, device, and protection calls.
- Operating systems are structured in various ways like layered, microkernel, and hybrid approaches. Modern OSes use modules and dynamic loading.
- System programs provide user interfaces
The document provides an overview of the Linux operating system, including:
- An introduction to Linux and its history as an open-source clone of UNIX.
- Descriptions of Linux's core functionality like multi-user support and virtual memory.
- Discussions of key Linux components like kernels, distributions, packages, and updates.
- Explanations of enterprise-level Linux features around performance, scalability, and reliability.
This document summarizes the history and development of the Xen virtualization project. It discusses how Xen addressed the issues with server sprawl and lack of isolation in early operating systems. It describes the benefits of server consolidation and manageability that virtualization provided. It also outlines the different approaches Xen took to virtualizing memory management and network interfaces to improve performance.
This document provides an overview of how to create your own cloud using Apache CloudStack. It discusses the key characteristics of clouds, different cloud service and deployment models supported by CloudStack, and the core components that make up a CloudStack deployment including zones, pods, clusters, primary and secondary storage, virtual routers, hypervisors, and the management server. The document also touches on CloudStack's networking, security, high availability, resource allocation, and usage accounting features.
This document provides the resume of Kalesha Ananthu, who has over 7 years of experience as a Senior AIX System Administrator. They have extensive expertise in AIX, HACMP, PowerHA, VIO and some experience in Red Hat Linux. They are currently working as an Sr. AIX Administrator at Hewlett Packard Enterprise and have previously worked at IBM India Private Limited. Their experience includes AIX system administration, cluster configuration, storage management, performance monitoring and troubleshooting. They hold professional certifications from IBM.
Os concepts 5 Storage and IO VirtualizationVaibhav Khanna
This document discusses storage and I/O virtualization, including characteristics of different storage types and how data moves through storage hierarchies. It describes the I/O subsystem's role in hiding hardware details from users through buffering, caching, and spooling. Protection and security mechanisms like access control lists and privilege escalation are examined. Virtualization allows multiple operating systems to run on a single machine through emulation, virtualization, or a virtual machine manager. Distributed systems connect separate systems over a network, with a network OS facilitating communication to create the illusion of a single system.
The document provides an overview of operating systems, including:
1. An operating system acts as a virtual machine that hides complex hardware details and provides services through system calls. It manages resources like time and memory allocation.
2. Core OS functions include process management, I/O device control, resource access control, error handling, and accounting. The kernel contains frequently used functions and privileges.
3. Early systems used serial processing but evolved to batch processing, multiprocessing, time-sharing, and today's graphical user interfaces across networks.
The document provides an introduction to operating systems, describing what an operating system is and how it acts as an intermediary between the computer hardware and user. It discusses the history of early operating systems and how they have evolved from simple batch processing systems to modern multi-tasking systems like Linux and Windows. The key components, structure, and functions of operating systems are explained, including how operating systems manage hardware resources, execute programs, and provide common services to users and programs.
This document outlines the structure and content of a course on operating system principles. It is divided into 5 units that cover topics like process management, memory management, distributed systems, and synchronization. The introduction defines key parts of a computer system like the operating system, hardware, and users. It describes the role of the operating system in allocating resources and controlling devices and programs. Examples are given of popular desktop, mobile, and server operating systems.
Windows is a popular operating system that runs on both PCs and servers. It provides a large collection of software solutions due to its popularity. While early versions of Windows were not true operating systems, modern versions like Windows Server provide stable and secure platforms for business applications and services. Failover clustering allows applications to remain highly available by failing over from one node to another in the case of hardware or software failures. The performance of an operating system depends on the underlying hardware, application load, and OS configuration.
Bca i-fundamental of computer-u-3-functions operating systemsRai University
The document discusses operating systems and their functions. It defines an operating system as software that manages computer hardware resources and provides interfaces. The key functions of an operating system include managing the CPU, RAM and hardware; providing security; and providing interfaces for users and application software. Common operating system types include batch processing, real-time processing, time-sharing, and examples of Windows, Linux/Unix, and mobile operating systems.
Operating System definitions and about system calls
Operating System Services
User and Operating System-Interface
System Calls
Types of system calls
System Programs
Presentation: AIX Workload Partitions (WPARs)
1. Materials may not be reproduced in whole or in part without the prior written permission of IBM.
2011
IBM Power Systems Technical University
October 10-14 | Fontainebleau Miami Beach | Miami, FL
AIX Workload Partitions (WPARs)
Session: VN24
Mehboob Mithaiwala
mehboobm@us.ibm.com
IBM Advanced Technical Skills
2.
IBM Power Systems Technical University Miami 2011
Session Evals in 3 Easy Steps
Go to: http://ibmtechu.com/up1
Select Register button and complete the form
(One time only)
3.
Session Evals in 3 Easy Steps
Select “Session Evals” button and complete the online form
VN24
Data pre-filled if you used the agenda planner
4.
AIX Workload Partitions Specific Sessions
VN23 – LAB: Working with AIX Workload Partitions (WPARs)
Wednesday 2:30 – 3:45 Flicker 2
Thursday 1:00 – 2:15 Flicker 1
VN24 – AIX Workload Partitions (WPARs)
Thursday 10:30 – 11:45 Splash 13
VN25 – IBM PowerVM Workload Partitions Manager for AIX
Thursday 4:15 – 5:30 Glimmer 7
VN26 – Versioned Workload Partitions (WPARs) for AIX 7
Wednesday 1:00 – 2:15 Glimmer 7
Friday 10:30 – 11:45 Glimmer 3
5.
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs vs. LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
6.
IBM Power Systems Virtualization Continuum
AIX Workload Partitions (WPARs) complement Logical Partitions
WPARs integrated with Workload Manager (WLM) classes
[Chart: virtualization continuum, plotting workload isolation against resource flexibility]
• AIX Workload Manager – AIX V4.3.3 on POWER3 or later
• LPAR/Micro-Partitions – AIX V5.3 on POWER5 or later
• Workload Partitions – AIX 6 on POWER4 or later
Timeline: AIX 4.3.3 (1999), AIX 5.3 (2004), AIX 6.1 (2007)
7.
PowerVM LPARs and AIX Workload Partitions
[Diagram: Power Systems hardware running PowerVM virtualization, hosting LPARs for IBM i (SAP), AIX 6 with WPARs (WebSphere, DB2), AIX 5 (DB2), and Linux (Apache)]
Workload Partitions
• Multiple workloads inside a single AIX
• AIX 6 or newer feature
• Simple, efficient, less work
Logical Partitions
• Multiple separate OS instances in a server
• Power Systems hardware feature
• More disks, more RAM, more admin
8.
Workload Partitions (WPARs) Concept
Consolidation of isolated workloads within a single AIX instance
Pre-requisite: AIX 6 or higher
WPAR Manager Live Application Mobility
– Move WPARs between AIX instances without restarting the applications
[Diagram: many AIX 6 LPARs, each running a single application, consolidated into a few AIX instances that each run several applications as WPARs (App = WPAR)]
9.
AIX Workload Partitions – in a Nutshell
• Virtualized OS environments within a single AIX image
• Each Workload Partition (WPAR)
– shares the single AIX OS kernel
– is a separate admin and security domain
– can get a regulated share of CPU, memory, and other resources
• Applications and users inside a WPAR cannot affect resources outside the WPAR
• Two types of WPARs
– System WPARs have separate security and appear like a completely separate OS
– Application WPARs are a manageability wrapper around a single application
[Diagram: a single AIX image (single memory space) hosting Workload Partitions for an app server, web server, billing server, test server, and DB2]
10.
Reasons for Workload Partitions (WPARs)
1) Reduced AIX system administration
2) Application encapsulation, monitoring and control
3) Rapid environment creation for a new application
4) Separate admin/security at the WPAR level
5) Live Application Mobility *** (can be automated, policy-based)
6) Reduced memory use (minimum WPAR ~64 MB)
*** Requires WPAR Manager
11.
AIX Workload Partitions – How can they help?
Go Green & Save
• Consolidate workloads from underutilized servers
• Run multiple instances of the same application
Manage Growth, Complexity, & Risks
• Reduces cost & complexity by
– reducing number of AIX OS instances to manage
– single application maintenance can apply to many WPARs
• Provides increased flexibility by
– allowing administrators to quickly create or clone WPARs
• Improves administrative efficiency by
– providing for delegated root users
• Supports POWER4, POWER5, POWER6 and POWER7
Realize Innovation
• Provides new ways to manage your IT infrastructure
• Allows consolidation on a scale not previously practical
12.
Types of WPARs
• Static WPARs
– Application WPARs
• No file system isolation
– System WPARs
• Process level and file system isolation
• Look and feel of a stand-alone system
• File system usually on local disk
• Mobile WPARs
– Mobile Application WPARs
– Mobile System WPARs
• NFS based System WPARs
– File system on a NFS server (accessible from departure and arrival)
• SAN based System WPARs
– File system on a SAN (accessible from departure and arrival)
• VIOS vSCSI disk based System WPARs
– File system on VIOS allocated vSCSI disk (accessible from departure and arrival)
Basic WPAR Lifecycles - Application WPARs
– Isolate an individual application
– Light-weight, single application
• can start further processes
– Created & started in seconds
– Starts when created
• when application starts
– Automatically removed
• when application stops
– Shares global file systems
– Good for HPC
• Long running applications
Application Workload Partitions provide lightweight isolation
Create and run
Stop and remove
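The create-run / stop-remove lifecycle above can be sketched with a single command (the WPAR name and job script are illustrative):

```shell
# Create and start an application WPAR around one command;
# the WPAR exists only while the command runs
wparexec -n batch01 /opt/batch/run_job.sh

# From the global environment, while the job runs:
lswpar                      # lists batch01 as an application WPAR

# When run_job.sh exits, batch01 is removed automatically
```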
Basic WPAR Lifecycles - System WPARs
– Complete virtualized OS environment
• Runs multiple services and applications
– Needs to be created
– Removed only when requested
– Like stand-alone AIX system
• Own system services like inetd, cron, syslog
• Own root user, users, and groups
• Can be stopped and restarted
– Isolated file systems
• /, /tmp, /var, /home
• Optional: /usr, /opt
– Integrated with RBAC
• granular privilege and security
– Good for most purposes
[Lifecycle diagram: Create → Defined state; Start → Active (Run); Stop → back to Defined; Remove]
System Workload Partitions are easy to setup and manage
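The Defined/Active lifecycle above maps onto a handful of commands (the WPAR name is illustrative):

```shell
mkwpar -n websrv            # Create  -> Defined state
startwpar websrv            # Start   -> Active state
clogin websrv               # console login as the WPAR's root user
stopwpar websrv             # Stop    -> back to Defined
rmwpar websrv               # Remove  -> configuration deleted on request
```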
Workload Partitions Isolation
• Process environment
– Processes only see and can signal other processes within WPAR
– Processes can only construct IPC mechanisms within a WPAR
• Pipes, shared memory, semaphores & message queues
• File system space (system WPARs)
– Separate file systems created & WPAR processes chroot to these
– Processes within a WPAR only see files in these file systems
– Typically unique copy of / (root), /home, /var, /tmp file systems
– Default read-only /usr, /opt from Global AIX
• Network environment
– WPAR has separate IP address, hostname, and hostids
– For users, other nodes on the network, and network applications
• WPAR appears to be a stand-alone system
– Users may login to the System WPAR using telnet, ssh, rlogin, etc.
Processes (view from Global)
• root@ec10 / # ps -ef -@
File systems (view from Global)
File systems (view from WPAR)
Network Environment
• From Global
• From WPAR
Workload Partitions Environment
• Security
– WPAR root is less privileged than Global root
– Integrated with Role Based Access Control (RBAC)
– Configurable privilege profile applied to WPAR at creation
– Included as part of AIX Common Criteria Certifications
• System services
– Mail, NFS client, inetd, syslog, cron, ... executed independently per WPAR
• Resource Controls
– Memory, CPU, paging space, PTYs
– Threads and process counts, other resources allocated to WPAR
• WPAR Serviceability
– Error Logging, Trace, Auditing, Accounting from within a WPAR
– WPAR aware statistics and performance tools (e.g. vmstat, wlmstat, topas,
iostat, svmon, …)
Workload Partitions Environmental Differences
• No global system view from within a WPAR
• Privileged operations with system-level effects are disallowed
from a WPAR
– e.g. adjusting the system clock, setting timer granularities, setting system-wide
tuning options
• By default /usr and /opt are shared read-only in the WPAR
– Applications cannot install in these read-only locations
• Can mount read-write directories onto read-only /usr and /opt
– Application supporting relocatable install (USIL) can be installed in an
alternate location
– Optionally, /usr and /opt can be private and read-write for the WPAR
• Creates a private copy for the WPAR at creation time ( -l flag )
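For example, private writable copies of /usr and /opt can be requested when the WPAR is created (the WPAR names below are illustrative):

```shell
# Default: /usr and /opt are shared read-only from the global environment
mkwpar -n shared01

# -l gives the WPAR its own private, writable copies of /usr and /opt,
# so applications can install into them (larger footprint, slower create)
mkwpar -n private01 -l
```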
Workload Partitions (WPARs) Product Offerings
• Base WPAR comes with AIX 6 / AIX 7
– Base Native WPAR functions
– AIX command line / SMIT interface
– No Live Application Mobility
• Workload Partitions Manager for AIX
– FC 5765-G83
– Systems Director Plug-in (60 day trial)
– Enablement for Live Application Mobility
– Manual or Automated (Policy-based)
Application Mobility
– Cross System Management for WPARs
– Licensed Program Product
• part of AIX Enterprise Edition
Versioned WPARs Product Offerings
• AIX 5.2 WPARs for AIX 7
– FC 5765-H38
– Licensed Program Product (Fee)
– GA: September 10, 2010
– POWER7 & AIX 7.1
– SWMA includes phone and limited
defect support for AIX 5.2
• AIX 5.3 WPARs for AIX 7
– FC 5765-WP7
– Licensed Program Product (Fee)
– GA: October 14, 2011
– POWER7 & AIX 7.1
– SWMA includes phone and limited
defect support for AIX 5.3
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
AIX running Multiple Applications
[Diagram: Applications A and B running as ordinary processes on one AIX kernel, sharing the CPUs and memory of the machine or LPAR and a single global file system: / with /etc, /usr, /home, /opt, /var, /bin, /sbin, /appA, /appB]
Global AIX & WPARs
[Diagram: the same AIX kernel now hosting Global AIX plus WPAR A and WPAR B; each WPAR gets its own file tree (/etc, /usr, /home) under /wpars/A and /wpars/B, while all processes still share the CPUs and memory of the machine or LPAR]
Global AIX & WPARs Isolation of Processes
[Diagram: Global AIX, WPAR A, and WPAR B sharing one AIX kernel and the CPUs and memory of the machine or LPAR]
Global AIX “sees” everything & a WPAR only itself
– e.g. the ps command, signals, IPC, etc.
Each WPAR has its own init, inetd, syslog, etc.
Global AIX & WPAR Isolation of File Systems
Global AIX “sees” All Files & WPAR only their files
[Diagram: WPAR A and WPAR B processes are chroot’ed to /wpars/A and /wpars/B respectively, so each sees only its own /, /etc, /usr, /home; Global AIX sees the full file tree]
Global AIX & WPAR Networking
[Diagram: Global AIX, WPAR A, and WPAR B on one AIX kernel; each WPAR reaches the network using IP aliases on the global adapters]
Network packets are passed to the right process = the right WPAR
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
WPAR Resource Management
• Resources are controlled by the WPAR configuration
• WPAR resource control is performed in the global AIX
• WPARs may be associated with a resource set (rset)
• Each WPAR has its own set of allowed privileges (WPAR
privilege set)
– commands and processes inside a WPAR get, at most, the inherited
privileges of that WPAR privilege set
• WPAR root is less privileged than root in the global AIX
• Configurable privilege profile applied to WPAR at creation
(-S flag)
Resource Control
• GUI WPAR Manager Edit WPAR
• Command-Line Interface (CLI)
chwpar –R <attributes> <wpar_name>, where attributes include:
– active=yes|no, rset=rset
– shares_CPU=n, CPU=min%-soft%,hard%
– shares_memory=n, memory=min%-soft%,hard%
– totalProcesses=n, totalThreads=n
– procVirtMem=n[M|G|T], totalVirtMem=n[M|G|T]
– totalPTYs=n, totalLargePages=n
– pct_msgIDs=n%, pct_semIDs=n%
– pct_shmIDs=n%, pct_pinMem=n%
Examples:
– Switch control on (default) chwpar –R active=yes wp04
– 200 CPU shares chwpar –R shares_CPU=200 wp04
– Set min, softmax, hardmax chwpar –R CPU=10-50,75 wp04
– Use only CPU four (rset) chwpar –R rset=sys/cpu.00004 wp04
• Removing resource control
use –K option:
chwpar –K –R rset wp04 or chwpar –K –R shares_CPU wp04
then flip “Resource Control” off and back to “Activate” (-R active=no/yes)
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
WPAR Commands
• Basic management:
– mkwpar create a system WPAR
– wparexec create and run an application WPAR
– lswpar list WPAR details and status
– chwpar make changes to the WPAR
– rmwpar remove the WPAR
– clogin Initiate a user console session
– startwpar activate a system WPAR
– stopwpar deactivate an active WPAR
– rebootwpar stop and restart a system WPAR
• Software maintenance:
– syncwpar synchronize global environment and WPAR software levels
– syncroot synchronize non-shared portion of installed software
– inuwpar perform software installation in a detached WPAR
– migwpar after OS migration of global environment to AIX 7, migrates a
WPAR that was created on global AIX 6 to AIX 7 (AIX 7.1 only)
WPAR Commands (contd.)
• Backup and Restore:
– savewpar backup WPAR files
– restwpar recreate the WPAR from backup image
– mkwpardata create a file used by the savewpar / restwpar
– restwparfiles restore files from a backup source
– lssavewpar lists the contents of a WPAR backup
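A minimal backup-and-restore round trip with these commands might look like this (the WPAR name and backup path are illustrative):

```shell
savewpar -f /backups/websrv.bff websrv    # back up WPAR files to an image
lssavewpar -f /backups/websrv.bff         # list the contents of the backup
restwpar -f /backups/websrv.bff           # recreate the WPAR from the image
```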
Performance Monitoring for WPARs
• Performance tools updated to support proper tracking and
filtering of system performance data for WPARs
• Global convention “-@” flag denotes WPAR specific operations
Flag/Argument – Behavior in Global environment – Behavior in WPAR
• None
– Global: executes normally with no changes from previous versions of AIX
– WPAR: executes the default report and displays an @ above the metrics specific to the WPAR
• -@
– Global: many commands support -@ alone; some fail with a “workload partition not found” message and need a WPAR name
– WPAR: some commands support the -@ flag, e.g. iostat / vmstat provide system-level statistics
• -@ Global
– Global: prints relevant information for the global environment only
– WPAR: the Global option is not allowed
• -@ WPARName
– Global: prints relevant information for the given WPAR only; equivalent to running the command inside the specified WPAR
– WPAR: fails with a usage message, as the -@ WPARName option is illegal inside a WPAR
• -@ ALL
– Global: executes normally and prints a summary of all WPARs, with the workload partition name displayed (does not apply to commands that provide no summaries)
– WPAR: fails with a usage message, as the -@ ALL option is illegal inside a WPAR
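For example, run from the global environment (the WPAR name is illustrative):

```shell
ps -ef -@                 # all processes, labeled with their WPAR name
vmstat -@ ALL             # memory/CPU summary per WPAR plus global
iostat -@ websrv          # statistics for one specified WPAR only
```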
SMIT Panels
• SMIT fastpath: root@p7wphost1:/# smit wpar
IBM PowerVM Workload Partitions Manager for AIX
IBM PowerVM Workload Partitions Manager for AIX
• What is it?
– IBM Systems Director advanced manager plug-in (60-day trial)
• federates management of WPARs across multiple systems
– WPARs can be created, cloned, stopped, started and monitored from
a single browser-based console
– Enablement for AIX Live Application Mobility
– Part of AIX Enterprise Edition or can be purchased separately
• How can it help?
– Single point of management for all WPARs and enablement for AIX
Live Application Mobility
– Can reduce cost and complexity through centralized management of
WPARs across multiple systems
– Policy based relocation and federated management of WPARs
provides new ways to manage your IT infrastructure
AIX Live Application Mobility
• Move a running Workload Partition from one server to another
for outage avoidance and multi-system workload balancing
• Works on any hardware supported by AIX 6 and AIX 7
[Diagram: Workload Partitions Manager for AIX applies a policy to relocate the Billing WPAR from AIX #1 (Web, App Server, Billing, Data Mining WPARs) to AIX #2 (Dev, QA, e-mail WPARs)]
WPAR’s filesystem on Shared NFS or SAN Storage
AIX Live Application Mobility – Why? & How?
• Multi-system workload balancing
– Optimize the resource usage of CPU, memory, and I/O
– Vacate a machine to avoid application outages ( upgrade hardware, AIX, or
firmware or for machine repair )
• Mobility requires shared storage model
– WPAR filesystem has to be on shared NFS, SAN storage, or VIOS vSCSI
• Requires departure and arrival systems to be compatible
– Same processor family
– Software levels on source and target systems must match
– Networking: source and target systems on same subnet
– For IPv6 networks: NFSv4 is required
• Policy Based Relocation
– WPAR needs to be tagged as “Policy Enabled”
– WPAR must be checkpointable
– Both Global Environments belong to same Relocation Domain
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
Workload Partitions Enhancements
AIX 6.1 GA (Nov 2007)
– Base Support (including NFS based mobility)
AIX 6.1 TL01 (April 2008)
– NFS Support (NFS Client)
AIX 6.1 TL02 (Nov 2008)
– Named Network Interfaces
– Asynchronous mobility performance improvements
– Network Isolation - optional WPAR-specific routing
• Addresses requirement for firewall/boundary enforcement,
between WPARs on the same node
• WPAR shares global routes (default)
– NIM Client Support for WPAR
– IPv6 Support for WPAR
– Additional Resource Control (PTYs, LargePages, IPC, PinnedMem)
Workload Partitions Enhancements (contd.)
AIX 6.1 TL03 (May 2009)
– WPAR Storage Devices Support
• Storage devices must be connected via SAN
• Disk, Tape, and CD-ROM devices supported
– Clone WPAR
• Create another instance of an existing WPAR
• No need to recreate WPAR and re-install application(s) if another WPAR
instance is required
AIX 6.1 TL04 (Oct 2009)
– WPAR owned Root Volume Group (rootvg)
• File system management within WPAR
• WPAR owns all the root file systems (/, /tmp, /var, etc.)
– SAN based mobility
– Virtual Memory Limit per WPAR
– IBM PowerVM Workload Partitions (WPAR) Manager 2.1.0
• IBM Systems Director advanced manager plug-in
Root Volume Group
• Global-owned Root Volume Group
– WPAR file systems (/wpars/alice/tmp, /wpars/alice/home, /wpars/alice/var, …) live on global disks (e.g. hdisk0); /usr and /opt are shared
– Quick provisioning
– Shared Global file system resources
– Global admin manages file systems
– File systems must be NFS for mobility
• WPAR-owned Root Volume Group
– WPAR owns all root file systems (/, /tmp, /home, /var) on a dedicated disk (e.g. hdisk20)
– Greater file system isolation
– WPAR admin manages file systems
– JFS2 file systems for mobility
WPAR SAN Mobility (conceptual picture)
[Diagram: WPAR Manager relocates mobile SAN WPAR “Alice” from Node A to Node B; Alice’s root and non-root file systems reside on SAN disks in a shared SAN disk array]
Workload Partitions Enhancements (contd.)
AIX 6.1 TL04 (contd.) (Oct 2009)
– Support for fuser command in WPAR
– VxFS Support
• Support via namefs mount (any fs that supports POSIX fs semantics)
AIX 6.1 TL05 (April 2010)
– WPAR Error Logging Framework
AIX 6.1 TL06 (Sept 2010)
– VIOS vSCSI disk support
– WPAR upgrade (migration to AIX 7.1) support
AIX 7.1 (Sept 2010)
– FC Adapter + resources attached under it, MPIO management
– Trusted Kernel Extensions support
– AIX 5.2 WPARs for AIX 7 (Versioned WPARs) *** (LPP)
FibreChannel Support in WPARs
• Current Shared Adapter Model
– Supports sharing standard FC between WPARs
– WPAR gets a LUN
– Global AIX manages MPIO and FC devices
– Supports default AIX MPIO and limited storage devices
– New support for VIOS vSCSI disk
• New System WPAR-owned FC Adapter (AIX 7.1 only)
– No relocation possible
– Not supported with Versioned WPARs
– WPAR gets the FC adapter
– WPAR manages MPIO and FC devices
– Architecturally supports
• any multi-pathing solution
• any storage device
FibreChannel, Virtual SCSI, NPIV FibreChannel Support
[Diagrams: NPIV FibreChannel, virtual SCSI disk, and physical FibreChannel disk configurations — in each case a Storage Subsystem LUN on the Storage Area Network surfaces as hdisk0 in the client LPAR and is presented to WPAR A]
Storage subsystem:
– LUN (Logical Unit Number) – a unit of storage on the storage subsystem; eventually mapped (via vscsi, physical FC, or NPIV FC) to an hdisk## on the LPAR
VIOS devices:
– fcs0 – the logical device name of the FC adapter; has a unique WWPN (World Wide Port Name)
– vtscsi – the virtual target device on the VIOS; maps to a vscsi hdisk on the LPAR
– vhost – the vscsi host device on the VIOS; maps to a vscsi adapter device on the LPAR
– vfchost – the NPIV host device on the VIOS; maps to an NPIV fcs0 adapter device on the LPAR and generates its WWPN
Client LPAR (global) devices:
– fcs0 – the logical device name of the FC adapter; can be either physical or NPIV; has a unique WWPN (for NPIV, generated by the VIOS)
– vscsi – the vscsi adapter device on the LPAR
• These configurations not yet supported by WPAR Manager (Mobility NOT possible)
Adapter support Inside a WPAR (AIX 7.1 only)
[Diagrams: FibreChannel and NPIV FibreChannel adapter configurations with the fcs0 adapter owned by WPAR A inside LPAR 1; the Storage Subsystem LUN surfaces as hdisk0 inside the WPAR]
Storage subsystem:
– LUN (Logical Unit Number) – a unit of storage on the storage subsystem; eventually mapped (via vscsi, physical FC, or NPIV FC) to an hdisk## on the LPAR
VIOS devices:
– fcs0 – the logical device name of the FC adapter; has a unique WWPN (World Wide Port Name)
– vfchost – the NPIV host device on the VIOS; maps to an NPIV fcs0 adapter device on the LPAR and generates its WWPN
Client LPAR (global) devices:
– fcs0 – the logical device name of the FC adapter; can be either physical or NPIV; has a unique WWPN (for NPIV, generated by the VIOS)
Trusted Kernel Extensions (AIX 7.1 only)
• Some applications need kernel extension(s) to be loaded
• AIX 7.1 adds support for System WPARs to load one or more
trusted kernel extension(s)
• Use the –X flag with mkwpar or chwpar commands to specify
the extensions to be loaded by the WPAR
• Extension information can be provided
– in a file:
• #mkwpar –n mywpar –X exportfile=filename . . . .
– as attributes on the command line:
• #mkwpar –n mywpar –X kext=/usr/lib/drivers/skippy
• Information on extensions configured for WPARs can be listed:
– #lswpar -X
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
AIX 5.2 WPAR for AIX 7 - Product Info
• AIX 5.2 level supported is final SP of final TL (TL10 SP8)
– AIX 5.2 system must be brought to 5200-10-08 to be supported
• WPAR restrictions apply
– HACMP support from global
– NFS server not supported in a WPAR
– Device support is limited to devices directly supported in WPAR
• AIX 5.2 Workload Partitions for AIX 7 is an LPP
• Only supported on POWER7 hardware
• AIX 5.2 is not provided
– client must already have AIX 5.2 environment
AIX 5.3 WPAR for AIX 7 - Product Info
• AIX 5.3 level supported is current SP of current TL (TL12 SP4)
– AIX 5.3 system must be brought to 5300-12-04 to be supported
• WPAR restrictions apply
– HACMP support from global
– NFS server not supported in a WPAR
– Device support is limited to devices directly supported in WPAR
• AIX 5.3 Workload Partitions for AIX 7 is an LPP
• Only supported on POWER7 hardware
• AIX 5.3 is not provided
– client must already have AIX 5.3 environment
Keeping Up
• Many Power Systems clients have
– large numbers of older generations of POWER servers
– running AIX 5.2 / AIX 5.3
• Lower performance and less energy efficient hardware
– compared to POWER7
• Usually for Tier 2 applications
• They would like to consolidate them but:
– lack the time & resources to upgrade AIX and the rest of the
application stack
Why clients are still on older AIX releases
• Why still running on 4- to 10-year-old systems?
– If it ain’t broke, don’t fix it
– Application not supported on a newer AIX release
– AIX upgrade to 6.1 / 7.1?
= retesting newer middleware & application versions & the OS
“Are you crazy!!” “No way man!!”
• Versioned WPARs offer:
– Same AIX, same Middle-ware and same Apps
• on new POWER7 hardware with AIX 7 features
– Lots of hidden bonuses
Versioned WPARs – Client Benefits
• New Licensed Program Product offerings
• Simplify migration of old AIX 5.2/5.3 workloads to POWER7
– Runs on AIX 7 and a POWER7 processor-based server
• Client benefits
– Simplifies consolidation of old workloads on new hardware
– Reclaims floor space and eliminates hardware support for old servers
– Reduced electricity & cooling costs
– Protects customer investment in application stacks
– Enables advanced capabilities
• VIOS virtual disks, virtual networks, Live Partition Mobility
• SMT4, AMS, AME, …
• Live Application Mobility (requires WPAR Manager)
– Boost performance
• reduce CPU count = reduced software licenses
– Provides a way for AIX 5.2 clients to move up to POWER7
– Offering includes phone and limited fix support for AIX 5.2 / AIX 5.3
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
WPAR versus LPAR
• Hardware partitions (LPARs, Virtual Systems)
– Run multiple OS’s on a single hardware system
• i.e. AIX, IBM i, and Linux
– I/O sharing across OS’s (with VIOS)
– Best resource control, isolation
– LPAR Mobility – can move a partition to a different hardware system
• AIX Workload Partitions (WPARs)
– Single AIX instance with multiple AIX WPARs
– Protect applications from interfering with each other (“good enough”
isolation)
– Confines root users to specific resources
– Easy to setup
– WPAR/Application Mobility – with WPAR Manager (AIX EE)
WPAR versus LPAR (contd.)
• Creation
– Create WPAR in seconds & LPAR in 10s of minutes
– LPARs require VIOS LVs/disks/LUNs – WPARs support NFS + the Global AIX filesystem
– Rapid cloning is easy = “disposable images” – create, use, & delete
• Less memory
– AIX boot memory per LPAR = 512MB but WPAR ~64MB
– Share application code. WPARs = lower man-power, disk, memory
• Less updates
– Maintain 1 Global AIX plus syncwpar command – not dozens of AIX
– Run WPAR “buckets”: keep Global AIX instances at different AIX levels; upgrading a
WPAR means moving it to the right bucket + syncwpar
• Less admin
– Global AIX admin can access ALL WPAR processes and files
– Global AIX simpler Performance Monitoring than lots of AIX LPARs
– WPAR mess up! Use Global AIX to fix it!
– Backups are easy & small – default WPAR backup ~75MB file
– AIX 7.1 upgrade
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
IBM Products Supporting Shared System WPARs
• Subset of IBM Products that support shared system WPARs
– DB2 Enterprise Server Edition – 9.7
– Lotus Domino – 8.5.1, 8.5.2
– Tivoli Directory Server – 6.1, 6.2
– IBM License Metric Tool – 7.2, 7.2.1
– WebSphere Application Server Community Edition – 2.1
– WebSphere MQ – 6.0.2, 7.0, 7.0.1
Note: This is not an exhaustive list. Since there are multiple
types of shared system WPAR configurations, please consult
individual product documentation to understand the details and
any limitations of their shared system WPAR support
IBM Products Supporting Detached System WPARs
• Subset of IBM Products that support detached system WPARs
– DB2 Enterprise Server Edition – 9.1, 9.5, 9.7
– Lotus Domino – 8.5.1, 8.5.2
– Tivoli Directory Server – 6.1, 6.2
– Tivoli Access Manager for e-Business – 6.1, 6.1.1
– Tivoli Composite Application Manager for WebSphere - 6.1
– IBM License Metric Tool – 7.2, 7.2.1
– WebSphere Application Server Community Edition – 2.1
– WebSphere Application Server – 6.1, 7.0
– WebSphere MQ – 6.0.2, 7.0, 7.0.1
– WebSphere Enterprise Service Bus – 7.0
– WebSphere Portal Server – 6.1.5, 7.0
– WebSphere Process Server – 7.0
– IBM SDK for Java – 5.0 SR8+ 6.01+
– WebSphere eXtreme Scale - 7.1
DB2 in WPARs
• DB2 installed inside Detached (non-shared) System WPARs
• DB2 v9.7 installed in global AIX under /usr or /opt file systems
– DB2 binaries shared with Shared System WPARs
– DB2 instances and DAS created in Shared System WPARs
– Each System WPAR manages own DB2 instances and DAS related to
the DB2 copy
– DB2 instances and DAS created in a WPAR not visible from other
WPARs or global
WebSphere in WPARs
• WebSphere Application Server Version 6.1 or 7.0
– Detached (non-shared) System WPARs
• WebSphere Application Server Version 6.0 (or older)
– not supported
• Related WebSphere Technotes:
– Guidelines for using IBM WebSphere Application Server with an IBM
AIX application WPAR
– An insufficient free disk space error might occur when installing
WebSphere Application Server
– Guidelines for installing WebSphere Application Server into an AIX
System WPAR from the global environment
– Guidelines for installing WebSphere Application Server V7.0 into an
AIX System WPAR from within the WPAR
– Guidelines for installing WebSphere Application Server into an AIX
System WPAR from within the WPAR
– Installing WebSphere MQ in AIX Workload Partitions
– Installing WebSphere Portal on an AIX Workload Partition (WPAR)
Oracle in WPARs
• Only AIX 6.1 System WPARs supported
– Oracle DB 10gR2 (10.2.0.4 or higher)
– Oracle DB 11gR2 (11.2.0.2)
• Installation requires APAR # IZ52319 & IZ54871
• Single Instance Mode
• Oracle’s ASM (Automatic Storage Management)
– not supported
• Live Application Mobility (LAM)
– not supported
• Related My Oracle Support website documents:
– IBM AIX Workload Partition (WPAR) Installation Doc [ID 889220.1]
– SUPPORT FOR ORACLE DATABASE ON WPARS UNDER AIX 6.1
[ID 753601.1]
SAP in WPARs
• SAP supports WPARs for production systems
– Applications based on SAP NetWeaver Application Server for ABAP
and JAVA
• AIX 6.1 or higher
– SAP NetWeaver Master Data Management (MDM)
• currently not supported
– System WPARs only (Shared or Detached)
• Pre-install preparation steps required for Shared System WPARs
– Live Application Mobility (LAM)
• currently not supported
– Versioned WPARs (AIX 5.2 WPARs)
• currently not supported
– No resource control
– Dedicated or Shared Processor Pool LPAR
SAP Notes
• Related SAP Notes (Number – Short Description)
– 1458918 – Support for AIX 7.1
– 1380038 – Installing Oracle on WPAR shared system on IBM AIX 6.1
– 1137862 – Using SAP systems with AIX 6.1
– 1087498 – Support for AIX 6.1
– 720261 – System requirement AIX - liveCache/MaxDB 7.4 to 7.8
– 1142243 – MaxDB release for virtual systems
– 1243397 – 5.) corrections new operating system monitor
– 1278153 – Startup of Instance failed on AIX 6.1 TL1 SP2 in a WPAR
– 1515356 – saposcol on AIX: High cpu usage by saposcol in WPARs
– 710975 – FAQ: which saposcol should be used on AIX
– 1492000 – General Support Statement for Virtual Environments
– 1105456 – SAP Installations in AIX WPARs
IBM Virtualization Capacity License Counting Rules
• To be eligible for Virtualization Capacity (Sub-Capacity)
licensing, Customers must agree to the terms of the
International Passport Advantage Agreement (IPAA)
Attachment for Sub-capacity Licensing Terms
– http://www.ibm.com/software/lotus/passportadvantage/subcapacityattachments.html
– http://www.ibm.com/software/lotus/passportadvantage/subcaplicensing.html
• IBM AIX System Workload Partitions
– ftp://ftp.software.ibm.com/software/passportadvantage/SubCapacity/Scenarios_Power_Systems_AIX_System_WPARs.pdf
• IBM AIX Application Workload Partitions
– not supported
Agenda
• AIX Workload Partitions (WPARs)
– Overview
– How it all works (in pictures)
– Resource Management & Control
– WPAR Lifecycle Management
• Functional Enhancements for WPARs
• Versioned Workload Partitions for AIX 7
• WPARs v/s LPARs – Comparison
• Software support in WPARs
• Use Case Scenarios
WPAR Use Case: Unique Workload Consolidation
• Enable multiple instances of the same application deployed
across WPARs
– Many WPARS running DB2, WebSphere or Apache in same AIX
image
– Greatly increases the ability to consolidate workloads
• often same application used to provide different business services
– Enables consolidation of separate, discrete workloads that require
separate instances of databases or applications onto a single system
or LPAR
– Reduced costs through optimized placement of workloads between
systems to yield the best performance and resource utilization
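As a sketch of the consolidation pattern above, the commands below create several system WPARs inside one AIX image, one per workload instance. The WPAR names and IP addresses are illustrative only; see the mkwpar and startwpar man pages for the full option set.

```shell
# Create three system WPARs in the same AIX instance
# (-n name, -N network attributes; addresses are examples only)
mkwpar -n db2wpar  -N address=192.168.10.11
mkwpar -n waswpar  -N address=192.168.10.12
mkwpar -n httpwpar -N address=192.168.10.13

# Start them and check their state
startwpar db2wpar
startwpar waswpar
startwpar httpwpar
lswpar            # lists Name, State, Type, Hostname, Directory
```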
73.
WPAR Use Case: Increased Server Utilization
• Enables management and consolidation of diverse workloads on a single
server increasing server utilization rates
• Hundreds of WPARs within a single AIX instance supported
– Far exceeding the capability of many partitioning technologies
• Fast resource adjustments in response to normal/unexpected demands
• Fast provisioning in response to normal/unexpected demands
– WPARs can be created in minutes
• Resource controls enable over-provisioning of resources
– If a WPAR’s resource usage is below allocated levels, the unused allocation
is automatically available to other WPARs
• Supports Live Application Mobility in response to normal/unexpected
demands
• All of the above, enables more consolidation on a single server or LPAR
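The resource controls mentioned above are set per WPAR with chwpar (or at creation time with mkwpar). A minimal sketch, with illustrative percentage and share values; the CPU and memory attributes take a min%-softmax%,hardmax% range, per the chwpar documentation.

```shell
# Cap a WPAR's CPU and memory consumption
# (format: min%-softmax%,hardmax%; values here are examples only)
chwpar -R active=yes CPU=10%-50%,80% memory=5%-30%,50% db2wpar

# Relative weighting when WPARs compete for the same resources;
# unused allocation remains available to the other WPARs
chwpar -R shares_CPU=20 shares_memory=20 db2wpar

# Review the resource controls currently in force
lswpar -R db2wpar
```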
74.
WPAR Use Case: Lowering Administration Cost
• Multiple WPARs supported within one instance of AIX
– Smaller number of OS images to maintain (many providers charge per
OS image)
– Single maintenance to the operating system is applied, potentially, to
hundreds of WPARs
• Multiple WPARs can be supported with single instance of an
application
– Single application install/maintenance is applied, potentially, to
hundreds of WPARs
• Ability to delegate administrative tasks to the WPAR users
(typically the WPAR root user)
– user/group administration, backup/restore
– Scope of delegation controlled by WPAR RBAC profile
• Reduced costs through simplified administration
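For day-to-day administration, the global-environment administrator can enter a WPAR directly with clogin, without the WPAR needing its own network login. WPAR name and command are examples.

```shell
# From the global environment, log in to a WPAR as its root user
clogin db2wpar

# Or run a single command inside the WPAR without a full login
clogin db2wpar -l root /usr/sbin/lsuser ALL
```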
75.
WPAR Use Case: Workload Life Cycle Management
• Enable development, test and production cycles of one
workload to be placed on a single system or LPAR
– Different levels of applications (production1, production2, test1, test2,
etc.) may be deployed in separate WPARs
– Quick and easy roll out/roll back to production environments
– Reduced costs through the sharing of hardware resources
– Reduced costs through the sharing of software resources such as the
operating system, databases, and tools
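The quick roll-out/roll-back cycle above maps onto the WPAR backup and specification-file commands. A sketch with illustrative names and paths; check the savewpar, restwpar, and mkwpar man pages for the exact flags on your AIX level.

```shell
# Back up a WPAR's data and configuration to a single image file
savewpar -f /backups/test1.bff test1

# Recreate it (or roll back) from that image
restwpar -f /backups/test1.bff

# Clone: write a specification file from an existing WPAR,
# then build a new WPAR from that spec
mkwpar -e test1 -o /tmp/test1.spec -w
mkwpar -f /tmp/test1.spec -n test2
```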
77.
New Redbook – Hot off the press
IBM Redbooks
http://www.redbooks.ibm.com
SG24-7955: Exploiting IBM AIX Workload Partitions
Released: August 2011
78.
Resources
Redbooks:
Exploiting IBM AIX Workload Partitions (Aug 2011)
– http://www.redbooks.ibm.com/abstracts/sg247955.html?Open
Workload Partition Management in IBM AIX Version 6.1 (Dec 2008)
– http://www.redbooks.ibm.com/abstracts/sg247656.html?Open
Introduction to Workload Partition Management in IBM AIX Version 6.1 (Nov 2007)
– http://www.redbooks.ibm.com/abstracts/sg247431.html?Open
Product Documentation:
IBM Workload Partitions for AIX (AIX 6.1)
– http://publib.boulder.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.wpar/wpar_pdf.pdf
IBM Workload Partitions for AIX (AIX 7.1)
– http://publib.boulder.ibm.com/infocenter/aix/v7r1/topic/com.ibm.aix.wpar/wpar_pdf.pdf
AIX 5.2 Workload Partitions for AIX 7
– http://publib.boulder.ibm.com/infocenter/aix/v7r1/topic/com.ibm.aix.cre/cre_pdf.pdf
IBM PowerVM Workload Partitions Manager for AIX
– http://publib.boulder.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.wparlpp/wparlpp_pdf.pdf
IBM Systems Director:
IBM Systems Director Download Site
– http://www-03.ibm.com/systems/software/director/downloads/
79.
WPAR – the movies
• By Nigel Griffiths (of nmon fame)
http://tinyurl.com/AIXmovies
1. WPAR Theory and Background
2. WPAR Manager Introduction
3. WPAR Mobility/Relocation
4. Create WPAR Simple
5. Create WPAR Detailed
6. Power5 to Power6 Mobility
7. WPAR Full Properties
8. WPAR Command Line
9. Compare Global and WPAR Environment
10. WPAR Backup and Cloning
11. Faster Relocation
12. Static Relocation
13. Director-based WPAR Manager
14. Versioned WPAR
Total = 3 hour demo
81.
Thank you for your time
Don’t forget the Session Evals
82.
WPAR Courses available from IBM Training
AIX Workload Partitions Installation and Management (ILO)
– Course code: AX170 Duration: 4.0 days
AIX Workload Partitions I - Creating WPARs (D-ILO)
– Course code: AQ600 Duration: 1.0 days
AIX Workload Partitions II - File Systems and Devices (D-ILO)
– Course code: AQ610 Duration: 1.0 days
AIX Workload Partitions III - Administration and Resource Control
(D-ILO)
– Course code: AQ620 Duration: 1.0 days
83.
Stay Connected & Continue Skills Transfer via IBM Training
• Training paths – what to take, when to take it
• Social media – join the conversation
• Custom catalog – create a catalog that meets your interest areas
• RSS feeds – up-to-date information on the training you need
• IBM Training News – targeted to your needs
• New to Instructor Led Online (ILO)? Take a free test drive!
• Education Packs – online discount program for ALL IBM Training courses for your company
Questions? Email Lisa Ryan (lisaryan@us.ibm.com)
84.
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in
other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM
offerings available in your area.
Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions
on the capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give
you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY
10504-1785 USA.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives
only.
The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or
guarantees either expressed or implied.
All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the
results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations
and conditions.
IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions
worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment
type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal
without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are
dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this
document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-
available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document
should verify the applicable data for their specific environment.
Revised September 26, 2006
Special notices
85.
IBM, the IBM logo, ibm.com, AIX, AIX (logo), AIX 6 (logo), AS/400, Active Memory, BladeCenter, Blue Gene, CacheFlow, ClusterProven, DB2, ESCON, i5/OS, i5/OS
(logo), IBM Business Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries,
Rational, RISC System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill,
Cloudscape, DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, GPFS, HACMP,
HACMP/6000, HASM, IBM Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power
Architecture, Power Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo),
POWER2, POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, POWER7, pureScale, System i, System p, System p5, System Storage, System z, Tivoli
Enterprise, TME 10, TurboCore, Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation
in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (®
or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be
registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at
www.ibm.com/legal/copytrade.shtml
The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are
trademarks of the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Other company, product and service names may be trademarks or service marks of others.
Revised February 9, 2010
Special notices (contd.)