WHITE PAPER
Best Practices for Running ASE 15.0 in Solaris™ Containers
DHIMANT CHOKSHI
SERVER PERFORMANCE ENGINEERING AND DEVELOPMENT GROUP
SYBASE, INC.
TABLE OF CONTENTS
1.0 Introduction
2.0 Document Scope
3.0 Overview
3.1 Solaris Containers
3.2 Solaris Zones Partitioning Technology
3.3 Solaris Resource Manager
4.0 Consolidating Multiple ASE Instances
5.0 Creating a Container
5.1 Requirements
5.2 Creating a Resource Pool
5.3 Creating a Non-Global Zone
6.0 Special Considerations
6.1 Devices in Containers
6.2 File Systems in Containers
6.3 Volume Management
6.4 CPU Visibility
7.0 Appendix
7.1 Appendix 1: Script to Create a Container
7.2 Appendix 2: Setting System V IPC Kernel Parameters
8.0 References
1.0 EXECUTIVE SUMMARY
This document provides an overview of Solaris Containers in the Solaris 10 Operating System (OS) and guidelines for running the Sybase ASE 15.0 server in a container. Sybase ASE 12.5.2 and ASE 15.0 have been certified to run in a global zone. This document concentrates on running an ASE 15.0 server in a container in the Solaris OS, and it explains in detail the process of creating a non-global zone appropriate for deploying a Sybase ASE 15.0 server. Additionally, it captures special considerations for running ASE 15.0 in a container in the Solaris OS. For the remainder of this document, a “Solaris Container in the Solaris 10 OS” will be referred to as a “Solaris container” or simply a “container,” and it will be assumed to be associated with a non-global zone unless explicitly stated otherwise.
2.0 DOCUMENT SCOPE
The scope of this document is to define a container in the Solaris 10 OS appropriate for running a Sybase ASE 15.0 server. In addition, it captures limitations and special cases of running Sybase ASE 15.0 in containers. It is beyond the scope of this document to explain how Solaris Containers technology can be used to consolidate multiple Sybase ASE server instances in separate containers on the same system. For more information on that subject, see references [1] and [3].
3.0 OVERVIEW
This section provides a brief overview of Solaris Containers technology. It also gives an introduction to the Solaris
Zones feature and Solaris Resource Manager, which are the two major components of Solaris Containers (for detailed
information about these technologies, see references [2] and [4]).
3.1 Solaris Containers
Solaris Containers are designed to provide a complete, isolated and secure runtime environment for applications. This
technology allows application components, in particular ASE servers, to be isolated from each other using flexible,
software-defined boundaries. Solaris Containers are designed to provide fine-grained control over resources that the
application uses, allowing multiple applications such as ASE servers to operate on a single server while maintaining
specified service levels. Solaris Containers are a management construct intended to provide a unified model for
defining and administering the environment within which a collection of Solaris processes executes. Solaris
Containers use Solaris Resource Manager (SRM) features along with Solaris Zones to deliver a virtualized environment
that can have fixed resource boundaries for application workloads.
3.2 Solaris Zones Partitioning Technology
Solaris Zones, a component of the Solaris Containers environment, is a software partitioning technology that virtualizes operating system services and provides an isolated and secure environment for running multiple applications or ASE servers. Solaris Zones are ideal for environments that consolidate multiple ASE 15.0 servers running on different machines onto a single server. There are two types of zones: global zones and non-global zones. The underlying OS, which is the Solaris instance booted by the system hardware, is called the global zone. There is only one global zone per system; it is both the default zone for the system and the zone used for system-wide administrative control. An administrator of the global zone can create one or more non-global zones, which can then be administered by non-global zone administrators whose privileges are confined to their respective zones. Two types of non-global zones can be created, using different root file system models: sparse and whole root. ASE 15.0 can work with both.
4. The sparse root zone model optimizes the sharing of objects by only installing a subset of the root packages and
using read-only loopback file system to gain access to other files. In this model, by default, the directories /lib,
/platform, /sbin, and /usr will be mounted as loopback file systems. The advantages of this model are improved
performance due to efficient sharing of executables and shared libraries, and a much smaller disk footprint for the
zone itself. The whole root zone model provides for maximum configurability by installing the required packages and
any selected optional zones into the private file systems of the zone. The advantages of this model include the ability
for zone administrators to customize their zones' file system layout and add arbitrary unbundled or third-party
packages. Solaris Zones provide the standard Solaris interfaces and application environment. They do not impose a
new ABI or API. In general, applications do not need to be ported to Solaris Zones. However, applications running in
non-global zones need to be aware of non-global zone behavior, in particular:
• All processes running in a zone have a reduced set of privileges, which is a subset of the privileges available in the
global zone. Processes that require a privilege not available in a non-global zone can fail to run, or in a few cases
fail to achieve full performance.
• Each non-global zone has its own logical network and loopback interface; the ASE server uses the zone's logical network
address in its interfaces file. Bindings between upper-layer streams and logical interfaces are restricted such that
a stream may only establish bindings to logical interfaces in the same zone. Likewise, packets from a logical
interface can only be passed to upper-layer streams in the same zone as the logical interface.
• Non-global zones have access to a restricted set of devices. In general, devices are shared resources in a system.
Therefore, restrictions within zones are put in place so that security is not compromised.
3.3 Solaris Resource Manager
By default, the Solaris OS provides all workloads running on the system equal access to all system resources. This
default behavior of the Solaris OS can be modified by Solaris Resource Manager, which provides a way to control
resource usage.
SRM provides the following functionality:
• A method to classify a workload, so the system knows which processes belong to a given workload. In the case of
the Sybase database server, an ASE server can be assigned to a given workload.
• The ability to measure the workload to assess how much of the system resources the workload is actually using.
• The ability to control the workloads so they do not interfere with one another and also get the required system
resources to meet predefined service-level agreements.
SRM provides three types of workload control mechanisms:
• The constraint mechanism, which allows the Solaris system administrator to limit the resources a workload is
allowed to consume.
• The scheduling mechanism, which refers to the allocation decisions that accommodate the resource demands of
all the different workloads in an under-committed or over-committed scenario.
• The partitioning mechanism, which ensures that pre-defined system resources are assigned to a given workload.
3.3.1 Workload Identification
Projects:
Projects are a facility that allows the identification and separation of workloads. A workload can be composed of
several applications or ASE servers and processes belonging to several different groups and users. The identification
mechanism provided by projects serves as a tag for all the processes of a workload. This identifier can be shared
across multiple machines through the project name service database. The location of this database can be in files,
NIS, or LDAP, depending on the definition of the projects database source in the /etc/nsswitch.conf file. Attributes
assigned to the projects are used by the resource control mechanism to provide a resource administration context on
a per-project basis.
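For illustration, a project for the Sybase workload could be created and inspected with the standard project utilities (the project name financial and the user sybase are sample values used later in this document):
# create the project in the local files database
projadd -c "Sybase ASE workload" -U sybase financial
# display the resulting /etc/project entry
projects -l financial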
Tasks:
Tasks provide a second level of granularity in identifying a workload. A task collects a group of processes into a
manageable entity that represents a workload component. Each login creates a new task that belongs to the project,
and all processes started during that login session belong to the task. The concept of projects and tasks has been
incorporated in several administrative commands such as ps, pgrep, pkill, prstat, and cron.
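As a brief illustration (assuming the financial project from above), newtask starts a process in a new task under a given project, and ps can report the task and project of each process:
# start the user's shell as a new task in the financial project
newtask -p financial
# show task and project identifiers for the current processes
ps -o user,pid,taskid,project,args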
3.3.2 Resource Controls
The resource usage of workloads can be controlled by placing bounds on resource usage. These bounds can be used
to prevent a workload from over-consuming a particular resource and interfering with other workloads. The Solaris
Resource Manager provides a resource control facility to implement constraints on resource usage.
Each resource control is defined by the following three values:
• Privilege level
• Threshold value
• Action that is associated with the particular threshold
The privilege level indicates the privilege needed to modify the resource. It must be one of the following three types:
• Basic, which can be modified by the owner of the calling process
• Privileged, which can be modified only by privileged (super user) callers
• System, which is fixed for the duration of the operating system instance
The threshold value on a resource control constitutes an enforcement point where actions can be triggered. The
specified action is performed when a particular threshold is reached. Global actions apply to resource control values
for every resource control on the system. Local action is taken on a process that attempts to exceed the control value.
There are three types of local actions:
• None: No action is taken on resource requests for an amount that is greater than the threshold.
• Deny: Deny resource requests for an amount that is greater than the threshold.
• Signal: Enable a global signal message action when the resource control is exceeded.
For example, task.max-lwps=(privileged, 10, deny) would tell the resource control facility to deny any process in that
task the creation of more than 10 lightweight processes.
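A minimal sketch of attaching this control with the project utilities (the project name financial is illustrative):
# record the control in the project entry so new tasks inherit it
projmod -s -K "task.max-lwps=(privileged,10,deny)" financial
# observe the control for processes running in the project
prctl -n task.max-lwps -i project financial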
3.3.3 CPU and Memory Management
SRM enables the end user to control the available CPU resources and physical memory consumption of different
workloads on a system by providing Fair Share Scheduler (FSS) and Resource Capping Daemon facilities, respectively.
Fair Share Scheduler
The default scheduler in the Solaris OS provides every process equal access to CPU resources. However, when multiple
workloads are running on the same system one workload can monopolize CPU resources. Fair Share Scheduler
provides a mechanism to prioritize access to CPU resources based on the importance of the workload. With FSS the
number of shares the system administrator allocates to the project representing the workload expresses the
importance of a workload. Shares define the relative importance of projects with respect to other projects. If project A
is deemed twice as important as project B, project A should be assigned twice as many shares as project B. It is
important to note that FSS limits CPU usage only if there is competition for CPU resources. If there is only one active
project on the system, it can use 100% of the system's CPU resources, regardless of the number of shares assigned to
it. Figure 1 in Section 4.0 illustrates an example of allocating CPU resources among containers.
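A minimal sketch of enabling FSS and expressing a 2:1 importance ratio between two projects (the project names and share counts are illustrative):
# make FSS the default scheduling class at the next boot
dispadmin -d FSS
# move the currently running processes into FSS without a reboot
priocntl -s -c FSS -i all
# give project projA twice the shares of project projB
projmod -s -K "project.cpu-shares=(privileged,40,none)" projA
projmod -s -K "project.cpu-shares=(privileged,20,none)" projB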
Resource Capping Daemon
The resource-capping daemon (rcapd) can be used to regulate the amount of physical memory consumed by projects
with resource caps defined. The rcapd daemon repeatedly samples the memory utilization of projects that are
configured with physical memory caps. The administrator specifies the sampling interval. When the system’s physical
memory utilization exceeds the threshold for cap enforcement and other conditions are met, the daemon takes
action to reduce the memory consumption of projects with memory caps to levels at or below the caps. Please note
that the rcapd daemon cannot determine which pages of memory are shared with other processes or which are
mapped multiple times within the same process. Hence, it is not recommended that shared memory-intensive
applications, like Sybase ASE servers, run under projects that use rcapd to limit physical memory usage.
To regulate the memory used by ASE 15.0, a combination of project and newtask can be used. The following command
creates the project financial and sets the resource controls specified as arguments to the -K option:
projadd -U sybase -K "project.max-shm-memory=(priv,1.2G,deny)" financial
This command will produce the following entry in /etc/project:
financial:101::sybase:project.max-shm-memory=(priv,1288490188,deny)
Once the project is created, the user is placed in the project, and the newtask command is used to start the ASE server;
the server will not start if ASE is configured with more than 1.2 GB of memory.
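The sybase user can then start the ASE server under the project with newtask; the installation path and RUN file name below are illustrative:
# start the ASE server as a new task in the financial project
newtask -p financial /sybase/ASE-15_0/install/startserver -f RUN_financial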
4.0 CONSOLIDATING MULTIPLE ASE INSTANCES
Consolidating multiple ASE servers from separate systems into separate containers on the same system enables
competing applications, such as financial applications and warehousing applications, to run with resource allocation
changing as business needs change. For example, the financial application is allocated 80 shares of the CPU
resources, while the data warehouse is allocated 20 shares, resulting in an 8:2 ratio of CPU resources allocated to each
container. Shares allow unused cycles to be used by other applications, or the allocation can be changed dynamically
during peak time to provide more CPU resources to either container. Each database administrator can have complete
control over their isolated environment.
[Figure 1 depicts this example: the global zone runs an OLTP server on the default pool of 16 CPUs, while the Financial
and Warehouse containers each run their own ASE server in a zone bound to a dedicated resource pool of 4 CPUs.]
Figure 1. Example of Containers for ASE 15.0 in the Solaris 10 OS
Another option is to host multiple applications in a container. In addition, the resources in the container are further
subdivided, allocating a portion of resources to each project within the container. This helps ensure that each project
always has the resources it requires to function predictably.
5.0 CREATING A CONTAINER
This section provides instructions for creating a Solaris 10 container appropriate for installing and running ASE 15.0.
These instructions have been followed in the sample scripts documented in Appendix 1, which provide a convenient
way of creating such containers.
5.1 Requirements
1. Ensure that the file system in which the root directory for the container will be placed has at least 1.5 GB of
physical disk space. This will be enough to create the container and install ASE 15.0. A non-global zone can have
either a whole root model or a sparse root model; we recommend using the sparse root model.
2. Identify the physical interface that will be used to bring up the virtual interface for the container. To find the
physical interfaces available in the global container, execute the command:
/usr/sbin/ifconfig -a
Examples of common interfaces are ce0, bge0 or hme0.
3. Obtain an IP address and a hostname for the container. This IP address must be in the same subnet as the IP
assigned to the physical interface selected in the previous step.
4. Ensure that the netmask for the IP address of the container can be resolved in the global container according to
the databases used in the /etc/nsswitch.conf file. If this is not the case, update the file /etc/netmasks in the
global container with the netmask desired for the subnet to which the IP address belongs.
5. Determine the number of CPUs to be reserved for the container. To find the number of CPUs available in the
default pool, execute the poolstat command (to run poolstat, first execute pooladm -e to activate the resource
pools facility). The default pool will indicate the number of CPUs available. Keep in mind that the default pool
must always have at least one CPU. An example follows this list.
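For example, enabling the pools facility and inspecting the default pool might look like this (output is illustrative):
pooladm -e
poolstat
 id pool                 size used load
  0 pool_default           16 0.00 0.05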
5.2 Creating a Resource Pool
The resource pool can be created by the root user in the global container by following these steps:
• Enable the pool facility with this command:
pooladm -e
• Use the pool_cmd_template.txt provided in the appendix as a template to create a command file. Replace the
strings PSET_NAME, NUM_CPU_MIN, NUM_CPU_MAX and POOL_NAME with appropriate values. This command
file has instructions to create a resource pool and a processor set (pset), and then to associate the pset with the
resource pool (a filled-in example appears after these steps).
• If the default configuration file /etc/pooladm.conf does not exist, create it by executing this command:
pooladm -s /etc/pooladm.conf
• Create the resource pool by executing this command:
poolcfg -f pool_commands.txt
Where pool_commands.txt is the file you created two steps before.
• Validate your configuration by executing this command:
pooladm -n
• Instantiate the changes made to the static configuration by executing the command:
pooladm -c
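As a concrete illustration, after substituting sample values (pset ps_financial with 1 to 4 CPUs, pool pool_financial), the command file would contain:
create pset ps_financial (uint pset.min = 1; uint pset.max = 4)
create pool pool_financial
associate pool pool_financial (pset ps_financial)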
5.3 Creating a Non-Global Zone
Once the resource pool is available, a non-global zone can be created and bound to it. For the purposes of installing
and running ASE 15.0 the non-global zone can have either a whole root model or a sparse root model. Unless it
conflicts with specific requirements, we recommend using the sparse root model. A non-global Solaris zone can be
created as follows:
1. Create as root a directory where the root of the non-global zone will be placed
(for example, /export/home/myzone) and set the access permissions to 700. The name of this directory should
match the name of the zone (myzone, in this example).
2. Unless special instructions are added, the directory /usr in the container will be a loopback file system (lofs). This
means that the container will mount /usr from the global container, in read-only mode, into /usr of its own file
system tree.
3. Use the zone_cmd_template.txt file in Appendix 1 as a model to create a command file to create the zone and
bind it to the resource pool previously created. Replace the strings ZONE_DIR, ZONE_NAME, POOL_NAME, NET_IP,
and NET_PHYSICAL with appropriate values. In this file, the command create is used to create a sparse root.
Replacing this command with create -b would create a whole root. Also, in this file the zone is bound to the pool
with the command set pool=POOL_NAME. Another way to bind a pool to a zone is to use the poolbind(1M) command
once the pool and the zone have been created (see the example after this list).
4. Create the zone by executing as root this command:
zonecfg -z <ZONE_NAME> -f zone_commands.txt
Where zone_commands.txt is the file you created in the previous step.
5. Install the zone by executing as root this command:
zoneadm -z <ZONE_NAME> install
6. Boot the zone with this command:
zoneadm -z <ZONE_NAME> boot
7. Finish the configuration of your container with this command:
zlogin -C <ZONE_NAME>
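For illustration, a zone_commands.txt generated from the template with the sample values used in Appendix 1 (zone financial, zone root under /export/solzone1, pool pool_financial, interface eri0) would begin like this; the file system entries from the template are omitted here:
create
set zonepath=/export/solzone1/financial
set autoboot=true
set pool=pool_financial
add net
set address=10.22.105.127
set physical=eri0
end
verify
commit
Alternatively, a running zone can be bound to the pool with poolbind -p pool_financial -i zoneid <zone-id>, where the numeric zone ID is obtained from zoneadm list -v.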
6.0 SPECIAL CONSIDERATIONS
In this section we point out some special considerations when running Sybase ASE 15.0 in a container.
6.1 Devices in Containers
To guarantee that security and isolation are not compromised, certain restrictions regarding devices are placed on
non-global zones:
• By default, only a restricted set of devices (which consist primarily of pseudo devices) such as /dev/null, /dev/zero,
/dev/poll, /dev/random, and /dev/tcp, are accessible in the non-global zone.
• Devices that expose system data like dtrace, kmem, and ksyms are not available in non-global zones.
• By default, physical devices are also not accessible by containers.
The zone administrator can make physical devices available to non-global zones. It is the administrator’s responsibility
to ensure that the security of the system is not compromised by doing so, mainly for two reasons:
1. Placing a physical device into more than one zone can create a covert channel between zones.
2. Global zone applications that use such a device risk the possibility of compromised data or data corruption by a
non-global zone.
The global zone administrator can use the add device sub-command of zonecfg to include additional devices in the
non-global zone. For example, to add the block device /dev/dsk/c1t1d0s0 to the non-global zone, the administrator
executes the following commands:
zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/dsk/c1t1d0s0
zonecfg:my-zone:device> end
zonecfg:my-zone> exit
zoneadm -z my-zone reboot
All the slices of /dev/dsk/c1t1d0 could be added to the non-global zone by using /dev/dsk/c1t1d0* in the match
command. This same procedure can be used for character devices (also known as raw devices) or any other kind of
device. If you plan to install a Sybase ASE server in a non-global zone by using ASE 15.0 installation CDs, you will need
to make the CD-ROM device visible to the non-global zone. To do this you can either loopback mount the /cdrom
directory from the global zone or export the physical device (which is discouraged). For details about how to gain
access to the CD-ROM device from a non-global zone, see reference [8].
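As a hedged sketch of the loopback approach, assuming volume management has the CD mounted at /cdrom/cdrom0 in the global zone and the zone root is /export/home/myzone:
# make the installation media visible read-only inside the zone at /mnt
global# mount -F lofs -o ro /cdrom/cdrom0 /export/home/myzone/root/mnt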
6.2 File Systems in Containers
Each zone has its own file system hierarchy, rooted at a directory known as zone root. Processes in the zones can
access only files in the part of the hierarchy that is located under the zone root. Here we present four different ways
of mounting a file system from a global zone to a non-global zone. ASE server works with all four.
1. Create a file system in a global zone and mount it in a non-global zone as a loopback file system (lofs).
• Log in as global zone administrator.
• Create a file system in the global zone:
global# newfs /dev/rdsk/c1t0d0s0
• Mount the file system in the global zone:
global# mount /dev/dsk/c1t0d0s0 /mystuff
• Add the file system of type lofs to the non-global zone. This is the preferred way of using file system devices in
ASE 15.0.
global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/usr/mystuff
zonecfg:my-zone:fs> set special=/mystuff
zonecfg:my-zone:fs> set type=lofs
zonecfg:my-zone:fs> end
2. Create a file system in the global zone and mount it in the non-global zone as UFS.
• Log in as global zone administrator.
• Create a file system in the global zone:
global# newfs /dev/rdsk/c1t0d0s0
• Add the file system of type ufs to the non-global zone:
global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/usr/mystuff
zonecfg:my-zone:fs> set special=/dev/dsk/c1t0d0s0
zonecfg:my-zone:fs> set raw=/dev/rdsk/c1t0d0s0
zonecfg:my-zone:fs> set type=ufs
zonecfg:my-zone:fs> end
3. Export a device from a global zone to a non-global zone and mount it from the non-global zone.
• Log in as global zone administrator.
• Export a raw device to the non-global zone:
global# zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/rdsk/c1t0d0s0
zonecfg:my-zone:device> end
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/dsk/c1t0d0s0
zonecfg:my-zone:device> end
• Log in as root in non-global zone.
• Create a file system in the non-global zone:
my-zone# newfs /dev/rdsk/c1t0d0s0
• Mount the file system in the non-global zone:
my-zone# mount /dev/dsk/c1t0d0s0 /usr/mystuff
4. Mount a UFS file system directly into the non-global zone’s directory structure. This assumes that the device is
already made available to the non-global zone.
• Log in as non-global zone administrator.
• Mount the device in the non-global zone:
my-zone# mount /dev/dsk/c1t1d0s0 /usr/mystuff
6.3 Volume Management
Volume managers are often third-party products, so their usability in containers cannot be generalized. As of this
writing, volume managers cannot be installed or managed from containers. Currently, the recommended approach is
to install and manage volume managers from global containers. Once the devices, file systems, and volumes are
created in the global container, they can be made available to the container by using zonecfg subcommands. Once
the devices, file systems, and volumes are available to the local zone, they can be used by the ASE server as Sybase devices.
In the case of VERITAS, the following features are not supported in containers:
• Admin ioctls
• Administration commands
• VERITAS Volume Manager software
• VERITAS Storage Migrator (VSM)
• VERITAS File System (VxFS)/VERITAS Federated Mapping Service (VxMS)
• Quick I/O and CQIO
• Cluster File System
The following VERITAS features are supported in non-global zones:
• VERITAS file systems
• Access to a VxFS file system in the global zone from the non-global zone as a lofs file system
• Access to ODM files from non-global zones
• Concurrent I/O with files from non-global zones
VERITAS commands remain accessible in non-global zones, but they have been modified to detect execution in a
non-global zone and refuse to run there. When VERITAS Volume Manager commands detect non-global zone execution,
the following error message is presented:
VxVM command_xxx ERROR msg_id: Please execute this operation in global zone.
In a similar way, Solaris Volume Manager (SVM) should also be installed and managed from global zones in
containers. Once the storage has been configured in the desired way from the global zone, the metadevices can be
made available to the non-global zones.
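For example, a metadevice built with SVM in the global zone could be exported to a zone with zonecfg (the metadevice name d100 is illustrative):
global# zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/md/dsk/d100
zonecfg:my-zone:device> end
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/md/rdsk/d100
zonecfg:my-zone:device> end
zonecfg:my-zone> exit
global# zoneadm -z my-zone reboot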
6.4 CPU Visibility
Users in a container can expect a virtualized view of the system with respect to CPU visibility when the zone is bound
to a resource pool. In this case, the zone will see only those CPUs associated with the resource pool it is bound to. Table 1
shows the interfaces that have been modified in the Solaris OS to return the expected value in this scenario.
INTERFACE                   TYPE
p_online(2)                 System call
processor_bind(2)           System call
processor_info(2)           System call
pset_info(2)                System call
pset_getattr(2)             System call
pset_getloadavg(3c)         Library call
getloadavg(3c)              Library call
_SC_NPROCESSORS_CONF        sysconf(3c) argument
_SC_NPROCESSORS_ONLN        sysconf(3c) argument
pbind(1M)                   Command
psrset(1M)                  Command
psrinfo(1M)                 Command
mpstat(1M)                  Command
vmstat(1M)                  Command
iostat(1M)                  Command
sar(1M)                     Command
Table 1. CPU-related Solaris 10 interfaces that are container-aware
In addition to these interfaces, certain kernel statistics (kstats) are commonly used by tools such as psrinfo and
mpstat to retrieve information about the system. All consumers of these kstats see only information for the pset of
the pool bound to the zone.
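For example, if the zone financial is bound to a pool whose pset contains four CPUs, counting processors from inside the zone reports only those four (illustrative output):
global# zlogin financial psrinfo | wc -l
4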
7.0 APPENDIX
7.1 Appendix 1: Script to Create a Container
The scripts documented in this Appendix can be used to create a container appropriate for installing and running
instances of a Sybase ASE server. These scripts do not represent the only way in which the container can be created.
They are provided as sample code and should be modified to fit specific requirements and constraints.
These scripts will first create a resource pool and a processor set (pset) resource with a minimum and maximum
number of CPUs in it (both values specified by the user). These new resources will be configured in the default
configuration file /etc/pooladm.conf (see pooladm(1M) for details). Once created, the pset will be associated with the
resource pool. Next, a sparse root zone will be created with the root directory, IP address, and physical interface
provided by the user. A special mount point for /usr/local will be created in /opt/<zone_name>/local to facilitate the
ASE 15.0 installation, since /usr/local is the default directory for the installation of some of the ASE 15.0 utilities. Once
created, the zone will be bound to the resource pool. The combination of the zone bound to the resource pools will
define the container in which ASE 15.0 can be installed and executed. This container (and all processes running in it)
will only have access to the CPUs associated to the resource pool. To use these scripts, save all the files in the same
directory and follow these steps:
1. Edit the file setenv.sh with appropriate values for the following variables:
- ZONE_NAME: hostname for the zone
- ZONE_DIR: directory for the root directory of the zone
- NET_IP: IP address for the zone
- NET_PHYSICAL: physical interface in which the virtual interface for the zone will be created
- NUM_CPUS_MAX: maximum number of CPUs in the pset resource
- NUM_CPUS_MIN: minimum number of CPUs in the pset resource
2. Issue the following command from the global container:
./create_container.sh
3. Configure this container by executing the following command from the global container:
zlogin -C <zone_name>
The files composing these scripts are presented and explained next.
7.1.1 README.txt
This file describes how a container is created when these scripts are used. It also gives some tips about how to
execute some common operations such as giving the zone access to raw devices or removing resource pools.
The scripts in this directory can be used to create a container suitable for installing and running Sybase ASE server.
These scripts do not represent the only way in which you can create an appropriate container for ASE 15.0; depending
on your requirements and constraints, you can modify these scripts to fit your needs.
1. creating a container for ASE 15.0
The scripts will first create a resource pool and a processor set (pset) resource with a minimum and maximum
number of CPUs in it (both values specified by the user). These new resources will be configured in the default
configuration file /etc/pooladm.conf (see pooladm(1M) for details).
Once created, the pset will be associated with the resource pool. Next, a sparse root zone will be created with the root
directory, IP, and interface provided by the user. A special mount point for /usr/local will be created in
/opt/<zone_name>/local to facilitate the ASE 15.0 installation, since /usr/local is the default directory for the
installation of some of the ASE 15.0 utilities. Once created, the zone will be bound to the resource pool. The
combination of the zone bound to the resource pool will define the container in which ASE 15.0 can be installed and
used. This non-global container (and all processes running in it) will only have access to the CPUs associated to the
resource pool. To use the scripts, follow these steps:
a. edit the file setenv.sh with appropriate values for:
- ZONE_NAME: hostname for the zone
- ZONE_DIR: directory for the root directory of the zone
- NET_IP: IP for the zone
- NET_PHYSICAL: physical interface in which the virtual interface for the zone
will be created
- NUM_CPUS_MAX: maximum number of CPUs in the pset resource
- NUM_CPUS_MIN: minimum number of CPUs in the pset resource
b. from the global container run ./create_container.sh
c. once the container has been created, run "zlogin -C <zone_name>" from the global container to finish configuring
the zone.
2. giving your container access to raw devices
If you need to give your container access to a raw device follow this example once the container has been created
(these commands must be issued from the global container):
zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/rdsk/c3t40d0s0
zonecfg:my-zone:device> end
zonecfg:my-zone> exit
zoneadm -z my-zone halt
zoneadm -z my-zone boot
3. giving your container access to a file system
If you need to give your container access to a file system created in the global container follow this example once
the non-global container has been created:
global# newfs /dev/rdsk/c1t0d0s0
global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/usr/mystuff
zonecfg:my-zone:fs> set special=/dev/dsk/c1t0d0s0
zonecfg:my-zone:fs> set raw=/dev/rdsk/c1t0d0s0
zonecfg:my-zone:fs> set type=ufs
zonecfg:my-zone:fs> end
zonecfg:my-zone> exit
zoneadm -z my-zone halt
zoneadm -z my-zone boot
4. to remove pool resources previously created by operating directly on the kernel (see poolcfg(1M) for details), use
these commands:
poolcfg -d -c 'destroy pool my_pool'
poolcfg -d -c 'destroy pset my_pset'
5. to uninstall and delete a previously created zone, use these commands:
zoneadm -z $ZONE_NAME halt
zoneadm -z $ZONE_NAME uninstall -F
zonecfg -z $ZONE_NAME delete -F
7.1.2 setenv.sh
This file is where the user defines the parameters to create the container.
#!/usr/bin/sh
#host name for the zone
ZONE_NAME=financial
#directory where to place root dir for the zone
ZONE_DIR=/export/solzone1
#IP for the zone (make sure netmask can be resolved for this IP according to
# the databases defined in nsswitch.conf)
NET_IP=10.22.105.127
#interface used by the zone
NET_PHYSICAL=eri0
#min and max CPUs for the pool bound to the zone
NUM_CPUS_MIN=1
NUM_CPUS_MAX=4
# do not make changes beyond this point
POOL_NAME=pool_$ZONE_NAME
PSET_NAME=ps_$ZONE_NAME
export ZONE_NAME ZONE_DIR NET_IP NET_PHYSICAL
export POOL_NAME PSET_NAME NUM_CPUS_MIN NUM_CPUS_MAX
7.1.3 zone_cmd_template.txt
This file contains a template set of commands to create the zone. After replacing some strings by user-defined values,
it will be used to create the zone.
create
set zonepath=ZONE_DIR/ZONE_NAME
set autoboot=true
set pool=POOL_NAME
add net
set address=NET_IP
set physical=NET_PHYSICAL
end
add fs
set dir=/solzone1/tpcc
set special=/solzone1/tpcc
set type=lofs
end
add fs
set dir=/solzone1/gal_perf
set special=/gal_perf
set type=lofs
end
add fs
set dir=/solzone1/tpcc_data1
set special=/tpcc_data1
set type=lofs
end
add fs
set dir=/solzone1/gal_perf2
set special=/gal_perf2
set type=lofs
end
verify
commit
7.1.4 pool_cmd_template.txt
This file contains a template set of commands to create the resource pool and the pset resource, and to associate
them. After replacing some strings with user-defined values, it will be used to create the resource pool.
create pset PSET_NAME (uint pset.min = NUM_CPUS_MIN; uint pset.max = NUM_CPUS_MAX)
create pool POOL_NAME
associate pool POOL_NAME ( pset PSET_NAME )
7.1.5 create_zone_cmd.sh
This script will use the sed utility to create a command file that creates the zone. It replaces the user-given
parameters in the zone command template file. It is called by create_container.sh.
#!/bin/sh
# Copyright (c) 2005 Sun Microsystems, Inc. All Rights Reserved.
#
# SAMPLE CODE
# SUN MAKES NO REPRESENTATIONS OR WARRANTIES ABOUT
# THE SUITABILITY OF THE SOFTWARE, EITHER EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
# SUN SHALL NOT BE LIABLE FOR ANY DAMAGES SUFFERED
# BY LICENSEE AS A RESULT OF USING, MODIFYING OR
# DISTRIBUTING THIS SOFTWARE OR ITS DERIVATIVES.
echo $ZONE_DIR > /tmp/ZP.$$
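# escape the / characters in ZONE_DIR so the path can be used in the sed replacement below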
REG_ZONE_DIR=`sed 's/\//\\\\\//g' /tmp/ZP.$$`
rm -rf /tmp/ZP.$$ > /dev/null
sed -e "
/ZONE_DIR/ {
s/ZONE_DIR/$REG_ZONE_DIR/
}
/ZONE_NAME/ {
s/ZONE_NAME/$ZONE_NAME/
}
/NET_IP/ {
s/NET_IP/$NET_IP/
}
/NET_PHYSICAL/ {
s/NET_PHYSICAL/$NET_PHYSICAL/
}
/POOL_NAME/ {
s/POOL_NAME/$POOL_NAME/
}
/./ {
p
d
}"
7.1.6 create_pool_cmd.sh
This script will use the sed utility to create a command file that creates the resources. It replaces the user-given
parameters in the pool command template file. It is called by create_container.sh.
#!/bin/sh
# Copyright (c) 2005 Sun Microsystems, Inc. All Rights Reserved.
#
# SAMPLE CODE
# SUN MAKES NO REPRESENTATIONS OR WARRANTIES ABOUT
# THE SUITABILITY OF THE SOFTWARE, EITHER EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
# SUN SHALL NOT BE LIABLE FOR ANY DAMAGES SUFFERED
# BY LICENSEE AS A RESULT OF USING, MODIFYING OR
# DISTRIBUTING THIS SOFTWARE OR ITS DERIVATIVES.
sed -e "
/NUM_CPUS_MIN/ {
s/NUM_CPUS_MIN/$NUM_CPUS_MIN/g
}
/NUM_CPUS_MAX/ {
s/NUM_CPUS_MAX/$NUM_CPUS_MAX/g
}
/POOL_NAME/ {
s/POOL_NAME/$POOL_NAME/
}
/PSET_NAME/ {
s/PSET_NAME/$PSET_NAME/
}
/./ {
p
d
}"
7.1.7 create_container.sh
This is the main script. It will use the parameters given in setenv.sh to create the container.
#!/usr/bin/ksh
# Copyright (c) 2005 Sun Microsystems, Inc. All Rights Reserved.
#
# SAMPLE CODE
# SUN MAKES NO REPRESENTATIONS OR WARRANTIES ABOUT
# THE SUITABILITY OF THE SOFTWARE, EITHER EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
# SUN SHALL NOT BE LIABLE FOR ANY DAMAGES SUFFERED
# BY LICENSEE AS A RESULT OF USING, MODIFYING OR
# DISTRIBUTING THIS SOFTWARE OR ITS DERIVATIVES.
# script to create a container to run ASE 15.0 RDBMS.
# to use this script follow the instructions in the README.txt file
# located in this directory.
. ./setenv.sh
# 1)............................... validate setenv.sh values
#zone path exists?
if [ ! -d $ZONE_DIR/$ZONE_NAME ]
then
mkdir -p $ZONE_DIR/$ZONE_NAME
if [ $? -ne 0 ]
then
echo ERROR: could not create root directory
exit 1
fi
fi
chmod 700 $ZONE_DIR/$ZONE_NAME
#zone already exists?
zonecfg -z $ZONE_NAME info > /tmp/z.$$ 2>&1
cat /tmp/z.$$ | grep "No such zone" > /dev/null 2>&1
if [ $? -eq 1 ]
then
echo "ERROR: zone $ZONE_NAME already exists. IF you want to remove it do:"
echo "use zoneadm -z $ZONE_NAME halt"
echo "use zoneadm -z $ZONE_NAME uninstall -F"
echo "use zonecfg -z $ZONE_NAME delete -F"
exit 1
fi
rm -rf /tmp/z.$$ > /dev/null 2>&1
#pset already created?
pooladm -e
pooladm | grep pset | grep $PSET_NAME > /dev/null 2>&1
if [ $? -eq 0 ]
then
echo "ERROR: pset $PSET_NAME already exists. Please choose a different
pset name "
exit 1
fi
#/usr/local directory exists?
if [ ! -d /usr/local ]
then
mkdir /usr/local
fi
#special mnt point for /usr/local exists?
if [ ! -d /opt/$ZONE_NAME/local ]
then
mkdir -p /opt/$ZONE_NAME/local
fi
# 2)............................... pool creation
./create_pool_cmd.sh < pool_cmd_template.txt > /tmp/pool_commands.txt
pooladm -e # enable facility
#check for default config file exists, if not there
#create one with active configuration
if [ ! -f /etc/pooladm.conf ]
then
pooladm -s /etc/pooladm.conf
fi
poolcfg -f /tmp/pool_commands.txt # configure
pooladm -n > /tmp/pool.out 2>&1 # validate
if [ ! $? -eq 0 ]
then
echo ERROR: invalid pool configuration. please see /tmp/pool.out
exit 1
fi
#instantiate config at /etc/pooladm.conf
pooladm -c
# 3)............................... zone creation
./create_zone_cmd.sh < zone_cmd_template.txt > /tmp/zone_commands.txt
zonecfg -z $ZONE_NAME -f /tmp/zone_commands.txt
echo $ZONE_NAME was configured with this information:
echo ---------------------------------------------------------
zonecfg -z $ZONE_NAME info
echo ---------------------------------------------------------
zoneadm -z $ZONE_NAME install
zoneadm -z $ZONE_NAME boot
echo "to finish configuring your container please run: zlogin -C $ZONE_NAME"
7.2 Appendix 2: Setting System V IPC Kernel Parameters
Prior to the Solaris 10 OS, the System V IPC resources, consisting primarily of shared memory, message queues, and
semaphores, were set in the /etc/system file. This implementation had the following shortcomings:
• Relying on /etc/system as an administrative mechanism meant reconfiguration required a reboot.
• A simple typo in setting the parameter in /etc/system could lead to hard-to-track configuration errors.
• The algorithms used by the traditional implementation assumed statically-sized data structures.
• There was no way to allocate additional resources to one user without allowing all users those resources. Since
the amount of resources was always fixed, one user could have trivially prevented another from performing its
desired allocations.
• There was no good way to observe the values of the parameters.
• The default values of certain tunables were too small.
In the Solaris 10 OS, all these limitations were addressed. The System V IPC implementation in the Solaris 10 OS no
longer requires changes in the /etc/system file. Instead, it uses the resource control facility, which brings the following
benefits:
• It is now possible to install and boot an ASE 15.0 instance without needing to make changes to the /etc/system file
(or to resource controls in most cases).
• It is now possible to limit use of the System V IPC facilities on a per-process or per-project basis (depending on
the resource being limited), without rebooting the system.
• None of these limits affect allocation directly. They can be made as large as possible without any immediate
effect on the system. (Note that doing so would allow a user to allocate resources without bound, which would
have an effect on the system.)
• Implementation internals are no longer exposed to the administrator, thus simplifying the configuration tasks.
• The resource controls are fewer and are more verbosely and intuitively named than the previous tunables.
• Limit settings can be observed using the common resource control interfaces, such as prctl(1) and getrctl(2).
• Shared memory is limited based on the total amount allocated per project, not per segment. This means that an
administrator can give a user the ability to allocate a lot of segments and large segments, without having to give
the user the ability to create a lot of large segments.
• Because resource controls are the administrative mechanism, this configuration can be persistent using project(4)
and be made via the network.
In the Solaris 10 OS, the following changes were made:
• Message headers are now allocated dynamically. Previously all message headers were allocated at module load time.
• Semaphore arrays are allocated dynamically. Previously semaphore arrays were allocated from a
seminfo_semmns sized vmem arena, which meant that allocations could fail due to fragmentation.
• Semaphore undo structures are dynamically allocated per process and per semaphore array. They are unlimited in
number and are always as large as the semaphore array they correspond to. Previously there were a limited
number of per-process undo structures, allocated at module load time. Furthermore, the undo structures each
had the same, fixed size. It was possible for a process to not be able to allocate an undo structure, or for the
process’s undo structure to be full.
• Semaphore undo structures maintain their undo values as signed integers, so no semaphore value is too large to
be undone.
• Previously, all facilities allocated objects from a fixed-size namespace, and all objects were allocated at module
load time. All facility namespaces are now resizable, and will grow as demand increases. As a consequence of these changes,
the following related parameters have been removed. If these parameters are included in the /etc/system file on
a Solaris system, the parameters are ignored (see Table 2).
Parameter Name                 Brief Description
semsys:seminfo_semmns          Maximum number of System V semaphores
semsys:seminfo_semmnu          Total number of undo structures supported by the System V semaphore system
semsys:seminfo_semmap          Number of entries in the semaphore map
semsys:seminfo_semvmx          Maximum value a semaphore can be set to
semsys:seminfo_semaem          Maximum value a semaphore's undo structure can be set to
semsys:seminfo_semusz          Size of the undo structure
shmsys:shminfo_shmseg          Number of shared memory segments, per process
shmsys:shminfo_shmmin          Minimum shared memory segment size
msgsys:msginfo_msgmap          Number of entries in the message map
msgsys:msginfo_msgssz          Size of the message segment
msgsys:msginfo_msgseg          Maximum number of message segments
msgsys:msginfo_msgmax          Maximum size of a System V message
Table 2. System parameters no longer needed in the Solaris 10 OS
As described above, many /etc/system parameters are removed simply because they are no longer required. The
remaining parameters have more reasonable defaults, enabling more applications to work out-of-the-box without
requiring these parameters to be set.
Table 3 describes the default value of the remaining /etc/system parameters.
Resource Control            Obsolete Tunable      Old Default Value   New Default Value
process.max-msg-qbytes      msginfo_msgmnb        4096                65536
process.max-msg-messages    msginfo_msgtql        40                  8192
process.max-sem-ops         seminfo_semopm        10                  512
process.max-sem-nsems       seminfo_semmsl        25                  512
project.max-shm-memory      shminfo_shmmax        0x800000            1/4 of physical memory
project.max-shm-ids         shminfo_shmmni        100                 128
project.max-msg-ids         msginfo_msgmni        50                  128
project.max-sem-ids         seminfo_semmni        10                  128
Table 3. Default values for system parameters in the Solaris 10 OS
Setting System V IPC Parameters for ASE 15.0 Installation
Table 4 identifies the values recommended for /etc/system parameters by the ASE 15.0 installation guide and the
corresponding Solaris resource controls.
Parameter                Sybase Recommendation   Required in Solaris 10 OS   Resource Control         Default Value
shmsys:shminfo_shmmax    32 MB (minimum)         Yes                         project.max-shm-memory   1/4 of physical memory
shmsys:shminfo_shmseg    10                      No                          N/A                      N/A
Table 4. Recommended System V IPC settings for ASE 15.0 and corresponding resource controls
Using Resource Control Commands to Set System V IPC Parameters
The prctl command can be used to view and change the value of a resource control. The prctl command is invoked
with the -n option to display the value of a certain resource control. The following command displays the value of the
max-file-descriptor resource control for the specified process:
prctl -n process.max-file-descriptor <pid>
The following command updates the value of project.cpu-shares in the project group.staff:
prctl -n project.cpu-shares -v 10 -r -i project group.staff
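Changes made with prctl affect only the running entity; to make a setting persistent it can also be recorded in the project database with projmod, for example:
projmod -s -K "project.cpu-shares=(privileged,10,none)" group.staff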
8.0 REFERENCES
[1] Consolidating Applications with Solaris 10 Containers, Sun Microsystems, 2004
http://www.sun.com/datacenter/consolidation/solaris10_whitepaper.pdf
[2] Solaris 10 System Administrator Collection -- System Administration Guide: Solaris Containers-Resource
Management and Solaris Zones, Sun Microsystems, 2005
http://docs.sun.com/app/docs/doc/817-1592
[3] Solaris Containers--What They Are and How to Use Them, Menno Lageman, Sun BluePrints Online, 2005
http://www.sun.com/blueprints/0505/819-2679.html
[4] Solaris Zones section on BigAdmin System Administration Portal, Sun Microsystems
http://www.sun.com/bigadmin/content/zones
[5] Bigadmin Feature Article: Best Practices for Running Oracle Databases in Solaris Containers
http://www.sun.com/bigadmin/features/articles/db_in_containers.html