This document summarizes the work done to set up a computing cluster at Florida Tech. It describes installing Rocks on a frontend server and nodes, setting up Condor as the batch job system, and using OpenFiler for network attached storage. The cluster originally had one frontend and one compute node from the University of Florida. Future work involves recovering from a hard drive failure on the frontend and continuing the installation of the Open Science Grid.
CLUSTER COMPUTING
Undergraduate Research Report
Instructor: Marcus Hohlmann
Students: Jennifer Helsby, Rafael David Pena
With the help of: Jorge Rodriguez, UF
Fall 2006
Introduction
Cluster computing is an increasingly popular high-performance computing solution. Computing clusters make up over half of the top 500 most powerful computers in the world, and the number of clusters is growing daily. At Florida Tech, we are building a computing cluster in the Physics Department.
The purpose of this semester's investigations was to learn and implement the process of creating a fully functional cluster using Rocks 3.3.0 with Condor, a batch job system, installed; the end goal is the addition of Florida Tech's cluster to the OSG (Open Science Grid) Integration Test Bed. A trip to UF just before the semester started left us with one frontend and one computational node, and taught us how to customize the Rocks XML files for our purposes.
This document, in addition to the logbook, is to serve as a reference guide for future students who will work on this project.
Cluster Architecture
Florida Tech acquired ten computers, dual-CPU Intel Pentium 1.0 GHz servers, from the University of Florida earlier this year. The plan, as shown in the figure below, was to use one of these servers as the frontend, the cluster controller, and the remaining nine as computational nodes. The existing hardware was also to be utilized in the cluster, but not as computational nodes. We originally had two brands of motherboards, Asus and Gigabyte. The Asus motherboards are supported by Rocks, while the Gigabyte motherboards caused problems because the version of Rocks we used did not support them. Thus, the Asus motherboards have been used, with the Linux distribution OpenFiler, in NAS (Network Attached Storage) servers, and the Gigabyte motherboards will be used in workstations. Due to the growth of our cluster, an additional switch had to be purchased to expand the internal cluster network; Ethernet switches can be cascaded if crossover cables are used. This network is not particularly fault tolerant, but an enterprise-class switch starts at about $100 per port and so is not financially viable. For a ten-node cluster, the topology below is appropriate as far as fault tolerance is concerned.
[Figure: Topology of the cluster (icons courtesy of Cisco Systems). An administration workstation and the Internet connect to the frontend (uscms1). Cascaded Ethernet switches link the frontend to the network attached storage servers (nasraid1, nasraid2) and to the compute nodes (fltech01 through fltech09).]
ROCKS
Rocks is a revamped CentOS distribution that allows customization through a collection of XML files and software packages. The cornerstones of Rocks are the kickstart graph, which lays out the internal structure and available appliances within Rocks (refer to Appendix A), and the MySQL database used to manage the nodes from a central location (i.e. the frontend). The connections between bubbles on the kickstart graph tell Rocks which packages to install on each appliance. Connections can be made on the graph by editing and creating Rocks' XML files, which in turn changes what packages are distributed to the nodes. The kickstart graph is thus vital in generating kickstart files. Our kickstart graph contains customized configurations, shown in red, as part of the Florida Tech Roll.
Frontend Installation
To install the frontend, one boots off the Rocks 3.3.0 CD 1. There are two installation methods, CD or a central network location; we use the CD despite its slower speed. When the Rocks prompt appears, one types frontend to begin the installation process. The rolls on the CD are then loaded into memory. The information entered during the installation fills out the MySQL database and can be edited from the web interface installed by default, accessible by browsing to localhost. The frontend has two Ethernet adapters: eth0 is the cluster-side network and eth1 is the Internet adapter. Both must be configured for the cluster to function effectively as one machine.
Node Installation
To install a node, the insert-ethers command must first be run from the frontend. This allows the frontend to be used as a DHCP server. The frontend collects the MAC address of the node and assigns it an internal IP; these addresses are stored in the cluster MySQL database. This command also lets the frontend operator select the "Appliance Type" of the node to be installed, e.g. compute node, web portal, etc. In our case, we created our own appliance type, by editing the MySQL database and adding a new XML file, that installs a Rocks compute node together with Condor (see Appendix B for detailed procedures). After the frontend has recognized the presence of a node and assigned it an IP, it sends the node a kickstart file. Kickstart files allow for automated installation: they contain the configuration information for every step in the Anaconda installation process.
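As a minimal sketch of that workflow (the appliance name is the one created in Appendix B; the exact screens vary slightly by Rocks version), the frontend-side sequence is:
# insert-ethers
(select "Condor Compute" on the "Choose Appliance Type" screen, then
network-boot the new node; insert-ethers captures its MAC address,
assigns it an internal IP, and serves it a kickstart file)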
System Administration
Adding a user in Rocks is done with the normal Linux command, useradd, which runs a series of scripts and updates the 411 database. One of the problems we encountered this semester was dealing with 411, the authentication protocol of Rocks, alongside NIS, the authentication protocol of OpenFiler, our NAS operating system. Our chosen solution is to also install an NIS server on the frontend and have both NIS and 411 read the same configuration files (i.e. /etc/passwd, /etc/group, /etc/shadow, etc.). This is not an ideal solution, as it requires more administration, but it is appropriate for a 10-node cluster. Although authentication could be dropped altogether, having an authentication method ensures that users can access only their own data and provides added security; we would not want someone to accidentally delete another user's files, for example.
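For example, creating an account and pushing the updated login files out to the nodes looks roughly like this (the username is made up, and the explicit 411 push is a sketch of the manual equivalent; Rocks normally triggers it automatically):
# useradd jsmith
# passwd jsmith
# make -C /var/411
The make step re-encrypts the 411-managed files (/etc/passwd, /etc/group, etc.) so the nodes pick up the new account.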
Condor - Batch Job System
In our cluster we use Condor as our batch job system. Condor is a piece of software that enables us to distribute the load of a computing task over the 20 CPUs in the cluster. Condor is also well suited for grid computing, as it is able to submit jobs to machines located all over the world. Moreover, most code does not need to be recompiled to run on Condor's computing resources. Condor is a very robust and complex application, as one can see from the 600-page manual.
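To give a flavor of how work is handed to Condor, here is a minimal sketch of a submit description file (the executable and file names are hypothetical):
universe   = vanilla
executable = simulate
output     = simulate.out
error      = simulate.err
log        = simulate.log
queue
The job would then be submitted from a terminal with condor_submit simulate.sub, and its progress followed with condor_q.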
Although Rocks provides a Roll for the initial installation of Condor, this is not ideal for our use because the Rocks Roll has features unsupported by OSG. Therefore we have to install Condor manually, which fortunately is not too daunting a task (see Appendix C for detailed procedures). Once the RPMs are installed, the configuration files located in the /opt/condor/ directory need to be edited. Condor must then be rolled out to the nodes via the XML files and by adding the RPMs to the /home/install/ directory tree. After the nodes are reinstalled with their new Condor configuration, the cluster is a fully qualified computing behemoth. One can see the status of the machines in Condor by typing condor_status at a terminal:
Name          OpSys    Arch   State      Activity   LoadAv  Mem   ActvtyTime
vm1@fltech01. LINUX    INTEL  Unclaimed  Idle       0.000   249   0+00:16:12
vm2@fltech01. LINUX    INTEL  Unclaimed  Idle       0.000   249   0+00:16:12
vm1@fltech02. LINUX    INTEL  Unclaimed  Idle       0.000   249   0+00:16:41
...
              Machines  Owner  Claimed  Unclaimed  Matched  Preempting
 INTEL/LINUX        20      2        0         18        0           0
       Total        20      2        0         18        0           0
Network Attached Storage with OpenFiler
We decided to do away with Rocks on our file server machines, as it was proving very difficult to create a kickstart file that automates the creation of a RAID 5 array. Instead, we are now using Openfiler, a Linux distribution designed with network attached storage in mind (see Appendix D). We also no longer use RAID 1 (two hard drives in an array with identical data), as it has two significant drawbacks:
• Slow writing, as each hard drive must write all data.
• Inefficient use of space: for 400 GB of hard drive capacity, we only get 200 GB of usable space.
Thus, we decided to use RAID 5 (disk striping with distributed parity) for the following reasons (a sketch of the manual setup follows this list):
• Faster writing: each hard drive needs to write only 1/3 of the data.
• More efficient use of space: the efficiency of RAID 5 increases as the number of hard drives in the array increases. A minimum of three can be used.
• Fault tolerance: if any one hard drive fails, the data on that drive can be reconstructed from the data on the other two drives.
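For reference, a three-disk software RAID 5 array like ours could be assembled by hand as follows (a sketch only; the device names are hypothetical, and Openfiler's installer performs the equivalent steps for us):
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# mkfs.ext3 /dev/md0
# mount /dev/md0 /RAID
Losing any single disk leaves the array degraded but readable; after replacing the disk, mdadm --add /dev/md0 <new partition> rebuilds the parity.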
Future Work
At present, we are recovering from a hard drive failure on the frontend. Ironically, we were performing a backup of the hard drive, to ensure our data was safe before installing OSG, when the drive failed. The first task is to restore or recreate the lost configuration files; Rocks has been reinstalled and is now waiting on those files. Using our custom condor-compute appliance (which we also need to recreate), we can fairly easily enter the nodes into the Rocks database. Alternatively, we may be able to manually enter the MAC and IP addressing information of the nodes into the frontend's MySQL database without reinstalling the nodes, which would be the most convenient solution. Once we are back up to our previous point, the second NAS, nasraid2, can be installed using new hard drives. Then the frontend can be backed up on both NAS servers.
After backup, the process of installing OSG can continue with the installation of Pacman onto the frontend and the NAS, to house the C.E. (Compute Element) Client software. The C.E. Client software must be installed on the NAS, where all nodes may access it. Then the VDT (Virtual Data Toolkit) software can be installed on the frontend.
Conclusion
During this semester we were able to set up a 20-CPU cluster with customized XML files, supporting network attached storage on a separate Linux distribution, and begin the OSG installation process. With the help of Jorge Rodriguez, we have made significant headway toward our goal of contributing Florida Tech computing resources to the Open Science Grid community. It has been an important learning experience in cluster computing and system administration.
APPENDIX A: THE KICKSTART GRAPH
[Figure: the kickstart graph for our Rocks 3.3.0 installation; the Florida Tech Roll customizations appear in red.]
APPENDIX B: CREATION OF THE CONDOR-COMPUTE APPLIANCE
In detail, this is the process of creating the new appliance mentioned earlier in this paper. We want to create an appliance that rolls Condor out to the nodes with Rocks:
I. An XML file must be created for the new appliance, condor-compute.xml, which must be located in the /home/install/site-profiles/3.3.0/nodes directory. A sample configuration is as follows:
<?xml version="1.0" standalone="no"?>
<kickstart>
<description>Condor-Compute</description>
<changelog></changelog>
<post>
<file name="/etc/motd" mode="append">Condor Compute</file>
</post>
</kickstart>
II. Now we need to make the links between the bubbles on the kickstart graph in order to tell Rocks which packages we want the new appliance to have (refer to Appendix A). We connect the new condor-compute node to the node that already exists; its configuration information is located in compute.xml. To create links between bubbles, one must create a new file in /home/install/site-profiles/3.3.0/graphs/default, such as links-condor-compute.xml. It needs to contain, in XML, the links between bubbles, coded as follows:
<?xml version="1.0" standalone="no"?>
<graph>
<description></description>
<changelog></changelog>
<edge from="condor-compute">
<to>compute</to>
</edge>
<!-- Insert additional links here in the same format -->
<order gen="kgen" head="TAIL">
<tail>condor-compute</tail>
</order>
</graph>
III. These changes to the internal structure of Rocks need to be propagated throughout the cluster. To do this, cd to /home/install and, as root, run:
# rocks-dist dist
IV. The new appliance information should also be entered into the MySQL database:
# add-new-appliance --appliance-name "Condor Compute" --xml-config-file-name condor-compute
And that’s it. Now when the insert-ethers command is used on the frontend, on the
“Choose Appliance Type” screen, “Condor Compute” will be displayed as one of the possible
choices.
APPENDIX C: CONDOR INSTALLATION AND CONFIGURATION
Here is the procedure by which we installed and configured the Condor batch job system. Although Condor is available as a Roll for Rocks, we have, under advisement from Dr. Jorge Rodriguez, decided to install Condor via RPMs and configure the setup manually.
I. The first step is installing Condor, which we do via RPM (Red Hat Package Manager). The files can be downloaded from http://www.cs.wisc.edu/condor/downloads/. Once on this page, download version 6.6.11 to use with Rocks 3.3; newer versions are not supported by OSG. In a terminal, go to the directory where you downloaded the file and type the following command:
# rpm -Uvh condor-6.6.11-<rest of filename>.rpm
II. After this file is installed, Condor needs to be configured by adding a file to /opt/condor-6.6.11/ called condor-config.local. This file contains information about which machine will be the central manager and where to get information about the rest of the worker nodes. Here are the contents of that file:
CONDOR_ADMIN            = jhelsby@fit.edu
CONDOR_IDS              = 502.502
MAIL                    = /bin/mail
CPUBusy                 = False
UID_DOMAIN              = local
FILESYSTEM_DOMAIN       = local
CONDOR_HOST             = uscms1.local
DAEMON_LIST             = MASTER, STARTD, SCHEDD
START                   = True
RANK                    = 0
PREEMPT                 = False
PREEMPTION_REQUIREMENTS = False
VACATE                  = False
SUSPEND                 = False
CONTINUE                = True
KILL                    = False
PERIODIC_CHECKPOINT     = False
WANT_SUSPEND            = False
WANT_VACATE             = False
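Once this file is in place, the Condor daemons on the central manager can be started with the init script shipped in the RPM (a sketch; the script path is the RPM default on our systems):
# /etc/init.d/condor start
Running condor_status afterwards should show the machine in the pool.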
III. Once this file is written, Condor can be configured to function properly on the nodes. The RPM needs to be added to the following directory:
/home/install/contrib/enterprise/3/public/<arch>/RPMS/
IV. Rocks needs XML files to make new kickstart files. Inside /home/install/site-profiles/3.3.0/nodes there is a skeleton.xml file which needs to be copied:
# cd /home/install/site-profiles/3.3.0/nodes/
# cp skeleton.xml extend-compute.xml
In this file, there are tags such as the following which can be modified further if need be:
<!-- There may be as many packages as needed here. -->
<package> condor </package>
<package> <!-- insert your 2nd package name here --> </package>
<package> <!-- insert your 3rd package name here --> </package>
V. Condor needs to be configured to work properly with the rest of the cluster. This is done much as it was on the frontend, but doing it by hand would be quite difficult to repeat if, say, we had hundreds of nodes. We can instead use Rocks' XML files to configure the nodes automatically, as we did for the RPMs. Two more files must be added to the /home/install/site-profiles/3.3.0/nodes/ directory: condor-master.xml and condor-compute.xml. These two files perform the same task we performed when we created condor-config.local, only this time it is automated. At this point we need to ensure that our changes take effect, so we change directory to /home/install and run:
# rocks-dist dist
After we do this, we can test everything using the following command to reboot and reinstall a test node:
# /boot/kickstart/cluster-kickstart
If everything works correctly, the node should reboot, connect to the newly configured Condor master (i.e. the frontend), and be ready to accept Condor jobs.
APPENDIX D: INSTALLING AND CONFIGURING OPENFILER
Installing Openfiler is done via CD, and it is a fairly straightforward Linux installation procedure that walks us through setting up the RAID 5 partition. After the installation is complete, we must prepare Openfiler to function properly with the frontend and the rest of the cluster. The NAS needs direct communication with the frontend, which, acting as a DHCP server, provides it with an IP address. To properly configure Openfiler, we will have to edit two MySQL databases containing the required network data. Because Openfiler is a different distribution than Rocks, we cannot automate this process.
I. Openfiler's hostname must be configured so the rest of the cluster can refer to it when making requests for files and jobs. We do this by editing /etc/sysconfig/network. This file contains a variable called HOSTNAME, which we need to change to something appropriate; in our installation we use nasraid1.local.
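Assuming the stock Red Hat-style network file, the edit amounts to a single line:
HOSTNAME=nasraid1.local
The change takes effect once the network service is restarted (done in step III below).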
II. Next, we address communication between the frontend and nasraid1. In Rocks, the frontend controls the IP addresses and network information for the rest of the machines on the network, so our first order of business is giving nasraid1 its appropriate configuration information. The two MySQL databases that must be edited are the networks and nodes databases on the frontend. Browsing to http://localhost/admin/phpMyAdmin/index.php on the frontend connects us to the databases we need. Both need a combination of the following data:
ID - used by the database to connect the tables together
Name - the hostname we added to /etc/sysconfig/network
Membership - in our case membership 2, which is for storage
CPUs - needs to be set to 1 with our hardware
Rack - NULL for the NAS
Rank - NULL as well
Node - simply the next available number
MAC - the MAC address of the network card on the NAS
Netmask - the netmask found when ifconfig is run on the frontend
Gateway - set to NULL; we don't need it
Device - the device used on the frontend (i.e. eth0)
Module - simply use 8139too (the module used by the kernel to recognize the network card)
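Purely as an illustration of how these fields map onto rows (the column names are simplified from the list above and do not necessarily match the real Rocks schema; the MAC and node number are made up), the edits performed through phpMyAdmin are equivalent to something like:
INSERT INTO nodes (Name, Membership, CPUs, Rack, Rank)
  VALUES ('nasraid1', 2, 1, NULL, NULL);
INSERT INTO networks (Node, MAC, Netmask, Device, Module)
  VALUES (11, '00:11:22:33:44:55', '255.0.0.0', 'eth0', '8139too');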
After this information has been added to the database, we need to run the following command on the frontend to ensure the changes take effect:
# insert-ethers --update
III. Next, a connection needs to be established with the NAS; the frontend is now ready to accept connections from it. Run the following command as root on the NAS to establish a network connection with the frontend:
# /etc/init.d/network restart
Verify that there is a connection with:
# ping uscms1
IV. Mounting the RAID on the nodes and the frontend requires a bit of work. The first step is to make the partitions accessible to all the other machines on the network. This is done through the /etc/exports file, which we must edit by adding a new line such as the following:
/RAID 10.0.0.0/255.0.0.0(rw)
where /RAID is the directory where the RAID partition is mounted.
This is where things get a bit repetitive. We need to edit the /etc/fstab file on the frontend and on each node by adding the following line to the bottom:
nasraid1:/RAID /mnt/nasraid1 nfs rw 0 0
The best way of doing this is to run:
# cluster-fork 'echo "nasraid1:/RAID /mnt/nasraid1 nfs rw 0 0" >> /etc/fstab'
(quoting the whole command so the redirection happens on each node rather than on the frontend)
You must then type the root password repeatedly, once per node. The cluster-fork command is a built-in Rocks program that sends a given command to all the compute nodes. This is best done once all the nodes have been initially installed. If a new node is installed later, or you need to reinstall (shoot) a node, add the line to that node manually; rerunning the cluster-fork command would append a duplicate entry on every other node.
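With the fstab entries in place, the share can be mounted everywhere without rebooting (a sketch; the mount point is the one assumed above):
# mkdir -p /mnt/nasraid1
# mount /mnt/nasraid1
# cluster-fork 'mkdir -p /mnt/nasraid1 && mount /mnt/nasraid1'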
V. From now on, all the files can be accessed through the /RAID directory. One major drawback of mounting the NAS in the way stated above is that authentication is not enforced: all files are created without owner or group information, which is a problem for protecting users' data. We are working on a solution by configuring the frontend as an NIS server and sharing the authentication files with OpenFiler.
BIBLIOGRAPHY
[1] http://www.top500.org
[2] http://www.rocksclusters.org/wordpress/?page_id=4
[3] http://www.cs.wisc.edu/condor/manual.v6.8.2/index.html
[4] http://www.opensciencegrid.org
[5] http://www.rocksclusters.org/rocks-documentation/4.2.1/
[6] http://www.openfiler.com