This document provides an installation guide for Red Hat HPC Solution 5.5, which enables the creation, management, and use of high performance computing clusters running Red Hat Enterprise Linux. It covers installation prerequisites, procedures, verifying installs, adding nodes, managing node groups, synchronizing files, known issues, and revision history. The guide is written by Mark Black, Kailash Sethuraman, and Daniel Riek from Red Hat and Platform Computing Inc.
The document discusses pointers and user spaces in RPG IV. It explains that pointers contain memory addresses and allow fields to be based on and dynamically allocated based on the pointer value. Pointers are used with parameter passing, multiple occurrence data structures, C functions, dynamic memory allocation, and user spaces. The document provides examples of using pointers with parameter lists, accessing trigger buffers, and dynamic memory allocation.
This document provides information on the AddUsers.exe and ARP.exe Windows commands and the ASSOC command. It describes:
- AddUsers.exe automates creating large numbers of user accounts from a comma-delimited file and has options to create, dump, or erase accounts.
- ARP.exe displays and modifies the IP to physical address translation tables used for address resolution, allowing the viewing, adding, and deleting of ARP entries.
- ASSOC associates file extensions with file types in Windows so applications know what type of file it is based on the extension. It allows displaying, adding, and changing the file type associated with an extension.
The resume provides details of Monika Sharma, a 20-year-old student currently pursuing a B.Com(H) degree from ICG - The IIS University. She achieved academic and extracurricular success in school, including serving as head girl and being an accountancy topper, and seeks a challenging position where she can effectively contribute her skills and talents.
Tool Development 08 - Windows Command Prompt (Nick Pruehs)
Chapter 08 of the lecture Tool Development taught at SAE Institute Hamburg.
Introduction to the Windows command prompt, command-line arguments, and calling external programs in .NET.
1. The document discusses fundamental DOS commands like DIR, FORMAT, COPY, PATH, LABEL, VOL, MD, CD, and DEL. It provides examples of how to use each command.
2. Rules for naming files in DOS are described, including allowed/prohibited characters and reserved words. File extensions help identify file types like .exe, .com, .bat, .bak, .bas, etc.
3. Operating systems like DOS, Windows, Linux, MacOS, and UNIX are introduced. MS-DOS is characterized as a disk-based, single-user, single-task OS with a character-based interface. Ways to access DOS commands from Windows are also described.
The document describes how to use the object file display (OFD) utility to create a DSP boot image from a common object file format (COFF) file. OFD converts COFF files into XML format, extracting section information. A Perl script then processes the XML to create a C source file containing the boot image, which can be included in a host application and downloaded to the DSP. OFD provides a more flexible way to process COFF files than traditional HEX utilities by extracting detailed information without needing to understand low-level COFF formats.
This document provides an introduction to basic Unix/Linux commands. It discusses how to log in and out of a Unix server, interpret the command prompt, enter commands, and manage basic files. The objectives are to familiarize students with accessing a Unix/Linux server, understanding the command prompt, using basic file commands, and changing passwords. Steps are provided to log in, view the prompt, enter sample commands like man and passwd, and change a password. Key terms like absolute vs relative file names are also introduced.
The eZ Open Document Format (eZODF) extension allows importing and exporting of OpenDocument Text (.odt) files in eZ Publish. It supports importing Microsoft Word documents by converting them to .odt format using OpenOffice.org. Documents can be imported by uploading the file and placing it in the content tree. Supported data types and formatting for import and export are described. Templates can be used for custom formatting of exported .odt files.
The document discusses Disk Operating System (DOS) and the types of commands in DOS. It describes how DOS divides disks into system and data areas, with the system area containing the boot, FAT, and root directory sections. It also explains the DOS command prompt and different types of internal and external commands used in DOS.
This document discusses data files in C programming. It covers opening and closing data files, creating data files, and processing data files. Some key points:
1) To access a data file in C, it must first be opened using the fopen() function, which returns a FILE pointer. This pointer is then used to read from or write to the file.
2) A data file can be created by writing data from a program to a new file using functions like putc() and fputs().
3) To process a data file, functions like fgetc() and fputs() can be used to read and write data, character by character or as strings. Command-line arguments passed to the program are also discussed.
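The open-write-read life-cycle that summary describes for C's fopen()/fputs()/fgetc() follows the same pattern in any language. Here is a minimal sketch in Python (the file name and contents are hypothetical examples), including taking the file name from the command line as the summary mentions:

```python
import os
import sys
import tempfile

def write_data_file(path, lines):
    """Create a data file and write strings to it (like fopen("w") + fputs)."""
    with open(path, "w") as f:
        for line in lines:
            f.write(line + "\n")

def read_data_file(path):
    """Read the file back one character at a time (like fgetc in a loop)."""
    chars = []
    with open(path, "r") as f:
        while True:
            c = f.read(1)          # one character at a time; EOF returns ""
            if c == "":
                break
            chars.append(c)
    return "".join(chars)

if __name__ == "__main__":
    # The file name may come from the command line, with a temp-file fallback.
    path = sys.argv[1] if len(sys.argv) > 1 else os.path.join(
        tempfile.gettempdir(), "demo_data.txt")
    write_data_file(path, ["alpha", "beta"])
    print(read_data_file(path), end="")
```

As in the C version, the file must be opened before use and is closed when the `with` block ends; forgetting the close (fclose in C) risks losing buffered data.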
MEX compiles source code files into MEX-files that can be executed from within MATLAB. MEX accepts C, C++, Fortran, and Ada source files along with object files and libraries. It uses options files to control compiler settings and outputs platform-specific MEX-file extensions. Command line options allow overriding options file settings and controlling the compilation process.
This document provides an overview of Linux terminal sessions and system utilities. It discusses employing fundamental utilities like ls, wc, sort, and grep. It also covers managing input/output redirection, special characters, shell variables, environment variables, and creating shell scripts. Key topics include using utilities to list directories, count file elements, sort lines, and locate specific lines. It also discusses starting additional terminal sessions, exiting sessions, and locating the graphical terminal.
This document provides an overview of teaching tips, quick quizzes, class discussion topics, and additional resources for teaching a lesson on the MS-DOS operating system. It covers the history and evolution of MS-DOS, its design goals of accommodating single users, memory management, process management using interrupt handlers, device management using installable device drivers, file organization and management using directories and file allocation tables, the command-driven user interface, and use of batch files, filters, pipes, and commands like TREE.
This document summarizes the inputs and outputs of the mtLine and fdData programs, which calculate transmission line parameters. mtLine accepts conductor and frequency input data to output line impedance and admittance matrices. fdData produces a frequency-dependent line model file for transient analysis in MicroTran. Both programs can read input files and write output files, with mtLine supporting multiple frequency cases and fdData generating a single line model file. The document describes the file formats and running of the programs in both prompt and command-line modes.
This document discusses different types of file structures and access methods. It describes sequential files, which can only be accessed sequentially from beginning to end, and indexed and hashed files, which allow random access via keys. It also discusses updating sequential files, collision resolution methods for hashed files, directories for organizing files, and the difference between text and binary files.
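The contrast that summary draws between sequential access and keyed (hashed) access, including one common collision-resolution method, can be sketched in a few lines. The bucket count and record layout below are illustrative assumptions, not taken from the document:

```python
class HashedFile:
    """Toy fixed-size hashed record store using linear probing on collisions."""

    def __init__(self, n_buckets=8):
        self.buckets = [None] * n_buckets   # each slot holds (key, record) or None

    def _home(self, key):
        """Home bucket for a key: hash reduced modulo the file size."""
        return hash(key) % len(self.buckets)

    def put(self, key, record):
        i = self._home(key)
        for step in range(len(self.buckets)):       # linear probing on collision
            slot = (i + step) % len(self.buckets)
            if self.buckets[slot] is None or self.buckets[slot][0] == key:
                self.buckets[slot] = (key, record)
                return slot
        raise RuntimeError("hashed file is full")

    def get(self, key):
        i = self._home(key)
        for step in range(len(self.buckets)):
            slot = (i + step) % len(self.buckets)
            if self.buckets[slot] is None:
                return None                          # key absent
            if self.buckets[slot][0] == key:
                return self.buckets[slot][1]         # random access by key
        return None
```

A sequential file, by contrast, would have to scan every record from the beginning until the key matched; the hashed layout jumps straight to the home bucket and probes forward only when two keys collide.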
The document provides an overview of email technology including how email works, common email protocols like SMTP and POP3, email standards, and considerations for implementing email servers. Key points include:
1. Email is sent via SMTP and retrieved via POP3 or IMAP. It is stored on email servers in mailboxes that clients can access using these protocols.
2. SMTP is the main protocol for sending email between servers. It is used to route messages from the sender to the recipient's mail server.
3. Standards like RFC822 and MIME define email formats and syntax for headers, attachments, etc. to ensure interoperability between clients and servers.
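The RFC 822/MIME message structure mentioned in point 3 can be inspected with Python's standard email library; the addresses and subject here are made-up examples:

```python
from email.message import EmailMessage

# Build a minimal RFC 822-style message: headers, a blank line, then the body.
msg = EmailMessage()
msg["From"] = "alice@example.com"      # example addresses, not from the document
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello over SMTP!")    # sets a text/plain MIME body

# as_string() yields the wire format: headers such as MIME-Version and
# Content-Type, a blank separator line, and the body text.
print(msg.as_string())
```

The resulting text is what an SMTP client hands to the server after the DATA command, and what POP3 or IMAP later returns unchanged to the reading client.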
This document provides an overview of some key conventions in LaTeX, including the TeX Directory Structure (TDS) that organizes TeX-related files in a structured folder hierarchy. It describes the standard locations for files like documentation, fonts, packages, and programs. It also discusses TeX's treatment of special characters and how they are displayed, as well as how to input mathematical symbols, Greek letters, and accented characters. The document is intended as an introductory tutorial on LaTeX conventions.
PuTTY is a free and open-source terminal emulator and SSH client. It allows users to connect to other systems running SSH, Telnet, or Rlogin servers over the network. PuTTY can be downloaded and installed easily without any configuration required. It can be configured by specifying connection settings like the host name, protocol, encryption, and saved for future use. Basic UNIX commands in PuTTY allow users to navigate directories, view and edit files, install and run programs. Common commands include ls, cd, cat, vi, more, grep, find, man and others. PuTTY provides secure remote access and administration of UNIX servers through its simple terminal interface.
This document provides an overview of system administrator tasks and basic UNIX concepts. It discusses the roles and responsibilities of system administrators, the structure and components of UNIX operating systems, basic commands for navigating the file system, managing files and directories, editing text, and running processes. It also covers shells, variables, and cron jobs for scheduling automated tasks. The document concludes with introductions to AIX operating systems and IBM pSeries servers.
In MS-DOS (Disk Operating System) there are two types of basic DOS commands: internal commands and external commands, each used to perform a specific task or operation. Internal DOS commands are those included in the command processor (COMMAND.COM). Because they are built into the COMMAND.COM file, they are loaded into memory when the computer boots, and these basic DOS commands remain available for as long as the computer is on.
This document provides information about the MS-DOS operating system, including its history, structure, files, commands, and more. It discusses that MS-DOS is a single-user, single-tasking operating system that uses a command line interface. It describes the system files used by MS-DOS like IO.SYS, MSDOS.SYS, and COMMAND.COM. It also summarizes the structure of MS-DOS including the operating system loader, BIOS, kernel, and user interface. Finally, it provides examples of various internal and external commands used in MS-DOS.
The document discusses internal commands in DOS. It defines internal commands as built-in commands that are loaded with the operating system into memory during booting and remain resident as long as the computer is on. It provides examples of common internal commands like DIR, COPY, DEL, TYPE, CD, MD, RD, and explains what each command does and provides sample syntax. The document also discusses conventions used in command descriptions and provides examples of using wildcards with commands.
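The wildcard behaviour those examples rely on (the `*` and `?` used with DIR, COPY, or DEL) can be tried out with Python's fnmatch module, which implements the same shell-style patterns; the file names below are made up for illustration:

```python
from fnmatch import fnmatch

# DOS-style wildcards: '*' matches any run of characters, '?' exactly one.
patterns = {
    "*.TXT":     ["REPORT.TXT", "NOTES.TXT"],   # all .TXT files, as in DIR *.TXT
    "DATA?.BAK": ["DATA1.BAK", "DATA2.BAK"],    # one character in the '?' position
}

for pattern, names in patterns.items():
    for name in names:
        print(name, "matches", pattern, "->", fnmatch(name, pattern))
```

Note that `DATA?.BAK` matches `DATA1.BAK` but not `DATA10.BAK`, since `?` stands for exactly one character.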
Useful Linux and Unix commands handbook (Wave Digitech)
This article provides practical examples for the most frequently used commands in Linux/UNIX. Helpful for engineers, trainee engineers, and software developers. Handy notes for all Linux & Unix commands.
This document provides a comprehensive list of Linux commands, files, directories, and shell variables. It begins with an introduction and then covers shorthand at the command prompt, typical dot files, useful files, important directories, bash shell variables, daemons and services, window managers, an alphabetical list of commands, and notes on applications. The document is intended to give beginners, programmers, and professionals a jumpstart on common Linux commands and essential system information. It provides high-level overviews of the key components that make up a Linux system and environment.
This document provides a cheat sheet of common Linux commands and their usage. It covers basic file operations like copying, moving, deleting files and directories. It also includes commands for viewing files, compressing/decompressing files, finding files, remote access, and getting system information. The commands are explained over 3 pages with examples of proper syntax and usage for each one.
Linux is a prominent example of free and open source software. It can be installed on a wide variety of devices from embedded systems to supercomputers. Linux is commonly used for servers, with estimates that it powers around 60% of web servers. Linux distributions package the Linux kernel with other software like utilities, libraries and desktop environments. Programming languages and build tools like GCC are supported. Embedded Linux is often used in devices due to its low cost and ease of modification.
The document provides an overview of Linux operating system concepts including:
- Linux is an open source operating system that interacts with hardware and allocates resources.
- It supports multi-tasking and multi-user environments. Common distributions include Debian, Ubuntu, and Red Hat.
- Key components include the kernel, shell programs, file management commands, text editors, browsers, and programming tools.
This document provides an overview of file administration in Linux. It describes the three types of files in Linux - ordinary disk files which contain user data, special files which represent devices, and directory files which contain other files and directories. It outlines guidelines for naming files and directories, explaining which characters to avoid. It also introduces the file command for determining a file's type and describes the basic Linux directory structure with files and directories organized in a tree format.
What is DCA (Diploma of Computer Application) Detail, Syllabus, Courses.pdf (RohitRoshanBengROHIT)
What is DCA? (Diploma in Computer Application) Course details, syllabus:
DCA stands for Diploma in Computer Application. It is a one-year diploma program in the field of computer applications that includes the study of a variety of software topics, including HTML, MS Office, Internet applications, and operating systems.
What is DCA (Diploma of Computer Application) Detail, Syllabus, Courses.pdf (RohitRoshanBengROHIT)
A Diploma in Computer Applications (DCA) is a short-term (1-2 year) technical diploma program that deals with the fundamentals of computer applications.
Linux is an open-source operating system used widely for servers and can also be installed on desktops and embedded devices. It uses a modular kernel called Linux and source code is freely available under licenses like GPL. Common Linux distributions include Red Hat, Debian, Ubuntu and others. The Apache web server is widely used open-source software that helped popularize the World Wide Web and can be configured using directives in configuration files.
The document provides an overview of various operating systems including UNIX, Linux, and Windows. It discusses the history and development of UNIX including early projects at Bell Labs and Berkeley. It also summarizes key features of UNIX such as security, reliability, and multi-user support. The document then describes the UNIX directory structure and common commands like ls, cd, cat, and man.
The structure of Linux - Introduction to Linux for bioinformatics (BITS)
This third slide deck of the training 'Introduction to Linux for bioinformatics' gives a broad overview of the Linux file system structure. It also very gently introduces the command line.
Linux is a widely used open-source operating system that can run on desktops, servers, and embedded devices. It includes basic commands like cal, date, cd, and cat. The document also provides overviews of installing and configuring the Apache web server, PHP, and MySQL to set up a basic LAMP stack on a Linux system.
PowerPoint on Linux commands, Apache, PHP, MySQL, HTML, CSS, Web 2.0 (venkatakrishnan k)
Linux is a widely used open-source operating system that can run on desktops, servers, and embedded devices. The document provides basic commands for Linux like cal to view a calendar, date to check the date and time, and cd to change directories. It also gives an overview of installing and configuring web servers like Apache and PHP as well as databases like MySQL on a Linux system.
This chapter discusses the history and varieties of UNIX and Linux operating systems. It describes how to install Linux, configure users and permissions, and interconnect Linux with other network operating systems using tools like Samba, WINE, VMware and Telnet. The chapter also provides examples of basic Linux commands and how to set up a Linux server with the required hardware specifications.
The document outlines a presentation on becoming a "rockstar" with Drupal. It discusses Drupal's large open source community and code base. It covers best practices for code structure, naming conventions, deployment strategies like Features and Configuration Management. It also summarizes caching options like Memcache, Varnish and Boost as well as security practices and the flexibility provided by Drupal's hooks, API and thousands of contributed modules. The presentation concludes with an overview of the command line tool Drush and its uses in deployment, site management and more.
Linux Administration in this basic commands are there & also advanced commands are also there,It will be very use full for everyone who are all intrested in learning Linux,Which means everyone learn Linux esaliy.
The document discusses various topics related to Linux administration. It covers Unix system architecture, the Linux command line, files and directories, running programs, wildcards, text editors, shells, command syntax, filenames, command history, paths, hidden files, home directories, making directories, copying and renaming files, and more. It provides an overview of key Linux concepts and commands for system administration.
The document acknowledges and thanks several people for their contributions to an internship program. It thanks the course coordinator for their support, the librarian and lab assistant for their hard work, and other staff members for their assistance. It also thanks faculty, the program coordinator, and friends who helped as interns for their ideas and contributions throughout the project.
The document acknowledges and thanks several people for their contributions to an internship program. It thanks the course coordinator for their support, the librarian and lab assistant for their hard work, and other staff members for their assistance. It also thanks faculty, the program coordinator, and friends who worked as interns for their help and ideas throughout the project.
Some key features of 4DOS include over 110 commands that enhance and expand on standard DOS commands, customizable colors, powerful file searching, interactive help, command line editing features, and tab completion of file names. The guide explains how to install and use 4DOS and where to find additional documentation.
1) The document discusses a presentation about implementing security best practices on Linux systems. It provides information about the speaker's background and qualifications in cybersecurity and Linux.
2) The presentation covers topics like cybersecurity principles, Linux security hardening techniques, and using the CIS benchmarks and CIS-CAT Lite tool to assess and improve the security of Ubuntu systems.
3) It encourages attendees to ask questions to learn more about securing Linux and have a chance to win prizes from the event sponsor, Biznet Gio.
This document discusses implementing DevOps for large enterprises. It covers the basics of enterprise DevOps, challenges in adopting it, strategies for doing so successfully, key practices and principles, and best practices for transformation. The session will provide insights on leveraging DevOps at scale in large organizations.
This document discusses how Acronis is helping the cloud provider Biznet Gio to simplify business continuity for their customers. Key benefits of using Acronis include a flexible licensing model, single dashboard for management across regions, and white label integration to easily deliver backup and disaster recovery as a service. Acronis cyber infrastructure allows for no data transfer costs and ensures high availability through multi-region support and automated disaster recovery plans.
This document discusses the importance of having a business continuity plan (BCP) to protect critical business services from disasters and interruptions. It outlines key elements of an effective BCP such as risk assessment, priority setting, recovery strategies, testing, and maintenance. The document also introduces disaster recovery as a service (DRaaS) as a cost-effective solution that can provide data replication, high availability, and rapid recovery in the event of an outage. DRaaS helps ensure business continuity with minimal on-site infrastructure and reduced costs compared to traditional disaster recovery methods.
The document provides a summary of the 31-year history of Linux from 1991 to 2022. It begins with Linus Torvalds announcing and releasing the initial Linux kernel in 1991. Key events include it becoming open source in 1992, the first Linux distributions in the early 1990s, and the growth of desktop Linux with projects like GNOME and KDE in the late 1990s. The document then covers increased adoption in Indonesia and events like the Indonesia Go Open Source initiative in the 2000s. It concludes with current trends like open source adoption concerns and most used open source technologies.
This document provides an overview of considerations for choosing container storage for Kubernetes. It discusses the existing storage landscape, including open-source and commercial options. Key factors to consider include supported storage types, data protection and replication capabilities, dynamic provisioning, and whether container-native storage (CNS) or container-attached storage (CAS) is preferable based on workload needs. Performance benchmarks show Piraeus and StorageOS providing the best performance. The document aims to help users determine the right storage solution for their specific Kubernetes applications and workloads.
The document provides an overview of cloud infrastructure architecture and security. It discusses key cloud security concepts like the shared responsibility model between cloud providers and customers. It also covers common cloud security categories such as identity and access management, data security, compliance with regulations, and security best practices and frameworks.
PHPIDOL#80: Kubernetes 101 for PHP Developer. Yusuf Hadiwinata - VP Operation...Yusuf Hadiwinata Sutandar
Sesi Terakhir sebelum libur PHPID-OL memasuki Bulan Puasa Ramadhan. Kita akan ketemu lagi 19 April 2021.
Topik penutup yang akan diisi oleh Om Yusuf Hadiwinata, Praktisi Teknologi terkemuka dan ternama di lingkungan Industri IT Indonesia...
Ciyaooo.... Maju Terus PHP Indonesia
Link Video: https://fb.me/e/hzWbd0FeW
Building Monitoring Framework
Thnks you Ralali, DevOps Indonesia, IDDevops Member dan para peserta event meetup malam ini
Presentasi bisa di akses di: https://www.slideshare.net/isnuryusuf/devops-indonesia-presentation-monitoring-framework
Video Record bisa di lihat di:
- https://www.youtube.com/watch?v=cyopfqHxMqU
- https://www.youtube.com/watch?v=V_HYxs6IUxM
This document provides information on database security. It discusses how database security protects confidentiality, integrity and availability of databases. It also discusses the importance of database security to prevent data loss or compromise. Some of the largest data breaches in 2018 are summarized, including breaches of Aadhaar and Facebook that exposed over 1 billion and 87 million records respectively. Common attack vectors and frameworks for implementing database security are referenced. Finally, the document outlines a methodology for implementing proven database security practices around inventory, testing, compliance, eliminating vulnerabilities, enforcing least privileges, monitoring for anomalies, data protection, backup plans, and responding to incidents.
Cloud computing is a model for enabling network access to configurable computing resources that can be rapidly provisioned with minimal management effort. There are differing definitions from NIST, Wikipedia, and others. Cloud computing provides utility computing, service-oriented architecture, and service level agreements. Key characteristics include scalability, availability, manageability, accessibility, performance, and enabling techniques like virtualization. The three main cloud models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud deployment models include public, private, hybrid, and community clouds. Cloud computing provides advantages like cost savings and scalability but also risks like reliance on internet and potential security issues.
Buku ini memberikan tips dan informasi bagaimana berselancar di dunia siber dengan aman untuk masyarakat. Buku ini akan digunakan dalam roadshow sosialisasi keamanan siber di 5 kota oleh Badan Siber dan Sandi Negara agar masyarakat mengetahui ancaman di dunia siber dan cara menghadapinya.
CTI Group has several job vacancies across various departments and experience levels. Positions range from entry-level roles for fresh graduates like marketing representatives and sales specialists to middle and senior level roles such as managers, team leaders, and specialists in areas including accounting, procurement, software support, and security. Thank you for your consideration.
The document discusses DevSecOps and security practices in DevOps. It introduces DevSecOps and reasons for adopting it, including how security has traditionally been seen as inhibiting to DevOps efforts. It then outlines ways to manage risk in a DevOps environment by securing assets, development processes, operations, and APIs. Specific techniques are discussed for each area, such as container scanning, threat modeling tools, and static/dynamic application security testing options.
OCI as the most important organization in the container ecosystem driving vendor neutrality, standardization and making this amazing technology accessible globally
This document provides an overview of containers and Docker for automating DevOps processes. It begins with an introduction to containers and Docker, explaining how containers help break down silos between development and operations teams. It then covers Docker concepts like images, containers, and registries. The document discusses advantages of containers like low overhead, environment isolation, quick deployment, and reusability. It explains how containers leverage kernel features like namespaces and cgroups to provide lightweight isolation compared to virtual machines. Finally, it briefly mentions Docker ecosystem tools that integrate with DevOps processes like configuration management and continuous integration/delivery.
The document discusses the origins and evolution of OpenStack, an open-source cloud computing platform. It began in 2010 as a collaboration between NASA and Rackspace, building upon NASA's earlier Nebula platform. Over time, major Linux vendors like Red Hat, Ubuntu, and SUSE began developing their own OpenStack distributions to simplify deployment and management. The "Big Three" distributions take different approaches, and the market share between them has continued growing as OpenStack adoption increases among developers and enterprises. Key factors for OpenStack success include the supported virtualization technologies, ease of deployment, ongoing operations, reliability, and community support behind each distribution.
Mono-spaced Bold
This denotes words and phrases that will or could be input on a system, including shell commands, file
names and paths. It is also used to highlight key caps and key-combinations you can press as shortcuts.
For example:
To see the contents of the file my_next_bestselling_novel in your current working
directory, enter the cat my_next_bestselling_novel command at the shell
prompt and press Enter to execute the command.
A useful shortcut for the above command (and many others) is Tab completion. Type cat
my_ and then press the Tab key. Assuming there are no other files in the current directory
which begin with 'my_', the rest of the file name will be entered on the command line for
you.
(If other file names begin with 'my_', pressing the Tab key expands the file name to the
point the names differ. Press Tab again to see all the files that match. Type enough of the
file name you want to include on the command line to distinguish the file you want from
the others and press Tab again.)
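The completion behaviour described above can be approximated with ordinary shell globbing; the following sketch (file names taken from the example above, created in a throwaway directory) shows what the shell is matching when you press Tab:

```shell
# Mimic what Tab completion matches: list the files whose names begin with
# "my_". Example file names are those used in the text above.
dir=$(mktemp -d)
touch "$dir/my_next_bestselling_novel" "$dir/my_notes"
# The glob my_* expands to every matching file name, sorted:
matches=$(cd "$dir" && printf '%s\n' my_*)
echo "$matches"
rm -rf "$dir"
```

With two matches, the shell completes only up to the point where the names differ, exactly as the paragraph above describes.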
The above includes a file name, a shell command and two key caps, all distinctly presented in Mono-
spaced Bold and all distinguishable thanks to context.
Key-combinations can be distinguished from key caps by the hyphen connecting each part of a key-
combination. For example:
Press Enter to execute the command.
Press Ctrl-Alt-F1 to switch to the first virtual terminal. Press Ctrl-Alt-F7 to return to your
X-Windows session.
The first sentence above highlights a specific key cap to press. The second highlights two sets of three
keys, each set pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values
mentioned within a paragraph will be presented as above, in Mono-spaced Bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for
directories. Each class has its own associated set of permissions.
In PDF and paper editions, a specific typeface is used: 12-point Liberation Mono Bold.
This typeface is also used in HTML editions, if the Liberation Fonts are installed on your system: an
equivalent mono-spaced bold face is used otherwise. Note: Red Hat Enterprise Linux 5 and later
include the Liberation Fonts set by default.
Proportional Bold
This style denotes words or phrases you will encounter on a system. This includes application names;
dialogue box text; labelled buttons; check-box and radio button labels; menu titles and sub-menu titles.
For example:
Choose System > Preferences > Mouse from the main menu bar to launch the Mouse
Preferences utility. In the Buttons tab, click the Left-handed mouse check box and click
Close to switch the primary mouse button from the left to the right (making the mouse
suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications > Accessories >
Character Map from the main menu bar. Next, choose Search > Find… from the
Character Map menu bar, type the name of the character in the Search field and click
Next. The character you sought will be highlighted in the Character Table. Double-click
this highlighted character to place it in the Text to copy field and then click the Copy
button. Now switch back to your document and choose Edit > Paste from the gedit menu
bar.
The above text includes application names; system-wide menu names and items; application-specific
menu names; and buttons and text found within a GUI interface, all distinctly presented in Proportional
Bold and all distinguishable by context.
Note the > shorthand used to indicate traversal through a menu and its sub-menus. This is to avoid the
verbose and difficult-to-follow 'Select Mouse from the Preferences sub-menu in the System menu of
the main menu bar' approach.
In PDF and paper editions, a specific typeface is used: 12-point Liberation Sans Bold. This typeface
is also used in HTML editions, if the Liberation Fonts are installed on your system: an equivalent
proportional bold face is used otherwise. Note: Red Hat Enterprise Linux 5 and later include the
Liberation Fonts set by default.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether Mono-spaced Bold or Proportional Bold, the switch to Italics indicates replaceable or variable
text. Italics denotes text you do not input literally or displayed text that changes depending on
circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell
prompt. If the remote machine is example.com and your username on that machine is
john, you type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For
example, to remount the /home file system, the command is mount -o remount
/home.
To see the version of a currently installed package, use the rpm -q package command. It
will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and
release. Each word is a placeholder, either for text you would replace with specific examples when
entering a command or for specific text that would be displayed by the system.
In PDF and paper editions, specific typefaces are used: 12-point Liberation Mono Bold Italic and 12-
point Liberation Sans Bold Italic. These typefaces are also used in HTML editions, if
the Liberation Fonts are installed on your system: equivalent mono-spaced and proportional bold italic
faces are used otherwise. Note: Red Hat Enterprise Linux 5 and later include the Liberation Fonts set
by default.
Proportional Italic
Aside from standard usage as a marker for the formal title of a work (e.g. a book title), italic is used to
denote the first time a new and important term is used. For example:
When the Apache HTTP Server accepts requests, it dispatches child processes or threads to
handle them. This group of child processes or threads is known as a server-pool. Under
Apache HTTP Server 2.0, the responsibility for creating and maintaining these server-pools
has been abstracted to a group of modules called Multi-Processing Modules (MPMs).
Unlike other modules, only one module from the MPM group can be loaded by the Apache
HTTP Server.
In PDF and paper editions, a specific typeface is used: 12-point Liberation Italic. This typeface is also
used in HTML editions, if the Liberation Fonts are installed on your system: an equivalent proportional
italic face is used otherwise. Note: Red Hat Enterprise Linux 5 and later include the Liberation Fonts
set by default.
1.2. Pull-quote Conventions
Two, commonly multi-line, data types are set off visually from the surrounding text.
Output sent to a terminal is set in Mono-spaced Roman and presented thus:
books Desktop documentation drafts mss photos stuff svn
books_tests Desktop1 downloads images notes scripts svgs
Source-code listings are also set in Mono-spaced Roman but are presented and highlighted as
follows:
package org.jboss.book.jca.ex1;
import javax.naming.InitialContext;
public class ExClient
{
public static void main(String args[])
throws Exception
{
InitialContext iniCtx = new InitialContext();
Object ref = iniCtx.lookup("EchoBean");
EchoHome home = (EchoHome) ref;
Echo echo = home.create();
System.out.println("Created Echo");
System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
}
}
As with the in-line conventions, a specific typeface is used in PDF and print editions: 12-point
Liberation Mono. Again, as with in-line styles, if the Liberation Fonts are installed on your
system, the same typeface is used in HTML editions: an equivalent mono-spaced roman face will be
displayed otherwise.
• A valid subscription to Red Hat Network is required, including an entitlement to the Red Hat HPC
channel.
• Red Hat HPC creates a private DNS zone for all machines under its control. The name of this
zone must NOT be the same as any other DNS zone within the organization where the cluster is
installed.
Chapter 3. Installation Procedure
3.1. Recommended Network Topology
3.2. Starting the Install
3.3. Upgrading an Existing Installation
Verify that the installer node meets the prerequisites.
Register on Red Hat Network and subscribe to the appropriate channels.
3.1. Recommended Network Topology
In its default configuration, the Red Hat HPC Solution treats one Network interface of the installer
node as a public interface on which it imposes a standard firewall policy, while other interfaces are
treated as trusted, private interfaces to the cluster nodes. While this can be easily adapted to the
customer's preferences, it is the recommended network topology for an installation of the Red Hat HPC
Solution. It provides clear separation of the public network from the private cluster-internal network(s).
In that topology, the installer node acts as a gateway and firewall, protecting the cluster nodes. This
allows a relaxed set of firewall and security settings within the private cluster network, while still
maintaining secure operations.
Please consider the installation notes below, when planning your network topology.
For improved security, Red Hat recommends enabling the firewall on the external interfaces of the
installer node and maintaining a clean separation between the public networks and the private cluster
network. Customers are also advised that optional monitoring tools like Nagios®, Cacti®, or Ntop
disclose details of the network topology and should only be accessible to authorized users over a secure
connection. Red Hat recommends the use of the encrypted https protocol rather than plain http
connections for these services.
3.2. Starting the Install
Log into the machine as root and install the Red Hat HPC bootstrap RPM:
# yum install pcm mod_ssl
After installing the PCM RPM, source kusuenv script to set up the PCM environment:
# source /etc/profile.d/kusuenv.sh
Run the installation script:
# /opt/kusu/sbin/pcm-setup
The script detects your network settings and provides a summary per NIC:
NIC: eth0
============================================================
Device = eth0 IP = 172.25.243.44
Network = 172.25.243.0 Subnet = 255.255.255.0
mac = 00:0C:29:C4:61:06 Gateway = 172.25.243.2
dhcp = False boot = 1
Note
Red Hat HPC can only provision over statically configured NICs, not over DHCP-configured NICs.
The PCM installer asks if you want to provision on all networks and, if not, which ones to
provision on.
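The note above implies a quick pre-flight check: confirm an interface is statically configured before offering it for provisioning. A minimal sketch, run against a generated sample ifcfg file rather than the real files under /etc/sysconfig/network-scripts:

```shell
# Sketch: decide whether an interface is usable for provisioning by checking
# its BOOTPROTO setting. A sample ifcfg file is created here for illustration.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=172.25.243.44
NETMASK=255.255.255.0
EOF
if grep -qi '^BOOTPROTO=dhcp' "$cfg"; then
  verdict="eth0: DHCP-configured, cannot provision"
else
  verdict="eth0: statically configured, OK to provision"
fi
echo "$verdict"
rm -f "$cfg"
```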
Red Hat HPC creates a separate DNS zone for the nodes it installs. The tool prompts for this zone.
Warning
Do not use the same DNS zone as any other in your organization. Using an existing zone causes DNS
name resolution problems.
Do not use ‘localhost’ as the hostname. This causes conflicts with the Lava kit as ‘localhost’ will
resolve to the loopback device and not the NIC.
Note
The Red Hat HPC Solution tries to generate IP addresses for the individual Compute Nodes by
incrementing from the Installer Node's IP address in the private cluster network. The Installer Node
should therefore have a low IP address in that network with a free range of addresses following it,
or the user must adjust the Starting IP for provisioned compute nodes using the “netedit” tool.
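The incrementing behaviour described in this note can be sketched as follows. This is an illustration only, using a sample installer IP and a simple last-octet increment; the actual address allocator may behave differently:

```shell
# Illustration: derive candidate compute-node IPs by incrementing the last
# octet of a sample installer IP (the real allocator may differ).
installer_ip=172.25.243.44
prefix=${installer_ip%.*}   # network part: 172.25.243
last=${installer_ip##*.}    # last octet: 44
i=1
while [ "$i" -le 3 ]; do
  echo "compute-0$i -> $prefix.$((last + i))"
  i=$((i + 1))
done
```

For the sample address, this yields 172.25.243.45, .46, and .47 for the first three compute nodes, which is why the installer needs a free range above its own address.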
The Red Hat HPC Solution stores a copy of the OS media and installation images. The PCM installer
prompts for the location of the directory to store the operating system. The default is /depot. A
symbolic link to /depot is created if another location is used.
The PCM installer builds a local repository using the OS media. This repository is used by PCM when
provisioning compute nodes.
The PCM installer asks for the physical DVD or CDs (in the optical drive physically connected to the
installer host), a directory containing the contents of the OS media, or an ISO file providing the media.
Note
If the file system option is used to provide the OS media, please select ‘N’ when prompted for
additional disks. After the OS media is successfully imported (approximately 5-10 minutes when
importing from a physical optical drive) and the local PCM repository created, a sequence of scripts
runs to configure the PCM cluster for the installation.
The default firewall rules for a RHEL installation block the ports needed to provision nodes. The
provided script opens the ports necessary for provisioning. It also configures Network Address
Translation (NAT) on the
installer node, so that the provisioned nodes can access the non-provisioning networks connected to the
installer on other interfaces.
To configure the firewall, run the script as root:
# /opt/kusu/bin/kusurc /etc/rc.kusu.d/firstrun/S02KusuIptables.rc.py
Once the installation has completed, the following message will appear:
Congratulations! The base kit is installed and configured to provision on:
Network 1.2.3.4 on interface ethX
The installer node is now ready to provision other nodes in the cluster.
Prior to installing the compute nodes it is best to add all the desired kits, and customize the node
groups. If the kits are added after the Compute Nodes have been installed it is necessary to run the
following command to get Nagios® and Cacti® to display the nodes in their respective web interfaces:
# addhost -u
This causes re-generation of many of the application configuration files.
3.3. Upgrading an Existing Installation
Upgrading an existing Red Hat HPC cluster is a two-step process. First, before the base kit can be
updated, the existing addon kits in the RHHPC system must be removed. This is required because some
of the older kits are not guaranteed to be compatible with RHHPC 5.5. Follow these steps to remove
the addon kits:
1. Remove the kit components from the nodegroup. Run “ngedit” and select the installer node
group to edit. Go to the component screen. De-select the components of the kits you wish to
upgrade. Continue and apply the changes.
2. Run the above step for all nodegroups.
3. Remove the kit associations from the repository
# repoman -e -k<kitname> -r<reponame>
Optionally, to list repositories and associated kits, the following command can be used:
# repoman -l
4. Update the repository after removing kit associations:
# repoman -u -r<reponame>
5. Remove older kits from the system:
# kitops -e -k<kitname>
Optionally, to list installed kits, the following command can be used:
# kitops -l
Second, update the base kit and reinstall the other addon kits. The installer node contains a Red Hat
repository for RHEL 5. This repository must be updated prior to updating the kits or running a `yum
update` on the master installer. If the master installer contains packages that are newer than the
packages in the Kusu repository, there can be dependency problems when installing some kits. The base
kit must be updated prior to reinstalling the other kits. The steps below outline how to update the base
kit on the installer.
1. Ensure that the installer node can connect to Red Hat Network (RHN).
2. Update the “pcm” package:
# yum update pcm
3. Source the environment:
# source /etc/profile.d/kusuenv.sh
4. Run the PCM upgrade script. This will update the base kit from RHN, and rebuild the
repository for installing nodes.
# pcm-setup -u
Upon completion of the command, the base kit will be updated. If desired the other kits can be
updated.
5. Update the installer node and the compute node repository (see Chapter 4 for details):
# repopatch -r rhel5_x86_64
6. Update the kit downloaders by running the following command for each downloader you wish
to upgrade:
# yum update pcm-kit-<kitname>
7. Follow the instructions in chapter 5 for installing kits.
NOTE: There is a known issue in upgrading the Cacti kit from RHEL 5 Update 2 to RHEL 5 Update 3.
The Cacti user must be removed prior to adding the new Cacti kit. Use userdel cacti to remove
the user.
NOTE: There is a known issue whereby the pcm-setup -u command does not proceed and fails with
the message “PCM setup script does not seem to have run in this machine, cannot upgrade”. Run the
following as a workaround:
# touch /var/lock/subsys/pcm-setup
NOTE: If the installer node was not configured with a suitable hostname in a previous RHHPC
release, run the following to change the hostname before upgrading:
# /opt/kusu/sbin/kusu-net-tool hostname <new FQDN hostname>
Chapter 4. Updating the Installer Node and the Compute Node
Repository
Prior to updating the repository it is recommended that a snapshot (copy) of the repository be made. If
there are any application issues with the updates the copy can be used:
# repoman -r rhel5_x86_64 -s
To update the compute nodes in a Red Hat HPC cluster use the following command:
# repopatch -r rhel5_x86_64
The repopatch tool downloads all of the required updates for the operating system and installs them
into the repository for the compute nodes. repopatch displays an error if it is not properly
configured. For example:
# repopatch -r rhel5_x86_64
Getting updates for rhel-5-x86_64. This may take awhile…
Unable to get updates. Reason: Please configure
/opt/kusu/etc/updates.conf
Edit the /opt/kusu/etc/updates.conf file adding your username and password for Red
Hat Network to the [rhel] section of the file, for example:
[fedora]
url=http://download.fedora.redhat.com/pub/fedora/linux/
[rhel]
username=
password=
url=https://rhn.redhat.com/XMLRPC
yumrhn=https://rhn.redhat.com/rpc/api
After configuring the /opt/kusu/etc/updates.conf file, repopatch downloads all of the
updates from Red Hat Network and creates an update kit, which is then associated with the
rhel-5-x86_64 repository using ngedit.
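Before running repopatch, it can be useful to confirm that the [rhel] section actually carries credentials. A small sketch that checks a generated sample file (the real file is /opt/kusu/etc/updates.conf):

```shell
# Sketch: verify the [rhel] section of an updates.conf-style file has a
# non-empty username and password. A sample file is generated here.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[rhel]
username=rhnuser
password=secret
url=https://rhn.redhat.com/XMLRPC
EOF
# Pull values from the [rhel] section only (track the current section in s):
user=$(awk -F= '/^\[/{s=$0} s=="[rhel]" && $1=="username"{print $2}' "$conf")
pass=$(awk -F= '/^\[/{s=$0} s=="[rhel]" && $1=="password"{print $2}' "$conf")
if [ -n "$user" ] && [ -n "$pass" ]; then
  echo "updates.conf: [rhel] credentials present"
else
  echo "updates.conf: set username and password in the [rhel] section"
fi
rm -f "$conf"
```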
repopatch automatically associates the update kit with the correct repository. View the list of update
kit components from ngedit on the Components screen and list the available update kits using the
kitops command.
Once repopatch has retrieved the updated packages and rebuilt the repository, the compute nodes
can be updated. This can either be done by reinstalling the compute nodes:
# boothost -r -n {Name of Node group}
or by updating their packages:
# cfmsync -u -n {Name of Node group}
The cfmsync command causes the compute nodes to start updating their packages from the
repository they installed from.
Note
Remember that yum is used to update the installer node directly from Red Hat Network or other yum
repositories. The repopatch command updates the repositories used to provision compute nodes,
and the cfmsync command is used to signal the compute nodes to update.
The repopatch command can take up to a few hours to run, depending on the delta of updates it picks
up and also on the network latency.
Chapter 5. Installing Additional Red Hat HPC Kits
Additional software tools such as Nagios® and Cacti® are packaged as software kits. Software
packaged as a kit is easier to install onto a Red Hat HPC cluster. A kit contains RPMs for the
software, along with RPMs for metadata and configuration files.
Note
As described in the previous section, you may be required to update the repositories.
To install Cacti® onto the Red Hat HPC cluster:
# yum install pcm-kit-cacti
# /opt/kusu/sbin/install-kit-cacti
To install Nagios® onto the Red Hat HPC cluster:
# yum install pcm-kit-nagios
# /opt/kusu/sbin/install-kit-nagios
To see what kits are available, use:
# yum search pcm-kit
The yum commands above download the respective kit downloaders from the Red Hat Network. The
kit downloaders are distinguished by the pcm-kit-* prefix. In the event of a download problem, you
can safely re-run the kit downloaders.
Included in the kit downloader RPM is an installation script that adds the kit to the Red Hat HPC
cluster repository and rebuilds the cluster repository.
Every kit that is downloaded from Red Hat Network has a corresponding script used to install the kit
into the cluster repository.
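The per-kit pattern described above (a pcm-kit-<name> downloader RPM plus a matching install script) can be expressed as a small dry-run helper. The function below only prints the commands it would run; it does not assume yum or the PCM tooling is present:

```shell
# Dry-run sketch of the two-step kit pattern: download the kit downloader
# RPM, then run its install script. Prints the commands instead of running
# them.
kit_commands() {
  kit="$1"
  echo "yum install pcm-kit-$kit"
  echo "/opt/kusu/sbin/install-kit-$kit"
}
cmds=$(kit_commands cacti)
echo "$cmds"
```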
Chapter 6. Viewing Available Red Hat HPC Kits
Use the following command to query the kits available from Red Hat Network:
# yum list pcm-kit-*
At the time of writing, the following kits are available:
Name               Description
pcm-kit-cacti      A reporting tool
pcm-kit-lava       Open source LSF, a batch scheduling and queuing system
pcm-kit-nagios     A network monitoring tool
pcm-kit-ntop       A network monitoring tool
pcm-kit-rhel-java  The Java Runtime
pcm-kit-hpc        A collection of MPIs (MPICH 1,2, MVAPICH 1,2 and OpenMPI), math
                   libraries (ATLAS, BLACS, SCALAPACK), and benchmarking tools
pcm-kit-ganglia    Another system monitoring tool
pcm-kit-rhel-ofed  The OFED stack
Table 6.1. Available Kits
Other non-Open Source kits are available from http://my.platform.com
Chapter 7. Verifying the Red Hat HPC install
Once the installer node is successfully configured, the next step is to verify that all software components
are installed and working correctly. The following steps can be used to verify the Red Hat HPC
installation.
Procedure 7.1. Verifying the HPC Install
1. Start the web browser (Firefox). The cluster homepage is displayed.
2. Use the dmesg command to check for hardware issues.
3. Check all network interfaces to see if they are configured and up.
# ifconfig -a | more
4. Verify that the routing table is correct.
# route
Ensure that the following system services are running:
Service      Command
Web Server   service httpd status
DHCP         service dhcpd status
DNS          service named status
Xinetd       service xinetd status
MySQL        service mysqld status
NFS          service nfs status
Table 7.1. Running System Services
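The checks in Table 7.1 can be scripted as a simple loop. The sketch below is a dry run that prints each status command instead of executing it, since the services exist only on a real installer node:

```shell
# Dry-run sketch of the Table 7.1 service checks: print the status command
# for each required service rather than invoking it.
cmds=$(for svc in httpd dhcpd named xinetd mysqld nfs; do
  echo "service $svc status"
done)
echo "$cmds"
```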
5. Run some basic Red Hat HPC commands.
List the installed repositories
# repoman -l
List the installed kits
# kitops -l
Run the Node Group Editor
# ngedit
Run the Add Host tool
# addhost
6. Check that Cacti is installed (optional; Cacti is only available if the Cacti kit has been installed)
From the web browser enter the following URL:
http://localhost/cacti
Log in to Cacti with username: admin, password: admin
7. Check that Nagios is installed (optional; Nagios is only available if the Nagios kit was installed)
From the web browser enter the following URL:
http://localhost/nagios
Log in to Nagios with username: admin, password: admin
Chapter 8. Adding Nodes to the Cluster
The addhost tool adds nodes to a Red Hat HPC cluster.
addhost listens on a network interface for nodes that are PXE booting and adds them to a specified
node group.
Node groups are templates that define common characteristics such as network, partitioning, operating
system and kits for all nodes in a node group.
Open a terminal window or login to the installer node as root to add nodes.
Procedure 8.1. Adding Nodes to the Cluster
1. Run addhost
# addhost
2. Select the node group for the new nodes. Normally compute nodes are added to the
compute-rhel node group:
3. Select the network interface to listen on for new PXE booted nodes.
6. Boot the nodes you want to add to the cluster. Wait a few seconds between powering up nodes
so that the machines are named sequentially in the order they are started.
7. When a node is successfully detected by addhost, a line corresponding to the node appears
in the installing node status window.
8. Exit addhost when Red Hat HPC has detected all nodes. The installing node status screen
does not update to indicate that a node has finished installing.
Chapter 9. Managing Node Groups
9.1. Adding RPM Packages in RHEL to Node Groups
9.2. Adding RPM Packages not in RHEL to Node Groups
9.3. Adding Kit Components to Node Groups
Red Hat HPC cluster management is built around the concept of node groups. Node groups are a
powerful template mechanism that allows the cluster administrator to define common shared
characteristics among a group of nodes. Red Hat HPC ships with a default set of node groups for
installer nodes, packaged installed compute nodes, diskless compute nodes and imaged compute nodes.
The default node groups can be modified or new node groups can be created from the default node
groups. All of the nodes in a node group share the following:
• Node Name format
• Operating System Repository
• Kernel parameters
• Kits and components
• Network Configuration and available networks
• Additional RPM packages
• Custom scripts (for automated configuration of tools)
• Partitioning
A typical HPC cluster is created from a single installer node and many compute nodes. Normally
compute nodes are exactly the same as each other with a few exceptions, like the node name or other
host specific configuration files. A node group for compute nodes makes it easy to configure and
manage 1 or 100 nodes all from the same node group. The ngedit command is a text user
interface (TUI) run by the cluster administrator to create, delete, and modify node groups. The ngedit
tool modifies cluster information in the Red Hat HPC database and also automatically calls other tools
and plugins to perform actions or update configuration. For example, modifying the set of packages
associated with a node group in ngedit automatically calls cfm (the configuration file manager) to
synchronize all of the nodes in the cluster, using yum to add and remove packages. Modifying the
partitioning of a node group, in contrast, notifies the administrator that the nodes in the node group
must be re-installed for the new partitioning to take effect. The Red Hat HPC
database keeps track of the node group state, thus several changes can be made to a node group
simultaneously and the physical nodes in the group can be updated immediately or at a future time
using the cfmsync command.
9.1. Adding RPM Packages in RHEL to Node Groups
Run the following steps to add RPM Packages in RHEL to node groups:
Open a Terminal and run the node group editor as root.
# ngedit
Select the compute-rhel node group and move through the Text User Interface screens by pressing F8
or by choosing next on the screen. Stop at the Optional Packages screen.
Additional RPM packages are added by selecting the package in the tree list. Pressing the space bar
expands or contracts the list to display the available packages.
Packages are sorted alphabetically by default. The list of packages can be sorted by Red Hat groups,
just choose Toggle View to re-sort the packages.
Select the additional packages using the spacebar. When a package is selected an asterisk displays
beside the package name.
Package dependencies are automatically handled by yum. If any selected package requires other
packages they are automatically included when the package is installed on the cluster nodes.
ngedit automatically calls cfm to synchronize the nodes and install new packages but, by design,
does not automatically remove packages from nodes in the cluster. If required, pdsh and rpm can be
used to completely remove packages from the RPM database on each node in the cluster.
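As a hedged sketch of that clean-up (the node-group flag and package name below are assumptions about your pdsh setup; the command is only printed, not executed):

```shell
#!/bin/sh
# Hypothetical sketch: remove package "foo" from every node in a node group
# using pdsh + rpm. The -g group flag assumes a pdsh genders configuration;
# we echo the command so the sketch is safe to run outside a cluster.
PKG=foo
GROUP=compute-rhel
echo pdsh -g "$GROUP" rpm -e "$PKG"
```

On a real cluster, drop the echo and substitute the actual package name and node group.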
9.2. Adding RPM Packages not in RHEL to Node Groups
Red Hat HPC maintains a repository containing all of the RPM packages that ship with Red Hat
Enterprise Linux. This repository is sufficient for most customers. RPM packages that are not in Red
Hat Enterprise Linux can also be added to a Red Hat HPC repository by placing the RPM packages into
the appropriate contrib directory under /depot. For example:
Procedure 9.1. Adding RPM Packages not in RHEL to Node Groups
1. Start with the RPMs that are not in Red Hat Enterprise Linux or in a Red Hat HPC Kit
2. Create the appropriate subdirectories in /depot/contrib:
# mkdir -p /depot/contrib/rhel/5/x86_64
# cp foo.rpm /depot/contrib/rhel/5/x86_64/foo.rpm
3. Rebuild the Red Hat HPC repository with repoman:
# repoman -u -r rhel5_x86_64
4. Wait for the repository and associated images to rebuild; this takes some time.
5. Run ngedit and navigate to the Optional Packages screen.
6. Select the new package by navigating within the package tree and using the spacebar to select.
7. Continue through the ngedit screens and either allow ngedit to synchronize the nodes
immediately or perform the node synchronization manually with cfmsync -p at a later time.
Example: selecting an RPM package that is not included in Red Hat Enterprise Linux
Contributions can be added to more than one Red Hat HPC repository; the directory structure is:
/depot/contrib/<os_name>/<version>/<architecture>
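A minimal sketch of staging a package into that layout, using a scratch directory in place of the real /depot so it is safe to run anywhere (foo.rpm is a placeholder for a real third-party package):

```shell
#!/bin/sh
# Sketch of the contrib layout /depot/contrib/<os_name>/<version>/<architecture>.
# A temporary directory stands in for /depot; on the installer node you would
# use /depot itself and then rebuild the repository with repoman.
DEPOT=$(mktemp -d)
mkdir -p "$DEPOT/contrib/rhel/5/x86_64"
touch "$DEPOT/contrib/rhel/5/x86_64/foo.rpm"   # placeholder RPM
find "$DEPOT/contrib" -type f                  # lists the staged package
rm -rf "$DEPOT"
```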
9.3. Adding Kit Components to Node Groups
Adding kit components to nodes in a node group is very similar to adding additional RPM packages.
1. Open a Terminal and run ngedit
2. Press F8 (or choose Next) and proceed to the Components screen.
3. Enable components on a per-node group basis.
Each Red Hat HPC kit installs an application or a set of applications. The kit also contains components
which are meta-RPM packages designed for installing and configuring applications within the cluster.
By enabling the appropriate components, it is easy to configure all nodes in a node group.
For example, the Cacti kit contains two components: component-cacti and
component-cacti-monitored-node. component-cacti installs and configures Cacti, and sets
up the web pages and the connection to the database. This component is normally installed on the
cluster installer node or any other node (or set of nodes) designated as the management node.
The other component in the Cacti kit, component-cacti-monitored-node, contains the Cacti
agent code that runs on compute nodes in the cluster.
Most Red Hat HPC Kits come configured with automatic node group association and component
selection. In the case of the Cacti kit, all nodes within the compute-rhel node group have the
component-cacti-monitored-node component enabled. This means these nodes are
monitored by Cacti by default. The component does not need to be explicitly enabled as the Cacti kit
does this automatically.
As another example, the Platform Lava kit automatically associates the Lava master with the installer
node group and the Lava compute nodes with the compute-rhel node group. Installing the Lava kit
automatically sets up and creates a usable Lava cluster without needing any additional configuration.
Chapter 10. Synchronizing Files in the Cluster
HPC clusters are built from individual compute nodes and all of these nodes must have copies of
common system files such as /etc/passwd, /etc/shadow, /etc/group and others.
Red Hat HPC contains a file synchronization service called CFM (Configuration File Manager).
CFM runs on each compute node in the cluster and when new files are available on the installer node a
message is sent to all of the nodes notifying them that files are available. Each compute node connects
to the installer node and copies the new files using the HTTP protocol. All files to be synchronized by
CFM are located in the directory tree /etc/cfm/<node group>.
The /etc/cfm directory contains several node group directories, such as compute-diskless and
compute-rhel. Each of those directories holds a directory tree in which the
/etc/cfm/<node group> directory represents the root. The /etc/cfm/compute-rhel/etc
directory, for example, contains several files or symbolic links to system files.
Creating symbolic links for the files in CFM allows the compute nodes to be automatically
synchronized with system files on the installer node. /etc/passwd and /etc/shadow are two
examples where symlinks are used.
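The symlink idea can be sketched in a scratch tree that stands in for /etc/cfm/compute-rhel (the file names below are stand-ins, not the real system files):

```shell
#!/bin/sh
# Simulate publishing the installer node's passwd file through CFM via symlink.
# ROOT stands in for the real filesystem; passwd.master stands in for /etc/passwd.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/cfm/compute-rhel/etc"
printf 'root:x:0:0:root:/root:/bin/bash\n' > "$ROOT/passwd.master"
ln -s "$ROOT/passwd.master" "$ROOT/cfm/compute-rhel/etc/passwd"
cat "$ROOT/cfm/compute-rhel/etc/passwd"   # CFM reads through the link
rm -rf "$ROOT"
```

Because CFM reads through the link, any change to the installer node's own file is picked up on the next synchronization without copying it into the CFM tree by hand.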
Adding files to CFM is simple: create all of the directories and subdirectories for the file, then place
the file in the appropriate location.
Existing files can also have a <filename>.append file. The contents of a <filename>.append
file are automatically appended to the existing <filename> file on all nodes in the node group.
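The append behaviour can be simulated in a scratch directory (the hosts entries below are made-up examples):

```shell
#!/bin/sh
# Simulate CFM's <filename>.append behaviour: the contents of hosts.append
# are appended to the existing hosts file on each node in the node group.
DIR=$(mktemp -d)
printf '127.0.0.1 localhost\n' > "$DIR/hosts"          # existing node file
printf '10.1.1.50 storage01\n' > "$DIR/hosts.append"   # lines to append
cat "$DIR/hosts.append" >> "$DIR/hosts"                # what CFM does per node
cat "$DIR/hosts"
rm -rf "$DIR"
```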
Use the cfmsync command to notify all of the nodes in all node groups or nodes in a single node
group. For example:
# cfmsync -f -n compute-rhel
Synchronizes all files in the compute-rhel node group.
# cfmsync -f
Synchronizes all files in all node groups.
For more information on cfmsync, see the man page.
Chapter 11. Note on ABI Stability
Red Hat's commitment to provide binary runtime compatibility, as described at
http://www.redhat.com/security/updates/errata/, does not fully apply to the Red Hat HPC Solution
cluster middleware.
Red Hat HPC Solution, as an add-on to Red Hat Enterprise Linux, closely tracks the upstream projects
in order to provide a maximum level of enablement in this fast-moving area. As a consequence, Red
Hat and Platform Computing, as an exception from the general practice in Red Hat Enterprise Linux,
can only preserve API/ABI compatibility across minor releases to the degree that the upstream projects
do. For this reason, applications that build on top of the HPC Solution stack might require
recompilation or even source-level code changes when moving from one minor release of Red Hat
Enterprise Linux to a newer one.
This is not generally required for the underlying Enterprise Linux software stack, with the exception of the
OFED packages specified in the Red Hat Enterprise Linux release notes at
http://www.redhat.com/docs/manuals/enterprise/.
Chapter 12. Known Issues
• Summary: pcm-setup -u fails to upgrade the system with the message "PCM setup
script does not seem to have run in this machine, cannot upgrade"
Details: RHHPC uses a lock file to record whether the system has been installed. When
upgrading from older RHHPC editions, the existence of this lock file is used to determine whether an
upgrade or a fresh install is required. If this file has been removed, pcm-setup -u does not
trigger the upgrade correctly.
Workaround: Run the following command before re-running 'pcm-setup -u':
# touch /var/lock/subsys/pcm-setup
• Summary: After upgrading the system and removing and installing the updated cacti kit, the
graphs do not display properly.
Details: The cacti user's home directory was not created properly by RHHPC 5.1's Cacti kit.
This has a knock-on effect when updating the Cacti kit, because the RPMs do not recreate the user
if the user already exists.
Workaround: Run the following command prior to running the updated install-kit-cacti kit
installer script:
# userdel cacti
• Summary: The ganglia user may not be created when installing Ganglia, causing the
services to fail.
Details: A corner case in the interaction with other add-on kits can sometimes cause the
ganglia user not to be created.
Symptoms: Running gmond and gmetad fail, user ganglia does not exist.
Workaround: Run the following commands to create the 'ganglia' user and set the correct
permissions on the directories:
# useradd -d /var/lib/ganglia -s /sbin/nologin ganglia
# cd /var/lib/ganglia/
# chown ganglia:ganglia rrds
# service gmond restart
# service gmetad restart
• Summary: Cannot access the ganglia and ntop web GUIs.
Details: After running "/opt/kusu/bin/kusurc
/etc/rc.kusu.d/firstrun/S02KusuIptables.rc.py" to configure the firewall for PCM, the ganglia and
ntop web GUIs can no longer be accessed.
Work around: Reboot the installer node or run the following command:
# service kusu start
• Summary: After upgrading the system and installing the updated ntop kit, the graphs do not
display properly.
Details: A corner case in the interaction with other add-on kits can sometimes cause the ntop
service not to start successfully.
Workaround: Run the following commands to restart the 'ntop' service:
# service ntop restart
Revision History
Revision 1.0 Kailash Sethuraman [Sep 30, 2009] Updated the installation guide for RHHPC 5.
Revision 1.1 Bin Xu [May 5, 2010] Updated the installation guide for RHHPC 5.5
Revision 1.2 Kailash Sethuraman [May 13, 2010] Minor updates to language/wording
Revision 1.3 Bin Xu [May 24, 2010] Updated the upgrading guide