NFS allows remote hosts to mount file systems over a network as if they were local. It uses RPC (typically over TCP) to authenticate clients and grant access to exported file shares based on the configuration in /etc/exports. Administrators can start and stop the NFS server and related services with the service command to export resources from centralized servers.
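To make the export configuration concrete, here is a minimal sketch of an /etc/exports entry and the commands that activate it; the path, network range, and option choices are illustrative, not taken from the document:

```text
# /etc/exports -- one exported directory per line (hypothetical path and client network)
/srv/share   192.168.1.0/24(rw,sync,no_subtree_check)

# Re-read the exports table after editing, then restart the service:
#   exportfs -ra
#   service nfs restart
```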
Network File System (NFS) allows users to access and share files located on remote computers. It builds on ONC RPC and has evolved through several versions. NFS uses a client-server model where the client makes RPC requests to access files on the NFS server's file system. This allows for flexible sharing of resources but introduces some security and performance disadvantages compared to a local file system. Overall NFS is a widely used distributed file system protocol.
The document discusses Internet protocols and IPTables filtering. It provides an overview of Internet protocols, IP addressing, firewall utilities, and the different types of IPTables - Filter, NAT, and Mangle tables. The Filter table is used for filtering packets. The NAT table is used for network address translation. The Mangle table is used for specialized packet alterations. IPTables works by defining rules within chains to allow or block network traffic based on packet criteria.
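As a brief sketch of the chain/rule model described above, the following commands (port and interface choices are illustrative) append rules to the filter table's INPUT chain and add one NAT rule:

```text
# Filter table: accept established connections and incoming SSH, drop the rest
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP

# NAT table: masquerade outbound traffic leaving via eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```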
The document summarizes the architecture of the Linux operating system. It discusses the main components of Linux including the kernel, process management, memory management, file systems, device drivers, network stack, and architecture-dependent code. The kernel is at the core and acts as a resource manager. It uses a monolithic design. Process and memory management are handled via data structures like task_struct and buddy allocation. Virtual memory is implemented using page tables. File systems organize files in a hierarchy with inodes. Device drivers interface with hardware. The network stack follows a layered model. Architecture code is separated by subdirectory.
This document discusses shell scripting and provides information on various shells, commands, and scripting basics. It covers:
- Common shells like Bourne, C, and Korn shells. The Bourne shell is typically the default and fastest, while the C shell adds features like alias and history.
- Basic bash commands like cd, ls, pwd, cp, mv, less, cat, grep, echo, touch, mkdir, chmod, and rm.
- The superuser/root user with full privileges and password security best practices.
- How login works and the difference between .login and .cshrc initialization files.
- Exiting or logging out of shells.
Linux was created in 1991 by Linus Torvalds as an open-source alternative to the proprietary Minix operating system. Some key features of Linux include its portability across different hardware, its open-source and collaborative development model, its ability to support multiple users and programs running simultaneously, its hierarchical file system, and its built-in security features like password protection. Linux also provides advantages over other operating systems like Windows by being free, allowing for custom modifications, and providing highly secure and robust servers.
This document discusses user and file permissions in Linux. It covers how every file is owned by a user and group, and how file access is defined using file mode bits. These bits determine read, write and execute permissions for the file owner, group and others. An example of a file with permissions -rw-rw-r-- is provided to demonstrate this. User accounts are configured in /etc/passwd, while passwords are securely stored in /etc/shadow. Common commands for managing users, groups, permissions and default file access (umask) are also outlined.
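The mode-bit layout can be decoded programmatically. Here is a small Python sketch using the standard stat module; the value 0o664 corresponds to the -rw-rw-r-- example above, and the umask arithmetic at the end shows how default file permissions are derived:

```python
import stat

def mode_string(mode):
    # Build the rwx triplets for owner, group, and others from the mode bits.
    out = ""
    for who in ("USR", "GRP", "OTH"):
        for what in ("R", "W", "X"):
            bit = getattr(stat, "S_I%s%s" % (what, who))
            out += what.lower() if mode & bit else "-"
    return out

print(mode_string(0o664))    # rw-rw-r--
# A umask clears bits from the default creation mode (0o666 for regular files):
print(oct(0o666 & ~0o022))   # 0o644
```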
Network File System (NFS) is a distributed file system protocol that allows users to access files over a network as if they were on a local disk. NFS was originally developed by Sun Microsystems in 1984 and is now maintained by the IETF. NFS uses RPC calls to issue requests from clients to servers and maintains a stateless design to simplify crash recovery. While easy to set up and administer, NFS has limitations regarding performance, scalability, security and file locking.
The document discusses Linux file systems. It describes that Linux uses a hierarchical tree structure with everything treated as a file. It explains the basic components of a file system including the boot block, super block, inode list, and block list. It then covers different types of file systems for Linux like ext2, ext3, ext4, FAT32, NTFS, and network file systems like NFS and SMB. It also discusses absolute vs relative paths and mounting and unmounting filesystems using the mount and umount commands.
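As a short illustration of the mount and umount commands mentioned above (the device and mount-point names are hypothetical):

```text
mount /dev/sdb1 /mnt/data                  # attach the filesystem on /dev/sdb1 at /mnt/data
mount -t nfs server:/srv/share /mnt/nfs    # mount a remote NFS export
umount /mnt/data                           # detach the filesystem again
```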
Here are the key differences between relative and absolute paths in Linux:
- Relative paths specify a location relative to the current working directory, while absolute paths specify a location from the root directory.
- Relative paths may begin with the current directory, denoted by a period (.), or simply with a file or directory name. Absolute paths always start from the root directory, denoted by a forward slash (/).
- Relative paths are dependent on the current working directory and may change if the working directory changes. Absolute paths will always refer to the same location regardless of current working directory.
- Examples:
- Relative: ./file.txt (current directory)
- Absolute: /home/user/file.txt (from root directory)
So in summary, relative paths are interpreted from the current working directory and are convenient for nearby files, while absolute paths start at the root (/) and identify the same location from anywhere.
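The distinction can be demonstrated with Python's os.path; the /home/user working directory here is assumed for illustration:

```python
import os.path

cwd = "/home/user"               # hypothetical current working directory
rel = "./file.txt"               # relative: only meaningful against a working directory
abs_path = os.path.normpath(os.path.join(cwd, rel))

print(abs_path)                  # /home/user/file.txt
print(os.path.isabs(abs_path))   # True
print(os.path.isabs(rel))        # False
```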
This document discusses user administration in Linux. It describes the different types of user accounts - root, system, and user accounts. The root account has complete control while system accounts are for specific system functions. User accounts provide interactive access for general users. Groups are used to logically group user accounts. The main user administration files are /etc/passwd, /etc/shadow, /etc/group, and /etc/gshadow. Basic commands for managing users include useradd, usermod, userdel, groupadd, groupmod, and groupdel. Creating, modifying, and deleting users and groups are demonstrated.
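The format of /etc/passwd is easy to inspect programmatically. This Python sketch parses a sample entry (the account shown is made up, not taken from the document):

```python
# An /etc/passwd entry has seven colon-separated fields:
# name : password : UID : GID : GECOS : home directory : login shell
line = "alice:x:1001:1001:Alice Example:/home/alice:/bin/bash"
name, passwd, uid, gid, gecos, home, shell = line.split(":")

print(name)    # alice
print(uid)     # 1001 (the 'x' in the password field means the hash lives in /etc/shadow)
print(shell)   # /bin/bash
```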
The document discusses File Transfer Protocol (FTP), Network File System (NFS), and Samba server configuration. It provides details on FTP such as its history, components, modes, and how to configure an FTP server in Linux. It describes NFS including its history, versions, configuration files, and steps to configure NFS client and server. It also explains Samba, its components, purpose, and how to configure a Samba server using both command line and graphical tools.
The document provides an overview of the UNIX operating system through a seminar presentation. It discusses the history of UNIX from the 1970s to the 2000s, defines what UNIX is, describes common UNIX commands and the file system structure, and covers topics like memory management, interrupts, reasons for using UNIX, and some applications of UNIX like storage consulting and middleware/database administration. The presentation is intended to educate about the key aspects and functionality of the UNIX operating system.
The document summarizes the standard directory structure and purposes of the main directories in a Linux file system. The root directory (/) contains all other directories and files on the system. Key directories include /bin for essential executable binaries, /dev for device files, /etc for system configuration files, /home for user files, /lib for shared libraries, /sbin for system administration binaries, /tmp for temporary files, /usr for user programs and documentation, and /var for files that change frequently like logs.
This document provides an introduction to Linux, including:
- A brief history of Linux from its origins in the 1980s to its use today on servers, supercomputers, and other devices.
- An overview of Linux distributions such as Ubuntu, Red Hat Enterprise Linux, and others.
- Popular applications that run on Linux, such as OpenOffice, web browsers, email clients, and multimedia software.
- Languages supported by Linux user interfaces and documentation.
- Reasons for switching to Linux like security, cost savings, and stability compared to other operating systems.
- Considerations for switching like hardware and software compatibility.
NAS (network-attached storage) allows clients to access files over a network. It uses file sharing protocols like NFS and CIFS to provide access to files stored on storage devices. NAS devices have benefits like centralized storage, simplified management, scalability, and security integration. They improve efficiency by allowing multiple servers and clients to access shared storage.
Linux is an open-source operating system based on Unix, designed for multi-user environments. The document provides an overview of basic Linux commands like ls, mkdir, cd for navigating files and directories, as well as more advanced commands for manipulating files, checking system resources, and getting system information. It also lists and describes many common Linux commands and their functions.
Unix is a multi-user, multi-tasking operating system that was first created in 1969 at Bell Labs. It allows many users to use the system simultaneously running multiple programs. Linux originated in 1991 as a personal project and is now a free, open source Unix-like operating system. It features multi-tasking, virtual memory, networking and more. Linux is widely used for servers, workstations, internet services and more due to its low cost, stability, and reliability compared to other operating systems.
The document provides an overview of the UNIX operating system. It discusses the components of a computer system including hardware, operating system, utilities, and application programs. It then defines the operating system as a program that acts as an interface between the user and computer hardware. The document outlines the goals of an operating system and provides a brief history of the development of UNIX from Multics. It also describes some key concepts of UNIX including the kernel, shell, files, directories, and multi-user capabilities.
Linux is an open-source operating system that originated as a personal project by Linus Torvalds in 1991. It can run on a variety of devices from servers and desktop computers to smartphones. Some key advantages of Linux include low cost, high performance, strong security, and versatility in being able to run on many system types. Popular Linux distributions include Red Hat Enterprise Linux, Debian, Ubuntu, and Mint. The document provides an overview of the history and development of Linux as well as common myths and facts about the operating system.
This lecture discusses multi-user support in Linux. It covers how Linux protects user files and resources from unauthorized access by other users, how to share resources and files among users, and how to add/delete users and groups.
Cron and at are tools for automating jobs and scripts on Linux systems. Cron is used for recurring jobs run on a schedule, while at is used for jobs that need to run only once at a specific time. The cron daemon crond handles cron jobs, while the at daemon atd handles at jobs. Commands like crontab -e and crontab -l are used to edit and view cron job schedules. Examples show how to set up jobs to run backups, reports, scripts, and other tasks on a variety of schedules using cron and at.
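A couple of illustrative schedule entries in the five-field crontab format, plus a one-shot at job (the script paths are hypothetical):

```text
# min hour day-of-month month day-of-week  command
0 2 * * *       /usr/local/bin/backup.sh   # every day at 02:00
*/15 * * * 1-5  /usr/local/bin/report.sh   # every 15 minutes, Monday through Friday

# One-off job with at:
#   echo "/usr/local/bin/cleanup.sh" | at 23:30
```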
The document discusses the history and advantages of Linux compared to other operating systems like Windows, DOS and UNIX. It explains how the GNU project was started to develop a free and open source UNIX-like operating system. It then describes how Linus Torvalds developed the initial Linux kernel in 1991 building on the work of the GNU project. It highlights some key advantages of Linux like high security, many available tools and the flexibility of the environment. It also provides a brief overview of some common Linux components like the kernel, shells, KDE/GNOME desktop environments and the directory structure.
Topic #12 of the outline: Configuring Local Services
Windows services allow applications to run in the background without a user interface. They can be configured to start automatically when the system boots. Common Windows services include the Windows Event Log, Windows Firewall, and Windows Error Reporting services. Developers can use Windows services to host web services, keeping them available even when no user is logged on. Distributed applications split an application across multiple computers for scalability or to reach external services, as when an e-commerce site uses PayPal. Interoperable applications can communicate with applications built on any other platform.
Linux uses memory management to partition memory between kernel and application spaces, organize memory using virtual addresses, and swap memory between primary and secondary storage. It divides memory using paging into equal-sized pages, creates virtual address spaces, and uses an MMU to translate between virtual and physical addresses. This allows processes to run independently with their own logical view of memory while the physical memory is shared.
Linux uses a logical file system hierarchy standard to organize files across multiple directories and file systems. The root directory is at the top level and is represented by a forward slash. Key directories include /bin for executable commands, /lib for shared libraries, /etc for configuration files, and /var for dynamic data. Common file systems in Linux include ext2, ext3, ReiserFS, tmpfs, and proc.
NFS (Network File System) was developed by Sun Microsystems in 1984 to allow users to access and share files located on remote computers over a network. It builds on the ONC RPC system and originally used a stateless protocol, which makes recovery from failures straightforward. NFS is commonly used with UNIX systems but also supports other platforms. It has gone through several versions to support larger files, security improvements, and a stateful protocol in later versions. NFS allows applications on client computers to access and manipulate files on a server computer in a way that is transparent to the user.
NFS allows remote access to files on a server from client machines. It uses stateless servers so server disruptions don't affect clients, and clients can continue accessing files after a server reboot. The client parses file paths and looks up components individually to accommodate different file naming conventions. NFS adopted UNIX file semantics and operations like open, read, write, and close, along with basic file types and permissions.
The Network File System (NFS) is the most widely used network-based file system. NFS’s initial simple design and Sun Microsystems’ willingness to publicize the protocol and code samples to the community contributed to making NFS the most successful remote access file system. NFS implementations are available for numerous Unix systems, several Windows-based systems, and others.
I have tried my best to describe the Samba server through this presentation. I hope you will find it helpful.
Thanks,
Veeral Arora
Samba is an open source software suite that allows file and printer sharing between Linux/Unix systems and Windows clients. It uses the SMB protocol to provide services to SMB/CIFS clients. The document discusses installing and configuring Samba, including creating a smb.conf file to define shares, users, and permissions. It also covers connecting Samba clients and basic troubleshooting.
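A minimal smb.conf share definition might look like the following; the workgroup name, share name, path, and user are illustrative, not taken from the document:

```text
[global]
   workgroup = WORKGROUP
   security = user

[shared]
   path = /srv/samba/shared
   read only = no
   valid users = alice
```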
Presentation on Samba server & Apache server
Linux is an open-source operating system similar to Windows and OS X that powers many servers, mobile phones, supercomputers, and other devices. It is developed collaboratively by thousands of developers from around the world, which helps spread costs and leads to software innovation. The document then discusses how to set up a Linux Samba server to allow file sharing with Windows machines and an HTTP server to host web pages. It provides step-by-step instructions for configuring the Samba and HTTP servers, including editing configuration files and changing directory permissions to enable file sharing between Linux and Windows systems.
Term 5 students will have the opportunity to take samba rhythm workshops led by Mags from the Merton Music Foundation and London School of Samba. They will learn about the history and origins of samba music and instruments. The document then provides details on several traditional samba instruments - their origins, what they look like, and how they are played as part of samba ensembles.
Samba allows Windows and Unix systems to share files and printers on a network. It implements the SMB protocol to enable Unix systems to communicate with Windows clients. Samba includes client tools that allow Unix users to access resources shared by Windows systems. It provides reliable file and printer sharing across platforms at a low maintenance cost.
The Linux boot process involves 6 key stages:
1. The BIOS performs initial checks and loads the boot loader.
2. The boot loader like GRUB is loaded by the MBR and displays a menu to select the kernel.
3. The selected kernel is loaded along with the initrd and mounts the root filesystem.
4. The kernel executes init which reads the runlevel config and loads appropriate services.
5. Based on the runlevel, programs in directories like rc3.d are started in sequence.
6. Once all programs are started, the Linux login prompt is displayed.
The document discusses Linux iptables firewall. Iptables is the default firewall package for Linux and runs inside the Linux kernel. It has three built-in tables (filter, nat, mangle) that are used to filter, alter, and inspect packets. Iptables uses built-in chains and user-defined rules to allow or deny traffic based on packet criteria like source/destination, protocol, interface etc. Common iptables commands and options are also explained.
RAID (Redundant Array of Independent Drives) é uma técnica que combina vários discos rígidos para formar uma única unidade lógica de armazenamento. Existem diferentes níveis de RAID que oferecem redundância, desempenho e tolerância a falhas de maneiras distintas. O documento explica os principais níveis de RAID como 0, 1, 5 e 6 e como eles balanceiam esses fatores.
Trabalho de Sistema de Arquivo em Rede apresentado em aula de Sistemas Distribuídos pelos alunos Marlon Munhoz e Larissa Zanforlin na Faculdade de Tecnologia de Jales - FATEC Jales.
Anton Chuvakin FTP Server Intrusion InvestigationAnton Chuvakin
Now famous FTP server intrusion investigation, including log analysis, disk forensics as well as lessons learned; all still fun and useful, but circa 2002
The Linux booting process begins when the user turns on the computer. The BIOS loads and runs a power-on self-test before finding the bootable devices and loading the boot sector from the master boot record (MBR). The MBR then loads the boot loader, such as GRUB or LILO, which loads the Linux kernel into memory and passes control to it. The kernel initializes essential system components and starts the init process, which launches other processes according to the runlevel configuration to complete the system startup.
2. 1. A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally.
2. This enables system administrators to consolidate resources onto centralized servers on the network.
3. HOW NFS WORKS
TCP is the default transport protocol for NFS versions 2 and 3 under Red Hat Enterprise Linux; NFSv4 requires TCP. UDP can be used for compatibility purposes as needed, but is not recommended for wide usage. All of the RPC/NFS daemons have a '-p' command-line option that can set the port, making firewall configuration easier.
After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once access is verified, all file and directory operations are available to the user.
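The /etc/exports file mentioned above lists each exported directory together with the clients allowed to reach it. A minimal sketch, with hypothetical directory names and client addresses:

```shell
# /etc/exports on the server: <directory> <client>(<options>)
/exports/data  192.168.1.0/24(rw,sync)   # read-write for one subnet
/exports/pub   *.example.com(ro,sync)    # read-only for a domain
```

After editing the file, running exportfs -ra on the server re-reads the export list without restarting the NFS service.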
4. nfs
service nfs start starts the NFS server and the appropriate RPC processes to service requests for shared NFS file systems.
nfslock
service nfslock start activates a mandatory service that starts the appropriate RPC processes, allowing NFS clients to lock files on the server.
5. rpcbind
rpcbind accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them. rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. It is not used with NFSv4.
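To see which RPC services have registered with rpcbind on a host, rpcinfo can be queried; a sketch (the exact listing varies by system):

```shell
# Query the local rpcbind for registered RPC programs and their ports:
rpcinfo -p localhost
# On a working NFS server, the listing typically includes portmapper
# (port 111), nfs (port 2049), mountd, nlockmgr, and status.
```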
6. THE FOLLOWING RPC PROCESSES FACILITATE NFS SERVICES:
rpc.mountd
rpc.nfsd
lockd
rpc.statd
rpc.rquotad
rpc.idmapd
7. NFS CLIENT CONFIGURATION
The mount command mounts NFS shares on the client side. Its format is as follows:
# mount -t nfs -o options server:/remote/export /local/directory
8. This command uses the following variables:
options
A comma-delimited list of mount options; refer to Section 8.5, “Common NFS Mount Options” for details on valid NFS mount options.
server
The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount.
/remote/export
The file system or directory being exported from the server, that is, the directory you wish to mount.
/local/directory
The client location where /remote/export is mounted.
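Substituting concrete values into the format above (the hostname and paths here are hypothetical):

```shell
# Create the local mount point, then mount the server's export
# read-write over NFSv3:
mkdir -p /mnt/data
mount -t nfs -o rw,nfsvers=3 fileserver.example.com:/exports/data /mnt/data
```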
9. The NFS protocol version used in Red Hat Enterprise Linux 7 is identified by the mount options nfsvers or vers. By default, mount uses NFSv4 with mount -t nfs. If the server does not support NFSv4, the client automatically steps down to a version supported by the server. If the nfsvers/vers option is used to pass a particular version not supported by the server, the mount fails. The file system type nfs4 is also available for legacy reasons; it is equivalent to running mount -t nfs -o nfsvers=4 host:/remote/export /local/directory.
10. Mounting NFS File Systems using /etc/fstab
An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file.
11. Syntax example
The general syntax for the line in /etc/fstab is as follows:
server:/usr/local/pub /pub nfs defaults 0 0
The mount point /pub must exist on the client machine before this entry can be used. After adding this line to /etc/fstab on the client system, use the command mount /pub, and the mount point /pub is mounted from the server.
12. The /etc/fstab file is referenced by the netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the mount command during the boot process.
A valid /etc/fstab entry to mount an NFS export should contain the following information:
server:/remote/export /local/directory nfs options 0 0
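The fields of such an entry can be assembled mechanically. A small sketch (server name and paths are placeholders) that prints a well-formed /etc/fstab line:

```shell
#!/bin/sh
# Build an /etc/fstab line for an NFS mount:
#   <server>:<export> <mount point> nfs <options> 0 0
nfs_fstab_line() {
    server="$1"; export_dir="$2"; mount_point="$3"; options="${4:-defaults}"
    printf '%s:%s %s nfs %s 0 0\n' "$server" "$export_dir" "$mount_point" "$options"
}

# Example: reproduce the entry shown in the syntax example above.
nfs_fstab_line server /usr/local/pub /pub
# prints: server:/usr/local/pub /pub nfs defaults 0 0
```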
13. USING LDAP TO STORE AUTOMOUNTER MAPS
LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a dependency of the automounter. To configure LDAP access, modify /etc/openldap/ldap.conf. Ensure that BASE, URI, and schema are set appropriately for your site.
The most recently established schema for storing automount maps in LDAP is described by rfc2307bis. To use this schema it is necessary to set it in the autofs configuration (/etc/sysconfig/autofs) by removing the comment characters from the schema definition.
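In Red Hat's autofs packaging, selecting the rfc2307bis schema amounts to uncommenting entries of roughly this shape in /etc/sysconfig/autofs (the exact variable names are an assumption here; verify them against the comments in your installed file):

```shell
# /etc/sysconfig/autofs: rfc2307bis schema definition, uncommented
DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"
```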
15. Ensure that these are the only schema entries not commented in the configuration. The automountKey replaces the cn attribute in the rfc2307bis schema. An LDIF of a sample configuration is described below:
16. LDIF CONFIGURATION
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#
# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master
17. # extended LDIF
#
# LDAPv3
# base <automountMapName=auto.master,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#
18. COMMON NFS MOUNT OPTIONS
Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at mount time to make the mounted share easier to use. These options can be used with manual mount commands, /etc/fstab settings, and autofs.
The following are options commonly used for NFS mounts:
intr
Allows NFS requests to be interrupted if the server goes down or cannot be reached.
19. lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all, none, or pos/positive.
nfsvers=version
Specifies which version of the NFS protocol to use, where version is 2, 3, or 4. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the kernel and the mount command. The option vers is identical to nfsvers, and is included in this release for compatibility reasons.
noacl
Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible with older systems.
20. nolock
Disables file locking. This setting is occasionally required when connecting to older NFS servers.
noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries.
nosuid
Disables the set-user-identifier and set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
21. port=num
Specifies the numeric value of the NFS server port. If num is 0 (the
default), then mount queries the remote host's rpcbind service for the port
number to use. If the remote host's NFS daemon is not registered with its
rpcbind service, the standard NFS port number of TCP 2049 is used instead.
rsize=num and wsize=num
These settings speed up NFS communication for reads (rsize) and writes
(wsize) by setting a larger data block size (num, in bytes) to be transferred
at one time. Be careful when changing these values; some older Linux kernels
and network cards do not work well with larger block sizes. For NFSv3, the
default value for both parameters is 8192.
22. sec=mode
Its default setting is sec=sys, which uses local UNIX UIDs and GIDs. These
use AUTH_SYS to authenticate NFS operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to
authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity
checking of NFS operations using secure checksums to prevent data tampering.
sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and
encrypts NFS traffic to prevent traffic sniffing. This is the most secure
setting, but it also involves the most performance overhead.
23. tcp
Instructs the NFS mount to use the TCP protocol.
udp
Instructs the NFS mount to use the UDP protocol.
For a complete list of options and more detailed information on each
one, refer to man mount and man nfs.
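As an illustration of the option syntax above, here is a minimal Python sketch that splits an NFS option string into boolean flags and key=value settings. This is not the real mount(8) parser, and the function name is made up; it only shows how the two kinds of options decompose.

```python
# Illustrative sketch (not mount(8)'s parser): an NFS option string is a
# comma-separated list mixing bare flags (nolock, noexec) with key=value
# settings (nfsvers=3, rsize=8192).
def parse_nfs_options(opts):
    flags, values = set(), {}
    for item in opts.split(","):
        if "=" in item:
            key, _, val = item.partition("=")
            values[key] = val
        else:
            flags.add(item)
    return flags, values

flags, values = parse_nfs_options("nfsvers=3,rsize=8192,wsize=8192,nolock,noexec")
print(flags)    # the boolean flags: nolock, noexec
print(values)   # the key=value settings
```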
24. Starting and Stopping NFS
To run an NFS server, the rpcbind service must be running. To verify
that rpcbind is active, use the following command:
# service rpcbind status
If the rpcbind service is running, then the nfs service can be started.
To start an NFS server, use the following command:
25. # service nfs start
nfslock must also be started for both the NFS client and server to function
properly. To start NFS locking, use the following command:
# service nfslock start
If NFS is set to start at boot, ensure that nfslock also starts by running
chkconfig --list nfslock. If nfslock is not set to on, you will need to run
service nfslock start manually each time the computer starts. To set nfslock
to start automatically at boot, use chkconfig nfslock on.
26. nfslock is only needed for NFSv3.
To stop the server, use:
# service nfs stop
The restart option is a shorthand way of stopping and then starting
NFS. This is the most efficient way to make configuration changes
take effect after editing the configuration file for NFS. To restart the
server, type:
27. # service nfs restart
The condrestart (conditional restart) option only starts nfs if it is
currently running. This option is useful for scripts, because it does
not start the daemon if it is not running. To conditionally restart the
server, type:
# service nfs condrestart
28. To reload the NFS server configuration file without restarting the
service, type:
# service nfs reload
29. NFS SERVER CONFIGURATION
There are two ways to configure an NFS server: by manually editing the
NFS configuration file, /etc/exports, or through the command line, using
the exportfs command.
1. The /etc/exports Configuration File
The /etc/exports file controls which file systems are exported to remote
hosts and specifies options. It follows these syntax rules:
Blank lines are ignored. To add a comment, start a line with the hash
mark (#). You can wrap long lines with a backslash (\).
Each exported file system should be on its own individual line. Any lists
of authorized hosts placed after an exported file system must be
separated by space characters. Options for each of the hosts must be
placed in parentheses directly after the host identifier, without any
spaces separating the host and the first parenthesis.
30. Each entry for an exported file system has the following structure:
export host(options)
The aforementioned structure uses the following variables:
export: The directory being exported
host: The host or network to which the export is being shared
options: The options to be used for host
31. It is possible to specify multiple hosts, along with specific options for
each host. To do so, list them on the same line as a space-delimited
list, with each hostname followed by its respective options (in
parentheses), as in:
export host1(options1) host2(options2) host3(options3)
In its simplest form, the /etc/exports file only specifies the exported
directory and the hosts permitted to access it, as in the following
example:
The /etc/exports file
/exported/directory bob.example.com
32. The format of the /etc/exports file is very precise, particularly with
regard to use of the space character. Remember to always separate exported
file systems from hosts, and hosts from one another, with a space character.
However, there should be no other space characters in the file except on
comment lines.
For example, the following two lines do not mean the same thing:
/home bob.example.com(rw)
/home bob.example.com (rw)
The first line allows only users from bob.example.com read/write access to
the /home directory. The second line allows users from bob.example.com to
mount the directory as read-only (the default), while the rest of the world
can mount it read/write.
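The difference between the two lines can be made concrete with a small Python sketch. `parse_exports_line` is a hypothetical, heavily simplified reading of an exports entry, not the parser nfs-utils actually uses; it assumes the traditional read-only default for hosts listed without options.

```python
# Hypothetical simplified reading of an /etc/exports line, to show why the
# space matters: "host(options)" binds the options to that host, while a
# bare "(options)" token applies to the world (any host).
def parse_exports_line(line):
    export, *tokens = line.split()
    entries = []
    for tok in tokens:
        if tok.startswith("("):                 # options with no host: the world
            entries.append(("*", tok.strip("()")))
        elif "(" in tok:                        # host with its own options
            host, _, opts = tok.partition("(")
            entries.append((host, opts.rstrip(")")))
        else:                                   # host with default options
            entries.append((tok, "ro"))         # traditional default: read-only
    return export, entries

# First line: bob gets rw.
print(parse_exports_line("/home bob.example.com(rw)"))
# Second line: bob gets the default (ro); the world gets rw.
print(parse_exports_line("/home bob.example.com (rw)"))
```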
33. THE EXPORTFS COMMAND
Every file system being exported to remote users with NFS, as well as
the access level for those file systems, is listed in the /etc/exports
file. When the nfs service starts, the /usr/sbin/exportfs command
launches and reads this file, passes control to rpc.mountd (if NFSv2
or NFSv3) for the actual mounting process, then to rpc.nfsd, where the
file systems are made available to remote users.
When issued manually, the /usr/sbin/exportfs command allows the
root user to selectively export or unexport directories without
restarting the NFS service. When given the proper options, the
/usr/sbin/exportfs command writes the exported file systems to
/var/lib/nfs/xtab. Since rpc.mountd refers to the xtab file when
deciding access privileges to a file system, changes to the list of
exported file systems take effect immediately.
34. The Network Information Service (NIS) and Network File System (NFS)
are services that allow you to build distributed computing systems
that are both consistent in their appearance and transparent in the
way files and data are shared.
35. The Network File System (NFS) is a distributed filesystem that provides
transparent access to remote disks.
NFS is also built on the RPC protocol and imposes a client-server
relationship on the hosts that use it. An NFS server is a host that owns
one or more filesystems and makes them available on the network; NFS
clients mount filesystems from one or more servers. This follows the
normal client-server model where the server owns a resource that is used
by the client. In the case of NFS, the resource is a physical disk drive
that is shared by the server's clients.
36. There are two aspects to system administration using NFS: choosing a
filesystem naming and mounting scheme, and then configuring the servers
and clients to adhere to this scheme. The goal of any naming scheme should
be to use network transparency wisely. Being able to mount filesystems from
any server is useful only if the files are presented in a manner that is
consistent with the users' expectations.
If NFS has been set up correctly, it should be transparent to the user. For
example, if locally developed applications were found in /usr/local/bin
before NFS was installed, they should
continue to be found there when NFS is running, whether /usr/local/bin is
on a local filesystem or a remote one. To the user, the actual disk holding
/usr/local/bin isn't important as long as the executables are accessible and
built for the right machine architecture. If users must change their
environments to locate files accessed through NFS, they will probably dislike
the new network architecture because it changes the way things work.
37. An environment with many NFS servers and hundreds of clients can quickly
become
overwhelming in terms of management complexity. Successful system
administration of a
large NFS network requires adding some intelligence to the standard
procedures. The cost of
consistency on the network should not be a large administrative overhead.
One tool that
greatly eases the task of running an NFS network is the automounter, which
applies NIS
management to NFS configuration. This chapter starts with a quick look at
how to get NFS up and running.
38. SETTING UP NFS
On an NFS client, you need to have the lockd and statd daemons
running in order to use NFS.
These daemons are generally started in a boot script (Solaris uses
/etc/init.d/nfs.client):
if [ -x /usr/lib/nfs/statd -a -x /usr/lib/nfs/lockd ]
then
/usr/lib/nfs/statd > /dev/console 2>&1
/usr/lib/nfs/lockd > /dev/console 2>&1
fi
39. On most NFS servers, there is a file that contains the list of
filesystems the server will allow clients to mount via NFS. Many
servers store this list in the /etc/exports file.
The nfsd daemon accepts NFS RPC requests and executes them on
the server. Some servers run multiple copies of the daemon so that
they can handle several RPC requests at once. In Solaris, a single copy
of the daemon is run, but multiple threads run in the kernel to
provide parallel NFS service.
40. EXPORTING FILESYSTEMS
Usually, a host decides to become an NFS server if it has filesystems
to export to the network. A server does not explicitly advertise these
filesystems; instead, it keeps a list of currently exported filesystems
and associated access restrictions in a file and compares incoming
NFS mount requests to entries in this table. It is up to the server to
decide if a filesystem can be mounted by a client. You may change
the rules at any time by rebuilding the server's exported filesystem table.
41. RULES FOR EXPORTING
FILESYSTEMS
There are four rules for making a server's filesystem available to NFS:
1. Any filesystem, or proper subset of a filesystem, can be exported from a
server. A
proper subset of a filesystem is a file or directory tree that starts below the
mount point
of the filesystem. For example, if /usr is a filesystem, and the /usr/local
directory is
part of that filesystem, then /usr/local is a proper subset of /usr.
2. You cannot export any subdirectory of an exported filesystem unless the
subdirectory
is on a different physical device.
3. You cannot export any parent directory of an exported filesystem unless
the parent is on a different physical device.
4. You can export only local filesystems.
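Rule 1's notion of a "proper subset" can be sketched as a simple path test in Python. This is an illustration with a made-up helper name, not code from any NFS implementation:

```python
import os.path

# A path is a proper subset of a filesystem if it lies strictly below the
# mount point; the mount point itself and lookalike prefixes do not count.
def is_proper_subset(mount_point, path):
    mount_point = os.path.normpath(mount_point)
    path = os.path.normpath(path)
    return (path != mount_point
            and (path + "/").startswith(mount_point.rstrip("/") + "/"))

print(is_proper_subset("/usr", "/usr/local"))   # True: below the mount point
print(is_proper_subset("/usr", "/usr"))         # False: the filesystem itself
print(is_proper_subset("/usr", "/usrlocal"))    # False: merely shares a prefix
```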
42. MOUNTING FILESYSTEMS
NFS filesystems appear to be "normal" filesystems on the client, which
means that they can be mounted on any directory on the client. It's
possible to mount an NFS filesystem over all or part of another
filesystem, since the directories used as mount points appear the
same no matter where they actually reside. When you mount a
filesystem on top of another one, you obscure whatever is "under" the
mount point. NFS clients see the most recent view of the filesystem.
43. HARD AND SOFT MOUNTS
The hard and soft mount options determine how a client behaves
when the server is
excessively loaded for a long period or when it crashes. By default, all
NFS
filesystems are mounted hard, which means that an RPC call that
times out will be
retried indefinitely until a response is received from the server. This
makes the NFS
server look as much like a local disk as possible — the request that
needs to go to
disk completes at some point in the future. An NFS server that
crashes looks like a disk that is very, very slow.
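The hard/soft distinction can be modeled with a toy retry loop in Python. `nfs_call` and `flaky_rpc` are invented names for illustration; a real client also backs off between retransmissions rather than retrying immediately.

```python
import itertools

# Toy model (not real NFS client code) of the behavior described above:
# a hard mount retries a timed-out call indefinitely, while a soft mount
# gives up after `retrans` attempts and returns an error to the application.
def nfs_call(rpc, hard=True, retrans=3):
    for attempt in itertools.count(1):
        try:
            return rpc()
        except TimeoutError:
            if not hard and attempt >= retrans:
                raise OSError("soft mount: giving up after %d tries" % attempt)

# A fake RPC that times out twice before the "server" responds.
attempts = {"n": 0}
def flaky_rpc():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError
    return "data"

print(nfs_call(flaky_rpc, hard=True))   # hard mount: eventually returns data
```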
44. SYMBOLIC LINKS
Symbolic links are both useful and confusing when used with NFS-
mounted filesystems.
They can be used to "shape" a filesystem arbitrarily, giving the system
administrator
freedom to organize filesystems and pathnames in convenient ways.
When used
badly, symbolic links have unexpected and unwanted side effects,
including poor
performance and "missing" files or directories.
45. Symbolic links differ from hard links in several ways, but the salient
distinction is that hard
links duplicate directory entries, while symbolic links are new directory
entries of a special
type. Using a hard link to a file is no different from using the original file,
but referencing a
symbolic link requires reading the link to find out where it points and then
referencing that
file or directory. It is possible to create a loop of symbolic links, but the
kernel routines that
read the links and build up pathnames eventually return an error when too
many links have been traversed in a single pathname.
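The kernel's loop protection can be demonstrated directly. The following Python snippet builds two symlinks that point at each other in a temporary directory and shows that stat() fails with ELOOP:

```python
import errno
import os
import tempfile

# Create a symlink loop: a -> b and b -> a in a scratch directory.
d = tempfile.mkdtemp()
os.symlink(os.path.join(d, "b"), os.path.join(d, "a"))
os.symlink(os.path.join(d, "a"), os.path.join(d, "b"))

# Resolving either link never terminates, so the kernel gives up with ELOOP
# ("Too many levels of symbolic links").
try:
    os.stat(os.path.join(d, "a"))
except OSError as e:
    print(e.errno == errno.ELOOP)   # True
```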
46. RESOLVING SYMBOLIC LINKS IN
NFS
When an NFS client does a stat( ) of a directory entry and finds it is a
symbolic link, it issues an RPC call to read the link (on the server) and
determine where the link points. This is the equivalent of doing a local
readlink( ) system call to examine the contents of a symbolic link. The server
returns a pathname that is interpreted on the client, not on the server.
The pathname may point to a directory that the client has mounted, or it
may not make sense
on the client. If you uncover a link that was made on the server that points to
a filesystem not
exported from the server, you will have either trouble or confusion if you
resolve the link. If
the link accidentally points to a valid file or directory on the client, the
results are often unpredictable.
47. An example here helps explain how links can point in unwanted directions.
Let's say that you install a new publishing package, marker, in the tools
filesystem on an NFS server. Once it's loaded, you realize that you need to
free some space on the /tools filesystem, so you move the font directory
used by marker to the /usr filesystem, and make a symbolic link to redirect
the fonts subdirectory to its new location:
# mkdir /usr/marker
# cd /tools/marker
# tar cf - fonts | ( cd /usr/marker; tar xbBfp 20 - )
# rm -rf fonts
# ln -s /usr/marker/fonts fonts
48. Using symbolic links to reduce the number of directories in a
pathname is beneficial only if
users are not tempted to cd from one link to another:
# ln -s /minnow/fred /u/fred
# ln -s /alewife/lucy /u/lucy
The unsuspecting user tries to use the path-compressed names, but
finds that relative
pathnames aren't relative to the link directory:
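The pitfall can be reconstructed with temporary directories standing in for /minnow, /alewife, and /u, using Python's realpath in place of the shell's physical path resolution (an illustration under those assumptions):

```python
import os
import tempfile

# Stand-ins for the filesystems in the example above.
root = tempfile.mkdtemp()
for path in ("minnow/fred", "alewife/lucy", "u"):
    os.makedirs(os.path.join(root, path))
os.symlink(os.path.join(root, "minnow/fred"), os.path.join(root, "u/fred"))
os.symlink(os.path.join(root, "alewife/lucy"), os.path.join(root, "u/lucy"))

# After following the u/fred link, the relative path ../lucy is resolved
# under minnow, not under u, so the user does not land in alewife/lucy.
resolved = os.path.realpath(os.path.join(root, "u/fred/../lucy"))
print(resolved.endswith(os.path.join("minnow", "lucy")))   # True
```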
49. Mount points, exports, and links
Symbolic links have strange effects on mounting and exporting
filesystems. A good general rule to remember is that filesystem
operations apply to the target of a link, not to the link itself. The
symbolic link is just a pointer to the real operand.
If you mount a filesystem on a symbolic link, the actual mount occurs
on the directory pointed to by the link. The following sequence of
operations produces the same net result:
52. A potential problem arises if the symbolic link and the directory it
points to are on
different filesystems: it's possible that the server has exported the
link's filesystem but
not the filesystem containing the link's target. In this example,
/usr/man and
/usr/share/man could be in two distinct filesystems, which would
require two entries in
the server's dfstab file.