This document provides an overview of the Linux operating system, including its history, design principles, kernel components, process management, scheduling, and synchronization techniques. It describes how Linux originated from UNIX and is now a widely used open source operating system. Key topics covered include Linux kernel modules, process identity and context, the Completely Fair Scheduler algorithm, and how the kernel protects access to shared resources using nonpreemptible code and interrupt disabling.
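The Completely Fair Scheduler mentioned above always runs the task with the smallest virtual runtime, charging runtime inversely to each task's load weight. A minimal sketch of that selection rule (a toy model with invented names, not the kernel's actual red-black-tree code):

```python
import heapq

class Task:
    """Toy runnable task; `weight` stands in for a nice-level load weight."""
    def __init__(self, name, weight=1024):
        self.name = name
        self.weight = weight
        self.vruntime = 0.0  # virtual runtime in ms

    def __lt__(self, other):          # heapq orders tasks by vruntime
        return self.vruntime < other.vruntime

def run_slice(runqueue, slice_ms=4.0):
    """Run the task with the lowest vruntime for one time slice.

    CFS scales charged runtime by 1024/weight, so heavier (higher-priority)
    tasks accumulate vruntime more slowly and therefore run more often.
    """
    task = heapq.heappop(runqueue)    # like picking the leftmost rb-tree node
    task.vruntime += slice_ms * 1024 / task.weight
    heapq.heappush(runqueue, task)
    return task.name

rq = [Task("editor", weight=2048), Task("batch", weight=1024)]
heapq.heapify(rq)
history = [run_slice(rq) for _ in range(6)]
print(history)
```

Over six slices the double-weight "editor" task runs twice as often as "batch", which is the fairness property the algorithm is named for.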
The document describes the history, architecture, design principles, and key components of the Windows 10 operating system. It discusses the kernel, executive, file system, security features, virtual memory management, and more. Windows 10 is a preemptive, portable, extensible, and compatible operating system that uses object-oriented design and supports security, reliability, internationalization, and energy efficiency.
The document provides an overview of the Linux operating system, including its history, design principles, components, and key features. It discusses the kernel, processes and threads, scheduling, memory management, file systems, I/O, inter-process communication, networking, and security in Linux. The document is intended to introduce the fundamental concepts and architecture of the Linux system.
This document provides an overview of the Linux operating system, including:
- A brief history of Linux and its development as a free, open-source operating system based on UNIX standards.
- An overview of key components of the Linux system, including the kernel, system libraries, system utilities, and kernel modules.
- Descriptions of important Linux concepts like process and memory management, scheduling, file systems, and interprocess communication.
- Details on Linux distributions, licensing, and design principles focused on speed, efficiency, and standardization.
This document provides an overview of the Linux operating system, including its history and design principles. It describes key components like the Linux kernel, kernel modules, process management, scheduling, and memory management. It discusses how Linux implements features like file systems, input/output, and interprocess communication. The document also covers Linux distributions and licensing. It provides details on the evolution of the Linux kernel from early versions to version 2.0 and beyond, which added support for new architectures and multiprocessor systems.
Linux began as a small kernel developed by Linus Torvalds in 1991 to be compatible with UNIX. It has since grown through collaboration to be a full-fledged, open-source operating system. The Linux kernel supports loading modules, multiprocessing, and uses a buddy system to manage physical memory. Key components include the kernel, system libraries, utilities, and distributions which package Linux for different systems.
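The buddy system mentioned above hands out physical memory in power-of-two blocks, splitting a larger block in half when no block of the requested size is free; each half is the other's "buddy", which makes later coalescing cheap. A simplified sketch of the allocation/split step (illustrative only, not kernel code):

```python
def buddy_alloc(free_lists, order):
    """Allocate a block of 2**order pages from power-of-two free lists.

    free_lists maps order -> list of free block start addresses.
    If no block of the requested order is free, split the next larger
    free block repeatedly until one of the right size is produced.
    """
    k = order
    while k not in free_lists or not free_lists[k]:
        k += 1
        if k > max(free_lists):
            raise MemoryError("no block large enough")
    block = free_lists[k].pop()
    while k > order:                  # split down to the wanted size
        k -= 1
        buddy = block + (1 << k)      # upper half goes back on the free list
        free_lists.setdefault(k, []).append(buddy)
    return block

# One free 16-page block (order 4) at address 0; request 2 pages (order 1).
free = {4: [0]}
addr = buddy_alloc(free, 1)
print(addr, free)
```

The single order-4 block is split three times, leaving one free buddy each at orders 3, 2, and 1; freeing the allocated block would merge them back in reverse.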
Linux is an open-source operating system based on UNIX with a modular kernel. It uses processes, memory management and file systems similar to UNIX. The Linux kernel supports features like symmetric multiprocessing, virtual memory and loading of kernel modules. Popular Linux distributions package and distribute the Linux system along with utilities and applications.
The document provides an overview of the history and design of the Linux operating system in three paragraphs:
Linux was first developed in 1991 by Linus Torvalds as a small kernel for compatibility with UNIX. It has since grown through collaboration over the internet to run on various hardware platforms while remaining free and open source. Early versions only supported 386 processors and basic functionality, while later versions added support for new hardware, file systems, and networking.
The core components of Linux include the kernel, system libraries, and system utilities. The kernel provides core system functions and resource management. Libraries and utilities are developed separately but work together to provide a full UNIX-compatible system. Device drivers, file systems, and network protocols can be added to the running kernel as loadable modules.
The document provides an overview of the history and components of the Linux operating system. It discusses how Linux originated as a small kernel developed by Linus Torvalds in 1991 and has since evolved through collaborations. The core components of Linux include the kernel, system libraries, system utilities, and kernel modules. It also describes key aspects of Linux such as process management, scheduling, memory management, and file systems.
This document provides an overview of the Linux operating system, including its history, design principles, and key components. It describes how Linux originated as a small kernel developed by Linus Torvalds in 1991 and has since grown through collaboration into a full-fledged open source operating system compatible with UNIX standards. The document outlines Linux's modular kernel architecture, use of kernel modules, process and memory management designs, and standards-compliance.
This document provides an overview of the CSC 539 Operating Systems Structure and Design course. It discusses influential early operating systems like Atlas, CTSS, MULTICS, OS/360, UNIX, Alto and Mach. It then focuses on case studies of the Linux and Windows XP operating systems, describing their histories, design principles, process management, memory management, virtual memory, file systems and more.
The Standard C and Xilinx C libraries provide standard C library functions and functions to access peripherals for the MicroBlaze and PowerPC processors. The Standard C library (libc.a) contains standard C functions compiled for the target processor, with header files located in the include directory. The Xilinx C library (libxil.a) contains exception and interrupt handlers. Both libraries are included automatically during compilation. Smaller input/output functions like print, putnum, and xil_printf are also provided in the libraries.
This document provides installation and upgrade instructions for Rational ClearCase Version 7.0.0 on Windows platforms. It includes information on planning the installation, system requirements, configuring the installation, creating a release area, installing servers and clients, and upgrading an existing installation. The document contains details on licensing, supported platforms, disk space needs, installation procedures, and post-installation configuration steps.
TYBSc IT Sem 5 Linux Administration notes, Units 1-6, version 3, WE-IT Tutorials
Introduction: Introduction to UNIX, Linux, GNU, and Linux distributions; Duties of the System Administrator; The Linux System Administrator; Installing and Configuring Servers; Installing and Configuring Application Software; Creating and Maintaining User Accounts; Backing Up and Restoring Files; Monitoring and Tuning Performance; Configuring a Secure System; Using Tools to Monitor Security
Booting and Shutting Down: Boot loaders (GRUB, LILO); Bootstrapping; the init process; rc scripts; Enabling and disabling services
The File System: Understanding the File System Structure; Working with Linux-Supported File Systems; Memory and Virtual File Systems
System Configuration Files: System-wide Shell Configuration Scripts; System Environmental Settings; Network Configuration Files; Managing the init Scripts
TCP/IP Networking: Understanding Network Classes; Setting Up a Network Interface Card (NIC); Understanding Subnetting; Working with Gateways and Routers; Configuring Dynamic Host Configuration Protocol; Configuring the Network Using the Network Configuration Tool; Editing Your Network Configuration
The Network File System: NFS Overview; Planning an NFS Installation; Configuring an NFS Server; Configuring an NFS Client; Using Automount Services; Examining NFS Security
Connecting to Microsoft Networks: Installing Samba; Configuring the Samba Server; Creating Samba Users; Starting the Samba Server; Connecting to a Samba Client; Connecting from a Windows PC to the Samba Server
Additional Network Services: Configuring a Time Server; Providing a Caching Proxy Server
Internet Services: Secure Services (SSH, scp, sftp); Less Secure Services (Telnet, FTP, rsync, rsh, rlogin, finger, talk, and ntalk); Linux Machine as a Server; Configuring the xinetd Server; Comparing xinetd and Standalone; Configuring Linux Firewall Packages
Domain Name System: Understanding DNS; Understanding Types of Domain Servers; Examining Server Configuration Files; Configuring a Caching DNS Server; Configuring a Secondary Master DNS Server; Configuring a Primary Master Server; Checking Configuration
Configuring Mail Services: Tracing the Email Delivery Process; Mail User Agent (MUA); Introducing SMTP; Configuring Sendmail; Using the Postfix Mail Server; Serving Email with POP3 and IMAP; Maintaining Email Security
Configuring FTP Services: Introducing vsftpd; Configuring vsftpd; Advanced FTP Server Configuration; Using SFTP
Configuring a Web Server: Introducing Apache; Configuring Apache; Implementing SSI; Enabling CGI; Enabling PHP; Creating a Secure Server with SSL
System Administration: Administering Users and Groups; Installing and Upgrading Software Packages
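The subnetting topic in the syllabus above reduces to bit masking: the network address is the bitwise AND of an IP address and its netmask. A small sketch using Python's standard ipaddress module (the example address is arbitrary):

```python
import ipaddress

def describe(cidr):
    """Summarize the subnet containing a host address given in CIDR form,
    e.g. '192.168.10.77/26'."""
    iface = ipaddress.ip_interface(cidr)   # host address + prefix length
    net = iface.network                    # the enclosing /26 network
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "hosts": net.num_addresses - 2,    # minus network and broadcast
    }

info = describe("192.168.10.77/26")
print(info)
```

A /26 carves each /24 into four 64-address blocks, so 192.168.10.77 falls in the block starting at .64 with broadcast .127 and 62 usable hosts.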
Study notes for CompTIA Certified Advanced Security Practitioner (ver. 2), by David Sweigert
The document provides information on various topics for the CompTIA CASP exam, including:
1. Virtual Trusted Platform Modules (vTPM) which provide secure storage and cryptographic functions to virtual machines.
2. SELinux which added Mandatory Access Control to the Linux kernel to control access between subjects and objects.
3. Differences between common storage protocols like iSCSI, Fibre Channel over Ethernet, and NFS vs CIFS.
It also covers topics like dynamic disk pools vs RAID, Microsoft Group Policies, and differences between network attached storage and storage area networks.
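Mandatory Access Control of the SELinux kind, mentioned in point 2 above, can be pictured as a default-deny policy lookup the kernel performs on every access, independent of file ownership. A toy type-enforcement model (labels and rules invented for illustration, not real SELinux policy syntax):

```python
# Toy type-enforcement policy: a (subject domain, object type, action)
# triple must match an explicit allow rule, or access is denied,
# regardless of who owns the object.
POLICY = {
    ("httpd_t", "httpd_content_t", "read"),
    ("httpd_t", "httpd_log_t", "append"),
}

def allowed(domain, obj_type, action):
    """Deny by default; only explicit allow rules grant access."""
    return (domain, obj_type, action) in POLICY

print(allowed("httpd_t", "httpd_content_t", "read"))  # web server reads content
print(allowed("httpd_t", "shadow_t", "read"))         # denied: no rule exists
```

Even a root-owned process confined to httpd_t cannot read shadow_t objects here, which is the key difference from discretionary (owner-controlled) access control.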
This document discusses various security issues related to computer systems and networks. It covers authentication methods, threats like Trojan horses and viruses, intrusion detection techniques, and encryption standards. It also describes security classifications from the Department of Defense and how Windows NT implements configurable security policies ranging from minimal to discretionary protection.
Running ColdFusion MX 7 on Linux and Unix, by Steven Erat
ColdFusion has supported Linux since 1999. This document discusses ColdFusion MX 7's support for Unix and Linux systems. It provides an overview of ColdFusion's history on these platforms, system requirements for ColdFusion MX 7, migration paths from previous versions, and new features in ColdFusion MX 7. It also covers general concerns about running ColdFusion on Unix/Linux and steps for a multiserver configuration installation on Linux via the console.
Windows NT 3.1 was the first release of Microsoft's Windows NT operating system line. It was released in July 1993 with versions for desktop and server use. Unlike Windows 3.1, NT 3.1 was a 32-bit operating system written from scratch to be stable, secure, and suitable for business use on networks and office applications. Development of NT began in 1988 with the goal of creating a successor to OS/2, but after the partnership with IBM broke down, Microsoft continued the project under its own Windows brand instead.
This document provides an overview of the Red Hat Linux operating system. It explains that Linux is an open-source operating system based on Unix: the GNU project began in 1984, and the Linux kernel was created by Linus Torvalds in 1991. Linux is popular for its low cost, stability, performance, and choice of distributions. Its main disadvantages are a less user-friendly interface and a steeper learning curve for beginners than Windows. The document also covers Red Hat certifications and career opportunities in Linux.
This document provides an overview of the development of Linux and open source operating systems. It describes how Linux originated from earlier systems like UNIX and how Linus Torvalds released the first version in 1991. It also lists some of the major Linux distributions like Debian, Ubuntu, Fedora and compares characteristics of the Linux kernel to Windows.
The document proposes a new OS architecture called the multikernel that treats multicore systems as a network of independent cores that communicate via message passing instead of shared memory. This approach embraces the networked nature of modern multicore hardware and applies insights from distributed systems. The multikernel model is implemented in the Barrelfish OS and shown to provide comparable performance to conventional OS designs on current hardware while scaling better to support future heterogeneous multicore systems.
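The multikernel idea summarized above keeps a private replica of OS state on each core and propagates updates as messages rather than through shared memory. A thread-per-"core" sketch of that replication pattern, using queues in place of inter-core message channels (all names are hypothetical, not Barrelfish APIs):

```python
import queue
import threading

def core(inbox, replica):
    """Each 'core' owns a private replica of OS state and applies only
    the updates it receives as messages; no state is shared directly."""
    while True:
        msg = inbox.get()
        if msg is None:           # shutdown sentinel
            return
        key, value = msg
        replica[key] = value      # replica is touched only by this thread

NUM_CORES = 3
inboxes = [queue.Queue() for _ in range(NUM_CORES)]
replicas = [{} for _ in range(NUM_CORES)]
threads = [threading.Thread(target=core, args=(inboxes[i], replicas[i]))
           for i in range(NUM_CORES)]
for t in threads:
    t.start()

# Broadcast a state change to every core instead of writing shared memory.
for inbox in inboxes:
    inbox.put(("pid_counter", 42))
    inbox.put(None)
for t in threads:
    t.join()

print(all(r == {"pid_counter": 42} for r in replicas))
```

After the broadcast, every replica agrees without any core ever reading another core's memory, which is the consistency-by-messaging model the paper argues scales better on heterogeneous multicore hardware.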
Windows NT 3.1 was the first release of Microsoft's Windows NT operating system line. It was released in July 1993 and could run on Intel x86, DEC Alpha, and MIPS R4000 processors. Windows NT 3.1 provided a stable, 32-bit foundation and supported the NTFS and FAT file systems, although hardware integration was limited due to a lack of Plug and Play support.
This document discusses the evolution of Linux container virtualization, including technologies like LXC, Docker, CoreOS, and Kubernetes. It provides an overview of key concepts in virtualization like namespaces, cgroups, AppArmor, SELinux, and seccomp. It also summarizes features of Linux container engines like LXC, and container platforms like Docker, CoreOS, and the Kubernetes container cluster management system.
The document provides an overview of Linux training. It discusses operating systems and their functions. It then covers Linux origins, introductions, advantages over Windows, flavors, installation, boot sequences, run levels, applications, and file structure including important configuration files. The document is intended to educate users about the fundamentals of Linux operating systems.
This presentation describes the Linux OS starting from the basics.
I hope this PPT will be really helpful for you.
It was one of the most appreciated PPTs when I presented it in my class.
1. The document provides an overview of the history and development of UNIX/Linux operating systems. It originated from projects in the 1960s and was further developed by Ken Thompson, Dennis Ritchie and others.
2. UNIX became popular due to its modular design, use of a hierarchical file system, treating all system resources as files, and ability to combine simple programs together.
3. The basic architecture of UNIX involves application programs interacting with the kernel via system calls to perform tasks like process and memory management.
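The system-call interface described in point 3 is what thin user-space wrappers such as Python's os module ultimately invoke; each call below ends in a trap into the kernel. A small sketch assuming a Unix-like host (the file path is just a throwaway temp file for the demo):

```python
import os
import tempfile

# Process management: getpid(2) asks the kernel for this process's ID.
pid = os.getpid()

# File I/O: open(2), write(2), close(2) -- raw syscalls, no stdio buffering.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
written = os.write(fd, b"hello from user space\n")
os.close(fd)

# Read it back through the buffered interface, then clean up via unlink(2).
with open(path, "rb") as f:
    data = f.read()
os.unlink(path)
print(pid, written, data)
```

The os-level calls map almost one-to-one onto the kernel entry points, which is why UNIX texts describe the kernel boundary in terms of this handful of primitives.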
Windows 2000 architecture has a layered design with a kernel mode and user mode. The kernel mode consists of the hardware abstraction layer, kernel, and executive services which have unrestricted system access. The user mode contains subsystems and has limited resource access. The kernel schedules processes and handles interrupts, synchronization, and recovery. Executive services provide common functions like I/O management, security, and power management through components like the object manager and process manager. Environment subsystems allow running applications from other operating systems by converting their API calls.
This document provides an overview of the Linux operating system, including its history and design principles. It describes key components of Linux such as the kernel, kernel modules, process management, scheduling, and synchronization techniques. The document also discusses Linux distributions and licensing. It provides details on the evolution of Linux kernels over time and the introduction of new features and capabilities with each version.
The document summarizes key aspects of the Linux operating system as described in Chapter 16 of the 9th Edition of Operating System Concepts by Silberschatz, Galvin and Gagne. It covers Linux history and design principles, kernel modules, process management, scheduling, and memory management. The core topics of the chapter are explained, including the Linux kernel, distributions, licensing, and the major components that make up the Linux system.
The document provides an overview of the Linux operating system, including its history and design principles. It describes key components such as the Linux kernel, kernel modules, process management, scheduling, and synchronization techniques. The kernel uses modules that can be loaded and unloaded to support device drivers, file systems, and networking protocols. Process scheduling uses both time-sharing and real-time algorithms.
The chapter discusses the Linux operating system. It provides an overview of Linux's history and development. Key topics covered include the Linux kernel, process and memory management, scheduling, file systems, and interprocess communication. The chapter describes how Linux implements these operating system concepts and compares Linux's approach to traditional UNIX implementations.
The document discusses Linux operating system design principles, kernel modules, process management, scheduling, memory management, and input/output management. It covers Linux history and key versions. It also describes the components of a Linux system including the kernel, system libraries, and system utilities. It provides details on Linux process management, kernel modules, scheduling, and memory management.
The document discusses the Linux operating system, including its history, design principles, kernel modules, process management, scheduling, memory management, input/output management, file systems, and inter-process communication. It also briefly covers the architectures and frameworks of two popular mobile operating systems, iOS and Android. The document provides details on Linux kernel versions and distributions, and explains concepts like kernel synchronization, interrupt handling, and the Completely Fair Scheduler algorithm used in Linux.
The document provides an overview of the history and design of the Linux operating system. It discusses key aspects of Linux including its kernel development over time, process management, scheduling, memory management, file systems, and interprocess communication. The core components of a Linux system including the kernel, system libraries, and system utilities are also summarized.
The document provides an overview of the history and components of the Linux operating system. It discusses how Linux originated as a small kernel developed by Linus Torvalds in 1991 and has since evolved through collaboration. The key components of Linux include the Linux kernel, system libraries, system utilities, and kernel modules. Linux uses a multi-user, multi-tasking model and adheres to UNIX standards and design principles.
Assignment On Linux Unix Life Cycle And Its Commands Course Title System Pro...Robin Beregovska
The document provides an overview of the Linux operating system, including its history, components, design principles, and licensing. It discusses how Linux originated in 1991 as a hobby project and has since grown into a full-fledged operating system through collaboration. It describes the main components of Linux, including the Linux kernel, system libraries and tools from GNU and other open source projects, and various Linux distributions that package these components together.
This document provides an overview of the Linux operating system, including its history and development. It describes how Linux originated as a small kernel created by Linus Torvalds in 1991 and has since grown through collaboration into a full operating system. It discusses key milestones like Linux versions 1.0 and 2.0, popular distributions like Red Hat and Debian, its open source licensing, and the core components of the Linux system including the kernel, system libraries, and utilities.
Unix is a multi-user and multi-tasking operating system developed in the 1960s and 1970s at Bell Labs. It was influenced by the MULTICS project and initially developed by Ken Thompson on a PDP-7 computer. Dennis Ritchie further developed Unix and created the C programming language. Unix became widely adopted on university campuses and later had several commercial releases from Bell Labs. Linux was later developed by Linus Torvalds in 1991 as a free Unix-like operating system and has become widely popular and distributed through different Linux distributions.
This document provides an overview of the Linux operating system, including its history, design principles, and key components. It began in 1991 as a small kernel developed by Linus Torvalds and has grown through collaboration over the Internet. The core Linux kernel is original but can run existing UNIX software. Major versions have added support for new hardware, file systems, networking, and multiprocessing. Key components include the Linux kernel, system libraries, and system utilities. The kernel uses loadable modules and supports process management and scheduling.
This document provides an introduction to Linux, including:
- What Linux is, its basic components like the kernel and shell, and features such as being open-source and portable.
- The architecture of Linux including the hardware, kernel, shell, and utilities layers.
- Popular Linux distributions like Debian, Red Hat Enterprise Linux, and SUSE Linux Enterprise.
- A comparison between Linux and Windows in areas like licensing, updating processes, and software availability.
- Important Linux commands for navigation, file management, networking, and package management.
- An overview of the Linux file system structure and common directories like /bin, /etc, and /var.
The document summarizes the architecture of the Linux operating system. It discusses the main components of Linux including the kernel, process management, memory management, file systems, device drivers, network stack, and architecture-dependent code. The kernel is at the core and acts as a resource manager. It uses a monolithic design. Process and memory management are handled via data structures like task_struct and buddy allocation. Virtual memory is implemented using page tables. File systems organize files in a hierarchy with inodes. Device drivers interface with hardware. The network stack follows a layered model. Architecture code is separated by subdirectory.
The document summarizes the architecture of the Linux operating system. It discusses that Linux is divided into the kernel space and user space. The kernel is responsible for process management, memory management, file systems, device drivers, and the network stack. It also touches on architecture-dependent code and the components of the Linux system like the kernel, user applications, and system libraries.
The document provides an overview of the key components of the Linux operating system, including:
1) The Linux kernel, which acts as a resource manager for processes, memory, and hardware devices.
2) Process and memory management systems that control how processes are allocated and memory is allocated and freed.
3) The file system which organizes how files are stored and accessed.
4) Device drivers that allow the operating system to interface with hardware.
5) The network stack which handles network protocols and connections.
6) Architecture-dependent code that provides hardware-specific implementations of core functions.
red hat training in hyderabad,red hat certification,jboss training, online red hat Training, rh413 classes, cl210 training, RH236 training hyderabad, Server H
Hyderabad Based Red Hat Linux Training Institute For Red Hat Courses RH413, RH236, CL210, RHCJA, RHCJD, RH436, RHCA, RHCSS, Rh442, RH333, RH423. Red Hat Partner
Linux Operating System. UOG MARGHAZAR CampusSYEDASADALI38
The document provides information about Linux operating system components such as the kernel, file systems, input/output devices, and process management. It discusses the kernel and kernel modules, describing kernel modules as code that can be dynamically loaded and unloaded. It describes the major Linux file systems like ext2, ext3, and ext4. It also discusses input/output devices in Linux, differentiating between block and character devices. Finally, it touches on process management in Linux and similarities to the Unix process model using fork() and exec() calls.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Silberschatz, Galvin and Gagne, Operating System Concepts – 10th Edition
Chapter 20: The Linux System
Chapter 20: The Linux System
Linux History
Design Principles
Kernel Modules
Process Management
Scheduling
Memory Management
File Systems
Input and Output
Interprocess Communication
Network Structure
Security
Objectives
To explore the history of the UNIX operating system from
which Linux is derived and the principles upon which Linux’s
design is based
To examine the Linux process model and illustrate how Linux
schedules processes and provides interprocess
communication
To look at memory management in Linux
To explore how Linux implements file systems and manages
I/O devices
History
Linux is a modern, free operating system based on UNIX
standards
First developed as a small but self-contained kernel in 1991
by Linus Torvalds, with the major design goal of UNIX
compatibility, released as open source
Its history has been one of collaboration by many users from
all around the world, corresponding almost exclusively over
the Internet
It has been designed to run efficiently and reliably on common
PC hardware, but also runs on a variety of other platforms
The core Linux operating system kernel is entirely original,
but it can run much existing free UNIX software, resulting in
an entire UNIX-compatible operating system free from
proprietary code
The Linux system comes in many varying Linux distributions, each
including the kernel, applications, and management tools
The Linux Kernel
Version 0.01 (May 1991) had no networking, ran only on 80386-
compatible Intel processors and on PC hardware, had extremely
limited device-driver support, and supported only the Minix file
system
Linux 1.0 (March 1994) included these new features:
Support for UNIX’s standard TCP/IP networking protocols
BSD-compatible socket interface for networking programming
Device-driver support for running IP over an Ethernet
Enhanced file system
Support for a range of SCSI controllers for
high-performance disk access
Extra hardware support
Version 1.2 (March 1995) was the final PC-only Linux kernel
Kernels with odd version numbers are development kernels,
those with even numbers are production kernels
Linux 2.0
Released in June 1996, 2.0 added two major new capabilities:
Support for multiple architectures, including a fully 64-bit native Alpha
port
Support for multiprocessor architectures
Other new features included:
Improved memory-management code
Improved TCP/IP performance
Support for internal kernel threads, for handling dependencies between
loadable modules, and for automatic loading of modules on demand
Standardized configuration interface
Available for Motorola 68000-series processors, Sun Sparc
systems, and for PC and PowerMac systems
2.4 and 2.6 increased SMP support, added journaling file system,
preemptive kernel, 64-bit memory support
3.0 released in 2011, 20th
anniversary of Linux, improved
virtualization support, new page write-back facility, improved
memory management, new Completely Fair Scheduler
The Linux System
Linux uses many tools developed as part of Berkeley’s BSD
operating system, MIT’s X Window System, and the Free
Software Foundation's GNU project
The main system libraries were started by the GNU project, with
improvements provided by the Linux community
Linux networking-administration tools were derived from 4.3BSD
code; recent BSD derivatives such as FreeBSD have borrowed
code from Linux in return
The Linux system is maintained by a loose network of developers
collaborating over the Internet, with a small number of public ftp
sites acting as de facto standard repositories
File System Hierarchy Standard document maintained by
the Linux community to ensure compatibility across the various
system components
Specifies overall layout of a standard Linux file system, determines
under which directory names configuration files, libraries, system
binaries, and run-time data files should be stored
Linux Distributions
Standard, precompiled sets of packages, or distributions,
include the basic Linux system, system installation and
management utilities, and ready-to-install packages of common
UNIX tools
The first distributions managed these packages by simply
providing a means of unpacking all the files into the appropriate
places; modern distributions include advanced package
management
Early distributions included SLS and Slackware
Red Hat and Debian are popular distributions from
commercial and noncommercial sources, respectively,
others include Canonical and SuSE
The RPM Package file format permits compatibility among the
various Linux distributions
Linux Licensing
The Linux kernel is distributed under the GNU General Public
License (GPL), the terms of which are set out by the Free
Software Foundation
Not public domain, in that not all rights are waived
Anyone using Linux, or creating their own derivative of Linux,
may not make the derived product proprietary; software
released under the GPL may not be redistributed as a binary-
only product
Can sell distributions, but must offer the source code too
Design Principles
Linux is a multiuser, multitasking system with a full set of UNIX-
compatible tools
Its file system adheres to traditional UNIX semantics, and it
fully implements the standard UNIX networking model
Main design goals are speed, efficiency, and standardization
Linux is designed to be compliant with the relevant POSIX
documents; at least two Linux distributions have achieved
official POSIX certification
Supports Pthreads and a subset of POSIX real-time
process control
The Linux programming interface adheres to the SVR4 UNIX
semantics, rather than to BSD behavior
Components of a Linux System
Components of a Linux System
Like most UNIX implementations, Linux is composed of three
main bodies of code; the most important distinction is between
the kernel and all other components.
The kernel is responsible for maintaining the important
abstractions of the operating system
Kernel code executes in kernel mode with full access to all
the physical resources of the computer
All kernel code and data structures are kept in the same
single address space
Components of a Linux System (Cont.)
The system libraries define a standard set of functions
through which applications interact with the kernel, and which
implement much of the operating-system functionality that
does not need the full privileges of kernel code
The system utilities perform individual specialized
management tasks
User-mode programs are rich and varied, including multiple
shells such as the Bourne-again shell (bash)
Kernel Modules
Sections of kernel code that can be compiled, loaded, and
unloaded independent of the rest of the kernel.
A kernel module may typically implement a device driver, a file
system, or a networking protocol
The module interface allows third parties to write and distribute, on
their own terms, device drivers or file systems that could not be
distributed under the GPL.
Kernel modules allow a Linux system to be set up with a standard,
minimal kernel, without any extra device drivers built in.
Four components to Linux module support:
module-management system
module loader and unloader
driver-registration system
conflict-resolution mechanism
Module Management
Supports loading modules into memory and letting them talk
to the rest of the kernel
Module loading is split into two separate sections:
Managing sections of module code in kernel memory
Handling symbols that modules are allowed to reference
The module requestor manages loading requested, but
currently unloaded, modules; it also regularly queries the
kernel to see whether a dynamically loaded module is still in
use, and will unload it when it is no longer actively needed
Driver Registration
Allows modules to tell the rest of the kernel that a new driver
has become available
The kernel maintains dynamic tables of all known drivers, and
provides a set of routines to allow drivers to be added to or
removed from these tables at any time
Registration tables include the following items:
Device drivers
File systems
Network protocols
Binary formats
Conflict Resolution
A mechanism that allows different device drivers to reserve
hardware resources and to protect those resources from
accidental use by another driver.
The conflict resolution module aims to:
Prevent modules from clashing over access to hardware
resources
Prevent autoprobes from interfering with existing device
drivers
Resolve conflicts with multiple drivers trying to access the
same hardware:
1. Kernel maintains list of allocated HW resources
2. Driver reserves resources with kernel database first
3. Reservation request rejected if resource not available
Process Management
UNIX process management separates the creation of
processes and the running of a new program into two distinct
operations.
The fork() system call creates a new process
A new program is run after a call to exec()
Under UNIX, a process encompasses all the information that
the operating system must maintain to track the context of a
single execution of a single program
Under Linux, process properties fall into three groups: the
process’s identity, environment, and context
Process Identity
Process ID (PID) - The unique identifier for the process; used
to specify processes to the operating system when an application
makes a system call to signal, modify, or wait for another process
Credentials - Each process must have an associated user ID
and one or more group IDs that determine the process’s rights to
access system resources and files
Personality - Not traditionally found on UNIX systems, but
under Linux each process has an associated personality identifier
that can slightly modify the semantics of certain system calls
Used primarily by emulation libraries to request that system
calls be compatible with certain specific flavors of UNIX
Namespace – Specific view of file system hierarchy
Most processes share common namespace and operate on a
shared file-system hierarchy
But each can have unique file-system hierarchy with its own
root directory and set of mounted file systems
Process Environment
The process’s environment is inherited from its parent, and is
composed of two null-terminated vectors:
The argument vector lists the command-line arguments
used to invoke the running program; conventionally starts
with the name of the program itself.
The environment vector is a list of “NAME=VALUE” pairs
that associates named environment variables with arbitrary
textual values.
Passing environment variables among processes and inheriting
variables by a process’s children are flexible means of passing
information to components of the user-mode system software.
The environment-variable mechanism provides a customization of
the operating system that can be set on a per-process basis,
rather than being configured for the system as a whole.
Process Context
The (constantly changing) state of a running program at any
point in time
The scheduling context is the most important part of the
process context; it is the information that the scheduler needs to
suspend and restart the process
The kernel maintains accounting information about the
resources currently being consumed by each process, and the
total resources consumed by the process in its lifetime so far
The file table is an array of pointers to kernel file structures
When making file I/O system calls, processes refer to files by
their index into this table, the file descriptor (fd)
Process Context (Cont.)
Whereas the file table lists the existing open files, the
file-system context applies to requests to open new files
The current root and default directories to be used for new
file searches are stored here
The signal-handler table defines the routine in the
process’s address space to be called when specific signals
arrive
The virtual-memory context of a process describes the full
contents of its private address space
Processes and Threads
Linux uses the same internal representation for processes and threads; a
thread is simply a new process that happens to share the same address
space as its parent
Both are called tasks by Linux
A distinction is only made when a new thread is created by the clone()
system call
fork() creates a new task with its own entirely new task context
clone() creates a new task with its own identity, but that is allowed
to share the data structures of its parent
Using clone() gives an application fine-grained control over exactly what
is shared between two threads
Scheduling
The job of allocating CPU time to different tasks within an
operating system
While scheduling is normally thought of as the running and
interrupting of processes, in Linux, scheduling also includes the
running of the various kernel tasks
Running kernel tasks encompasses both tasks that are
requested by a running process and tasks that execute internally
on behalf of a device driver
As of 2.5, new scheduling algorithm – preemptive, priority-based,
known as O(1)
Real-time range
nice value
Had challenges with interactive performance
2.6 introduced Completely Fair Scheduler (CFS)
CFS
Eliminates traditional, common idea of time slice
Instead all tasks allocated portion of processor’s time
CFS calculates how long a process should run as a function
of total number of tasks
N runnable tasks means each gets 1/N of processor’s time
Then weights each task with its nice value
Smaller nice value -> higher weight (higher priority)
CFS (Cont.)
Each task then runs for a time proportional to the task’s weight
divided by the total weight of all runnable tasks
Configurable variable target latency is desired interval during
which each task should run at least once
Consider simple case of 2 runnable tasks with equal weight
and target latency of 10ms – each then runs for 5ms
If 10 runnable tasks, each runs for 1ms
Minimum granularity ensures that each run has a
reasonable amount of time (which actually violates the
fairness idea)
Kernel Synchronization
A request for kernel-mode execution can occur in two ways:
A running program may request an operating system
service, either explicitly via a system call, or implicitly, for
example, when a page fault occurs
A device driver may deliver a hardware interrupt that
causes the CPU to start executing a kernel-defined
handler for that interrupt
Kernel synchronization requires a framework that will allow
the kernel’s critical sections to run without interruption by
another critical section
Kernel Synchronization (Cont.)
Linux uses two techniques to protect critical sections:
1. Normal kernel code is nonpreemptible (until 2.6)
– when a timer interrupt is received while a process is
executing a kernel system service routine, the kernel’s
need_resched flag is set so that the scheduler will run
once the system call has completed and control is
about to be returned to user mode
2. The second technique applies to critical sections that occur in
interrupt service routines
– By using the processor’s interrupt control hardware to disable
interrupts during a critical section, the kernel guarantees that it can
proceed without the risk of concurrent access of shared data structures
Linux provides spin locks, semaphores, and reader-writer versions of both
Their behavior is modified depending on whether the system is a single processor or a multiprocessor
Kernel Synchronization (Cont.)
To avoid performance penalties, Linux’s kernel uses a
synchronization architecture that allows long critical sections to
run without having interrupts disabled for the critical section’s
entire duration
Interrupt service routines are separated into a top half and a
bottom half
The top half is a normal interrupt service routine, and runs
with recursive interrupts disabled
The bottom half is run, with all interrupts enabled, by a
miniature scheduler that ensures that bottom halves never
interrupt themselves
This architecture is completed by a mechanism for disabling
selected bottom halves while executing normal, foreground
kernel code
Interrupt Protection Levels
Each level may be interrupted by code running at a higher
level, but will never be interrupted by code running at the
same or a lower level
User processes can always be preempted by another
process when a time-sharing scheduling interrupt occurs
Symmetric Multiprocessing
Linux 2.0 was the first Linux kernel to support SMP hardware;
separate processes or threads can execute in parallel on
separate processors
Until version 2.2, to preserve the kernel’s nonpreemptible
synchronization requirements, SMP imposed the restriction, via a
single kernel spinlock, that only one processor at a time could
execute kernel-mode code
Later releases implement more scalability by splitting single
spinlock into multiple locks, each protecting a small subset of
kernel data structures
Version 3.0 adds even more fine-grained locking, processor
affinity, and load-balancing
Memory Management
Linux’s physical memory-management system deals with
allocating and freeing pages, groups of pages, and small blocks
of memory
It has additional mechanisms for handling virtual memory,
memory mapped into the address space of running processes
Splits memory into four different zones due to hardware
characteristics
The zones are architecture specific; on x86, for example, they include ZONE_DMA, ZONE_NORMAL, and ZONE_HIGHMEM
Managing Physical Memory
The page allocator allocates and frees all physical pages; it
can allocate ranges of physically-contiguous pages on
request
The allocator uses a buddy-heap algorithm to keep track of
available physical pages
Each allocatable memory region is paired with an
adjacent partner
Whenever two allocated partner regions are both freed
up they are combined to form a larger region
If a small memory request cannot be satisfied by
allocating an existing small free region, then a larger free
region will be subdivided into two partners to satisfy the
request
Managing Physical Memory (Cont.)
Memory allocations in the Linux kernel occur either statically
(drivers reserve a contiguous area of memory during system
boot time) or dynamically (via the page allocator)
Also uses slab allocator for kernel memory
Page cache and virtual memory system also manage
physical memory
Page cache is kernel’s main cache for files and main
mechanism for I/O to block devices
Page cache stores entire pages of file contents for local
and network file I/O
Splitting of Memory in a Buddy Heap
Virtual Memory
The VM system maintains the address space visible to each
process: It creates pages of virtual memory on demand, and
manages the loading of those pages from disk or their swapping
back out to disk as required.
The VM manager maintains two separate views of a process’s
address space:
A logical view describing instructions concerning the layout of
the address space
The address space consists of a set of non-overlapping
regions, each representing a continuous, page-aligned
subset of the address space
A physical view of each address space which is stored in the
hardware page tables for the process
Virtual Memory (Cont.)
Virtual memory regions are characterized by:
The backing store, which describes from where the pages for
a region come; regions are usually backed by a file or by
nothing (demand-zero memory)
The region’s reaction to writes (page sharing or copy-on-write)
The kernel creates a new virtual address space
1. When a process runs a new program with the exec()
system call
2. Upon creation of a new process by the fork() system call
Virtual Memory (Cont.)
On executing a new program, the process is given a new,
completely empty virtual-address space; the program-loading
routines populate the address space with virtual-memory regions
Creating a new process with fork() involves creating a
complete copy of the existing process’s virtual address space
The kernel copies the parent process’s VMA descriptors,
then creates a new set of page tables for the child
The parent’s page tables are copied directly into the child’s,
with the reference count of each page covered being
incremented
After the fork, the parent and child share the same physical
pages of memory in their address spaces
Swapping and Paging
The VM paging system relocates pages of memory from
physical memory out to disk when the memory is needed for
something else
The VM paging system can be divided into two sections:
The pageout-policy algorithm decides which pages to
write out to disk, and when
The paging mechanism actually carries out the transfer,
and pages data back into physical memory as needed
Can page out to either swap device or normal files
Bitmap used to track used blocks in swap space kept in
physical memory
Allocator uses next-fit algorithm to try to write contiguous
runs
Kernel Virtual Memory
The Linux kernel reserves a constant, architecture-dependent
region of the virtual address space of every process for its own
internal use
This kernel virtual-memory area contains two regions:
A static area that contains page table references to every
available physical page of memory in the system, so that
there is a simple translation from physical to virtual
addresses when running kernel code
The remainder of the reserved section is not reserved for
any specific purpose; its page-table entries can be modified
to point to any other areas of memory
Executing and Loading User Programs
Linux maintains a table of functions for loading programs; it gives
each function the opportunity to try loading the given file when an
exec system call is made
The registration of multiple loader routines allows Linux to support
both the ELF and a.out binary formats
Initially, binary-file pages are mapped into virtual memory
Only when a program tries to access a given page will a page
fault result in that page being loaded into physical memory
An ELF-format binary file consists of a header followed by several
page-aligned sections
The ELF loader works by reading the header and mapping the
sections of the file into separate regions of virtual memory
Memory Layout for ELF Programs
Static and Dynamic Linking
A program whose necessary library functions are embedded
directly in the program’s executable binary file is statically
linked to its libraries
The main disadvantage of static linkage is that every program
generated must contain copies of exactly the same common
system library functions
Dynamic linking is more efficient in terms of both physical
memory and disk-space usage because it loads the system
libraries into memory only once
Static and Dynamic Linking (Cont.)
Linux implements dynamic linking in user mode through a special
linker library
Every dynamically linked program contains a small statically
linked function that is called when the process starts
Maps the link library into memory
Link library determines dynamic libraries required by process
and names of variables and functions needed
Maps libraries into middle of virtual memory and resolves
references to symbols contained in the libraries
Shared libraries compiled to be position-independent
code (PIC) so can be loaded anywhere
File Systems
To the user, Linux’s file system appears as a hierarchical directory tree
obeying UNIX semantics
Internally, the kernel hides implementation details and manages the
multiple different file systems via an abstraction layer, that is, the virtual file
system (VFS)
The Linux VFS is designed around object-oriented principles and is
composed of four components:
A set of definitions that define what a file object is allowed to look like
The inode object structure represents an individual file
The file object represents an open file
The superblock object represents an entire file system
A dentry object represents an individual directory entry
File Systems (Cont.)
To the user, Linux’s file system appears as a hierarchical
directory tree obeying UNIX semantics
Internally, the kernel hides implementation details and manages
the multiple different file systems via an abstraction layer, that is,
the virtual file system (VFS)
The Linux VFS is designed around object-oriented principles and
layer of software to manipulate those objects with a set of
operations on the objects
For example, operations for the file object include (from struct
file_operations in /usr/include/linux/fs.h):
int open(...) — Open a file
ssize_t read(...) — Read from a file
ssize_t write(...) — Write to a file
int mmap(...) — Memory-map a file
The Linux ext3 File System
ext3 is the standard on-disk file system for Linux
Uses a mechanism similar to that of BSD Fast File
System (FFS) for locating data blocks belonging to a
specific file
Supersedes older extfs, ext2 file systems
Work underway on ext4 adding features like extents
Of course, many other file system choices with Linux
distros
The Linux ext3 File System (Cont.)
The main differences between ext2fs and FFS concern their disk
allocation policies
In FFS, the disk is allocated to files in blocks of 8 KB, with blocks being
subdivided into fragments of 1 KB to store small files or partially filled
blocks at the end of a file
ext3 does not use fragments; it performs its allocations in smaller
units
The default block size on ext3 varies as a function of total size of
file system with support for 1, 2, 4 and 8 KB blocks
ext3 uses cluster allocation policies designed to place logically
adjacent blocks of a file into physically adjacent blocks on disk, so
that it can submit an I/O request for several disk blocks as a single
operation on a block group
Maintains a bit map of free blocks in a block group, and searches for a
free byte in order to allocate at least 8 blocks at a time
Ext2fs Block-Allocation Policies
Journaling
ext3 implements journaling, with file system updates first
written to a log file in the form of transactions
Once in log file, considered committed
Over time, log file transactions replayed over file system to
put changes in place
On system crash, some transactions might be in journal but not
yet placed into file system
Must be completed once system recovers
No other consistency checking is needed after a crash
(much faster than older methods)
Improves write performance on hard disks by turning random
I/O into sequential I/O
The Linux Proc File System
The proc file system does not store data; rather, its contents
are computed on demand according to user file I/O requests
proc must implement a directory structure and the file
contents within; it must then define a unique and persistent
inode number for each directory and file it contains
It uses this inode number to identify just what operation is
required when a user tries to read from a particular file
inode or perform a lookup in a particular directory inode
When data is read from one of these files, proc collects the
appropriate information, formats it into text form and places
it into the requesting process’s read buffer
Input and Output
The Linux device-oriented file system accesses disk storage
through two caches:
Data is cached in the page cache, which is unified with the
virtual memory system
Metadata is cached in the buffer cache, a separate cache
indexed by the physical disk block
Linux splits all devices into three classes:
block devices allow random access to completely
independent, fixed size blocks of data
character devices include most other devices; they don’t
need to support the functionality of regular files
network devices are interfaced via the kernel’s networking
subsystem
Block Devices
Provide the main interface to all disk devices in a system
The block buffer cache serves two main purposes:
it acts as a pool of buffers for active I/O
it serves as a cache for completed I/O
The request manager manages the reading and writing of buffer
contents to and from a block device driver
Kernel 2.6 introduced Completely Fair Queueing (CFQ)
Now the default scheduler
Fundamentally different from elevator algorithms
Maintains a set of lists, one for each process by default
Uses the C-SCAN algorithm, with round-robin among all
outstanding I/O from all processes
Four requests from each process are dispatched at a time
Device-Driver Block Structure
Character Devices
A device driver which does not offer random access to fixed
blocks of data
A character device driver must register a set of functions which
implement the driver’s various file I/O operations
The kernel performs almost no preprocessing of a file read or
write request to a character device, but simply passes on the
request to the device
The main exception to this rule is the special subset of character
device drivers which implement terminal devices, for which the
kernel maintains a standard interface
Character Devices (Cont.)
Line discipline is an interpreter for the information from the
terminal device
The most common line discipline is the tty discipline, which glues
the terminal’s data stream onto the standard input and output
streams of the user’s running processes, allowing processes to
communicate directly with the user’s terminal
Several processes may be running simultaneously; the tty line
discipline is responsible for attaching and detaching the terminal’s
input and output from the various processes connected to it as
those processes are suspended or awakened by the user
Other line disciplines are also implemented that have nothing to do
with I/O to a user process – e.g., the PPP and SLIP networking
protocols
Interprocess Communication
Like UNIX, Linux informs processes that an event has occurred
via signals
There is a limited number of signals, and they cannot carry
information: Only the fact that a signal occurred is available to a
process
The Linux kernel does not use signals to communicate with
processes that are running in kernel mode; rather, communication
within the kernel is accomplished via scheduling states and
wait_queue structures
Also implements System V Unix semaphores
Process can wait for a signal or a semaphore
Semaphores scale better
Operations on multiple semaphores can be atomic
Passing Data Between Processes
The pipe mechanism allows a child process to inherit a
communication channel from its parent; data written to one end
of the pipe can be read at the other
Shared memory offers an extremely fast way of
communicating; any data written by one process to a shared
memory region can be read immediately by any other
process that has mapped that region into its address space
To obtain synchronization, however, shared memory must be
used in conjunction with another interprocess-communication
mechanism
Network Structure
Networking is a key area of functionality for Linux
It supports the standard Internet protocols for UNIX to UNIX
communications
It also implements protocols native to non-UNIX operating systems, in
particular, protocols used on PC networks, such as AppleTalk and IPX
Internally, networking in the Linux kernel is implemented by three
layers of software:
The socket interface
Protocol drivers
Network device drivers
Most important set of protocols in the Linux networking system is the
internet protocol suite
It implements routing between different hosts anywhere on the network
On top of the routing protocol are built the UDP, TCP and ICMP protocols
Packets also pass to firewall management for filtering based on
firewall chains of rules
Security
The pluggable authentication modules (PAM) system is
available under Linux
PAM is based on a shared library that can be used by any
system component that needs to authenticate users
Access control under UNIX systems, including Linux, is
performed through the use of unique numeric identifiers (uid
and gid)
Access control is performed by assigning objects a protection
mask, which specifies which access modes—read, write, or
execute—are to be granted to processes with owner, group, or
world access
Security (Cont.)
Linux augments the standard UNIX setuid mechanism in two
ways:
It implements the POSIX specification’s saved user-id
mechanism, which allows a process to repeatedly drop and
reacquire its effective uid
It has added a process characteristic that grants just a
subset of the rights of the effective uid
Linux provides another mechanism that allows a client to
selectively pass access to a single file to some server process
without granting it any other privileges