The document discusses the TCP/IP protocol stack and its various layers. It describes the data link layer which handles transferring data across physical links and encapsulates data into frames. The network layer routes data across networks in packets using IP. The transport layer provides end-to-end communication using protocols like TCP and UDP, with TCP providing reliable, connection-oriented communication and UDP being unreliable and connectionless.
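The TCP/UDP contrast above can be sketched with Python's `socket` module: a minimal loopback echo, not part of the original document. The OS-assigned port and the `b"hello"`/`b"ping"` payloads are arbitrary choices for the demo.

```python
import socket
import threading

def tcp_echo_server(srv):
    conn, _ = srv.accept()                # TCP: wait for a connection (handshake)
    with conn:
        conn.sendall(conn.recv(1024))     # echo the byte stream back
    srv.close()

# Bind to an OS-assigned loopback port to keep the sketch self-contained.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=tcp_echo_server, args=(srv,), daemon=True).start()

# TCP client: connection-oriented, reliable, in-order delivery.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    reply = cli.recv(1024)
print(reply)  # b'hello'

# UDP: connectionless "fire and forget" -- no handshake, no delivery
# guarantee; each sendto() is an independent datagram.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    sent = udp.sendto(b"ping", ("127.0.0.1", port))
print(sent)  # 4 bytes handed to the lower layers
```

The TCP side needs `connect`/`accept` before any data moves; the UDP `sendto` succeeds even though nothing is listening, which is exactly the "unreliable, connectionless" behavior described.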
The document discusses processes and inter-process communication (IPC). It describes how processes can be independent or cooperating. Cooperating processes require an IPC mechanism to exchange data and information, which can take two forms: shared memory or message passing. Shared memory involves processes accessing the same memory region, while message passing involves processes explicitly sending and receiving messages. The document provides examples of using shared memory and message passing for a producer-consumer problem.
Introduction to Operating Systems - Part 2 (Amir Payberah)
The document discusses computer system architecture and operating system structure. It describes multiprocessor systems, multicore systems, and blade servers. It then covers operating system concepts like multiprogramming, timesharing, and dual-mode operation. It outlines the major components of an operating system including user space programs, system programs, and the kernel. Key kernel functions like process management, memory management, storage/file systems, device control, and security are also summarized.
Introduction to Operating Systems - Part 1 (Amir Payberah)
This document provides an introduction to an operating systems course. It outlines the course objectives, which are to teach the design of operating systems and cover topics like process management, memory management, file systems, I/O management, and security. It also lists the course textbooks and explains that the course grade will be based on midterm and final exams as well as programming assignments done in groups of three students.
The document discusses processes and process management in an operating system. It defines a process as an instance of a program running in memory, as opposed to a program which is a passive executable file on disk. A process consists of the program code, current activity like the program counter, data sections containing variables, a stack, and dynamically allocated memory. Each process is represented by a process control block that contains its state, scheduling information, memory allocation, and other details. Processes can transition between states like ready, running, waiting, and terminated. A process may contain multiple threads of execution.
Introduction to Operating Systems - Part 3 (Amir Payberah)
The document discusses the structure and functions of operating systems. It describes how operating systems have two main spaces: user space for application programs and system space for the kernel. The kernel is responsible for core functions like process management, memory management, file systems, device control and security. System calls provide an interface for programs to access OS services, and are usually accessed through high-level APIs rather than direct calls. Common APIs include POSIX, Win32 and Java. Parameters are typically passed to system calls via registers, memory blocks or the stack. Major categories of system calls control processes, files, devices, system information and security.
The document discusses input/output (I/O) systems. It describes how I/O devices connect to computers through ports, busses, and device controllers. Device drivers present a uniform interface to access devices. Common I/O hardware concepts are discussed, including polling and interrupt-driven interactions between processors and controllers. Interrupt vectors route interrupts to specific handler routines. Interrupts are also used for exceptions and system calls.
The document discusses process synchronization techniques, including semaphores. It describes semaphores as a synchronization tool that provides more sophisticated ways for processes to synchronize activities than mutex locks. Semaphores use wait() and signal() operations to decrement and increment an integer value to control access to shared resources. The document also discusses potential issues with semaphores like deadlock, starvation, and priority inversion, and how monitors provide an alternative approach.
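The wait()/signal() semantics map onto `acquire()`/`release()` in Python's `threading.Semaphore`. This sketch (resource count 3, ten workers, and the 10 ms hold time are all illustrative choices) shows the counting semaphore capping concurrent access:

```python
import threading
import time

# wait() is acquire() (decrement, block at zero);
# signal() is release() (increment, wake one waiter).
pool = threading.Semaphore(3)     # pretend we have 3 identical resources
guard = threading.Lock()
in_use = 0
peak = 0

def worker():
    global in_use, peak
    with pool:                    # wait()
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)          # hold the resource briefly
        with guard:
            in_use -= 1
    # leaving the with-block performs signal()

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3: the semaphore bounds concurrent holders
```

A semaphore initialized to 1 degenerates to a mutex lock; the counting form shown here is what makes it the "more sophisticated" tool the text refers to.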
The document discusses computer protection models. It describes a model where a computer consists of objects that can be uniquely named and accessed through defined operations. The protection problem is to ensure each object is only accessed correctly by allowed processes. It also discusses principles of protection like least privilege and separation of policy and mechanism. Different types of protection domains like users, processes, and procedures are described along with examples of UNIX and Multics domain implementations using access matrices.
The document discusses process synchronization and the producer-consumer problem. It describes how concurrent access to shared data by processes may result in data inconsistency issues. It then presents a solution to the producer-consumer problem using a counter to track the number of full buffers. The document also discusses the critical section problem, where a section of code containing shared resources can only be accessed by one process at a time. It presents Peterson's algorithm as a solution to the critical section problem for two processes.
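Peterson's algorithm for two processes can be sketched in Python threads. Caveat: this relies on CPython's GIL giving sequentially consistent interleavings of simple statements; on real hardware, compiler and CPU reordering break the algorithm unless memory barriers are added. The iteration count and switch-interval tweak are demo choices.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # make busy-waiting cheap for this demo

# flag[i] means "process i wants to enter"; turn yields to the other.
flag = [False, False]
turn = 0
counter = 0          # shared data guarded by the critical section
N = 2000

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        # entry section
        flag[i] = True
        turn = other            # politely give the other process priority
        while flag[other] and turn == other:
            pass                # busy-wait until it is safe to enter
        # critical section: a non-atomic read-modify-write on shared data
        counter += 1
        # exit section
        flag[i] = False

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 2 * N = 4000: mutual exclusion held
```

Without the entry/exit protocol, the two `counter += 1` read-modify-write sequences could interleave and lose updates, which is exactly the data-inconsistency problem the producer-consumer counter illustrates.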
The document discusses CPU scheduling for real-time systems. It describes two types of latencies that affect real-time performance: interrupt latency, which is the time from when an interrupt occurs to when it is serviced, and dispatch latency, which is the time to switch from the current process to a higher priority one. For hard real-time systems, tasks must be serviced by their deadline to avoid missing events. The document also mentions periodic processes that require CPU access at constant intervals.
The document discusses deadlocks that can occur in a multiprocessing system where multiple processes compete for limited resources. A deadlock occurs when a process is waiting for a resource held by another waiting process, resulting in a circular wait. The document outlines the four conditions required for deadlock and describes methods to prevent deadlocks, including imposing a total ordering of resource requests and avoiding holding resources while waiting for additional resources.
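The "total ordering of resource requests" prevention method can be sketched as follows: every thread acquires locks in one global rank order regardless of the order it asks for them, so a circular wait cannot form. The `RANK` table and worker names are invented for the example.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
RANK = {id(lock_a): 0, id(lock_b): 1}  # a hypothetical global ordering

def acquire_in_order(*locks):
    # Sort by rank before locking -- this is the prevention rule:
    # no thread ever holds a high-rank lock while waiting on a lower one.
    ordered = sorted(locks, key=lambda l: RANK[id(l)])
    for l in ordered:
        l.acquire()
    return ordered

def release(locks):
    for l in reversed(locks):
        l.release()

done = []

def worker(name, first, second):
    # The two workers request the locks in opposite orders -- the classic
    # deadlock setup -- but acquire_in_order normalizes the order.
    held = acquire_in_order(first, second)
    done.append(name)
    release(held)

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))  # ['t1', 't2'] -- both finish, no circular wait
```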
The document discusses paging in computer memory management. Paging divides both physical and logical memory into fixed-size blocks called frames and pages, respectively. It allows processes to have non-contiguous physical address spaces by mapping pages to available frames using a page table. While this avoids external fragmentation, it still suffers from internal fragmentation. Other topics covered include TLBs to reduce translation time, page sizes tradeoffs, memory protection using valid-invalid bits, and address space identifiers.
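The page-table mapping can be shown as a short address-translation sketch. The 4 KB page size and the toy page table are assumptions for illustration; `None` plays the role of the valid-invalid bit.

```python
PAGE_SIZE = 4096  # one common choice; the text notes page size is a tradeoff

# A tiny page table: page number -> frame number (None = invalid entry).
page_table = {0: 5, 1: 2, 2: None, 3: 7}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE    # high bits select the page
    offset = logical_addr % PAGE_SIZE   # low bits pass through unchanged
    frame = page_table.get(page)
    if frame is None:
        # Hitting an invalid entry traps to the OS (memory protection).
        raise MemoryError(f"invalid access: page {page}")
    return frame * PAGE_SIZE + offset   # physical address

phys = translate(1 * PAGE_SIZE + 100)   # page 1, offset 100
print(phys)  # frame 2 -> 2*4096 + 100 = 8292
```

A TLB would cache recent `page -> frame` entries so that most translations avoid the page-table lookup entirely.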
The document discusses cloud computing concepts including definitions, characteristics, service models, and deployment models. Cloud computing refers to applications and services delivered over the internet (SaaS) as well as the underlying hardware and software (utility computing). There are three main service models - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) - which provide different levels of control over computing resources, analogous to building a new house, buying an empty house, and renting a hotel room, respectively. The four deployment models define the scope and access of cloud infrastructure - public, private, community, and hybrid clouds.
The document discusses virtual memory and techniques for allocating frames to processes. It covers:
- Fixed allocation schemes like equal allocation and proportional allocation that split frames among processes.
- Priority allocation that gives more frames to higher priority processes.
- Global and local page replacement policies for determining which process's frame to replace on a page fault.
- Thrashing that occurs when a process does not have enough frames for its current locality or working set of pages. Locality and working set models are discussed for determining how many frames a process needs to avoid thrashing.
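The proportional-allocation scheme above amounts to giving process $i$ of size $s_i$ roughly $(s_i / S) \cdot m$ of the $m$ free frames. A quick sketch, using 62 frames split between a 10-page and a 127-page process as the worked numbers:

```python
def proportional_allocation(sizes, m):
    # a_i = floor(s_i / S * m); integer truncation may leave a few
    # frames unassigned, which simply stay in the free pool.
    total = sum(sizes)
    return [s * m // total for s in sizes]

alloc = proportional_allocation([10, 127], 62)
print(alloc)  # [4, 57] -- compare equal allocation's [31, 31]
```

Equal allocation would waste frames on the small process; priority allocation would use a priority-weighted ratio instead of size in the same formula.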
The document discusses various techniques for memory management in computer systems. It covers the following key points:
- Computer memory is made up of cells that can exist in two states corresponding to bit values. Memory can use different physical properties like electrical charge or magnetism.
- Memory is either volatile, requiring constant power, or non-volatile like hard disks that retain data without power.
- Operating systems use memory management to handle RAM and map logical to physical addresses. Techniques include paging, segmentation, and virtual memory address translation with page tables.
- Memory management aims to utilize memory efficiently and protect processes from one another through allocation schemes, relocation of code, and partitioning of physical memory.
The document provides an overview of pipelining in computer processors. It discusses how pipelining works by dividing processor operations like fetch, decode, execute, memory, and write-back into discrete stages that can overlap, improving throughput. Key points made include:
- Pipelining allows multiple instructions to be in different stages of completion at the same time, improving instruction throughput.
- The document uses an example of a sequential laundry process versus a pipelined laundry process to illustrate how pipelining improves efficiency.
- It describes the five main stages of a RISC instruction pipeline - fetch, decode, execute, memory, and write-back - along with the work done in, and the data passed between, each stage.
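The throughput gain from overlapping stages is just arithmetic, and can be sketched with the standard idealized timing model (the 100-instruction, 5-stage, 1 ns figures are example values, not from the document):

```python
# Idealized timing for n instructions through a k-stage pipeline,
# each stage taking one cycle of length t.
def sequential_time(n, k, t):
    return n * k * t            # each instruction runs start-to-finish alone

def pipelined_time(n, k, t):
    return (k + n - 1) * t      # fill the pipe (k cycles), then one
                                # instruction completes every cycle

n, k, t = 100, 5, 1             # 100 instructions, 5 stages, 1 ns cycles
seq = sequential_time(n, k, t)  # 500 ns
pipe = pipelined_time(n, k, t)  # 104 ns
print(seq, pipe)                # speedup 500/104 ~ 4.8, approaching k
```

This is the same logic as the laundry analogy: washing, drying, and folding different loads simultaneously takes (stages + loads - 1) time slots instead of stages x loads.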
There are several techniques for performing input/output (I/O) in a computer system, including programmed I/O, interrupt-driven I/O, and direct memory access (DMA). Buffering is commonly used to improve I/O efficiency by transferring data between I/O devices and memory in advance of or after requests. The operating system handles I/O through buffering techniques like single, double, and circular buffers to avoid blocking processes waiting for I/O completion.
Pipelining is a technique used in modern processors to improve performance. It allows multiple instructions to be processed simultaneously using different processor components. This increases throughput compared to sequential processing. However, pipeline stalls can occur due to data hazards when instructions depend on each other, instruction hazards from branches or cache misses, or structural hazards when resources are needed simultaneously. Various techniques like forwarding, reordering, and branch prediction aim to reduce the impact of hazards on pipeline performance.
Pipelining is an implementation technique in which multiple instructions are overlapped in execution. The computer pipeline is divided into stages, and each stage completes a part of an instruction in parallel with the others.
Pipeline hazards can be classified into three types: structural hazards caused by hardware resource conflicts, data hazards caused when an instruction depends on the result of a previous instruction, and control hazards from conditional branches. Structural hazards arise from limited hardware resources like register files and memory ports. Data hazards include RAW (read after write), WAW (write after write), and WAR (write after read) hazards, and are resolved by stalling or forwarding. Forwarding minimizes stalls by routing a newly computed value directly to the stage that needs it.
1) Data transfer instructions move data between processor registers and memory without changing the data. Common instructions include load, store, move, exchange, input, and output.
2) Data manipulation instructions perform arithmetic, logical, and bitwise operations on data to provide computational capabilities. Examples include add, subtract, multiply, divide, and, or, xor.
3) Program control instructions alter the program flow by branching, jumping, calling subroutines, handling interrupts, and returning from subroutines. Status bits track results of operations.
This chapter discusses pipelining in computer processors. Pipelining improves processor throughput by allowing multiple instructions to be processed simultaneously across different stages. It involves dividing instruction execution into discrete stages, such as fetch, decode, execute, and writeback. Pipelining can improve performance but also introduces hazards such as data dependencies that require stalling the pipeline. Techniques like forwarding, reordering, and branch prediction help mitigate stalls and improve pipeline utilization.
The document provides an introduction to computer organization and architecture. It defines computer architecture as the attributes visible to the programmer, such as the instruction set, while computer organization refers to how those architectural specifications are implemented internally. The basic functions of a computer are described as data processing, storage, movement, and control. A computer's main components are the central processing unit, main memory, and input/output. The document outlines the fetch-execute cycle of instruction processing and how interrupts can alter the control flow. It also discusses trends driving the need for performance improvements like faster memory.
This document discusses I/O systems, including an overview of I/O hardware, the application I/O interface, the kernel I/O subsystem, and I/O performance. It describes how I/O requests are transformed into hardware operations through techniques like interrupts, DMA, polling, and blocking vs. asynchronous I/O. Specific I/O concepts covered include STREAMS, device characteristics, and data structures used in the kernel I/O subsystem.
The document summarizes the RISC pipeline architecture. It discusses the five stages of the classic RISC pipeline: instruction fetch, instruction decode, execute, memory access, and writeback. Each stage is involved in processing one instruction at a time through the pipeline. The instruction fetch stage retrieves instructions from the instruction cache. The decode stage decodes the instruction and computes branch targets. The execute stage performs arithmetic and logical operations. The memory access stage handles data memory access. Finally, the writeback stage writes results back to registers. The document also discusses hazards like structural, data, and control hazards that can occur in pipelines.
The document discusses peer-to-peer (P2P) content distribution systems like BitTorrent and Spotify. It describes how BitTorrent works by breaking files into pieces that are distributed among peers using a tracker. Peers select rare pieces first and cooperate through a tit-for-tat approach. Spotify's P2P system supplements streaming from servers by also getting pieces from other users' devices to improve performance and scalability.
The document discusses threads and threading models. It defines a thread as a basic unit of CPU utilization. It describes how traditional processes have a single thread, while multiple threads in a process can perform more than one task simultaneously by sharing resources. The main threading models of many-to-one, one-to-one, and many-to-many are summarized. Key thread libraries like pthreads and differences between user and kernel threads are also outlined.
The document discusses various security threats to computer systems, including breaches of confidentiality, integrity, availability, and denial of service. It describes common attack methods like masquerading, replay attacks, and man-in-the-middle attacks. Specific threats discussed in more detail include Trojan horses, trap doors, logic bombs, stack and buffer overflows, and viruses. Security needs to be implemented at the physical, human, operating system, and network levels to be effective.
The document discusses secondary storage and disk drives. It describes how magnetic disks work, including disk platters, tracks, sectors, and cylinders. It explains disk addressing and structures, as well as different types of disk attachment like host-attached storage, network-attached storage, and storage area networks. The document also covers disk scheduling algorithms like first-come first-served, shortest seek time first, SCAN, and C-SCAN that are used to determine the service order of requests to optimize disk performance.
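Two of those scheduling algorithms are easy to compare by total head movement. This sketch uses a commonly cited example request queue with the head starting at cylinder 53 (the queue values are illustrative, not from this document):

```python
def fcfs(start, requests):
    # First-come first-served: service requests in arrival order.
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf(start, requests):
    # Shortest seek time first: always service the nearest pending request.
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
moves_fcfs = fcfs(53, queue)   # 640 cylinders of head movement
moves_sstf = sstf(53, queue)   # 236 cylinders
print(moves_fcfs, moves_sstf)
```

SSTF cuts total movement sharply but can starve far-away requests; SCAN and C-SCAN trade a little extra movement for bounded waiting by sweeping the disk in one direction.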
The Stratosphere Big Data Analytics Platform (Amir Payberah)
The document discusses the Stratosphere platform for distributed data analysis. Stratosphere is built on HDFS and YARN and focuses on ease of programming. It extends the MapReduce model with additional operators and supports complex data flows and iterative algorithms. The execution engine uses a master-worker architecture and provides fault tolerance through backup task deployment and recovery from intermediate results.
The document discusses data intensive computing frameworks. It provides background on big data, how it is stored and processed at scale. Specifically, it discusses distributed file systems like HDFS, databases including relational and NoSQL approaches, and data processing frameworks like MapReduce. It aims to explain challenges in big data and how these tools address issues like scalability, fault tolerance and distributed computation.
The document discusses file systems and file operations. It states that for most users, the file system is the most visible part of an operating system as it provides mechanisms to access data and programs stored on devices. The file system consists of files organized in a directory structure. Files have attributes like name, size, permissions. Common file operations are create, read, write, delete. Information about open files is tracked in tables. The document also discusses file locking, structures, and the stat system call for retrieving file metadata.
The document discusses CPU scheduling in operating systems. It provides background on CPU scheduling, including its purpose in multiprogrammed operating systems. It describes basic concepts like the CPU-I/O burst cycle and CPU burst distributions. It then covers the CPU scheduler, dispatcher, and different criteria used for scheduling like CPU utilization, throughput, turnaround time, waiting time, and response time. Finally, it introduces some common scheduling algorithms like first-come first-served, shortest-job-first, priority scheduling, round-robin, multilevel queue scheduling, and multilevel feedback queue scheduling.
The document discusses virtual memory and demand paging. It motivates virtual memory by explaining how programs can exceed physical memory limits and how virtual memory allows programs to run partially loaded. Demand paging is introduced as a technique where pages are swapped into memory on demand when accessed, rather than pre-loading the entire program. This reduces I/O and allows multiple programs to share memory.
The document discusses the motivation and design of file system implementations. It describes how file systems map the logical structure to physical storage, using various on-disk and in-memory data structures. These include boot blocks, superblocks, directories, inodes/file control blocks, buffer caches, open file tables, and more. Common operations like creating, opening, reading and closing files are also outlined.
Main memory is a large array of bytes that each have their own address. Programs must be loaded into main memory from disk in order to run. The CPU fetches instructions from main memory based on the program counter value. Main memory access takes many CPU cycles, so cache is used to reduce access time. Memory management units map virtual addresses used by programs to physical addresses in main memory through address translation.
The document discusses distributed hash tables (DHTs) and describes the Chord protocol as an example of a DHT. It explains that a DHT distributes a hash table among nodes in a network, allowing items to be stored at specific nodes based on their hash key. The Chord protocol constructs a DHT by having nodes form a ring, assigning each node an ID, and storing each item at the successor node of its hash key on the ring. Lookups are routed around the ring to the item's successor using finger tables to speed up routing. The ring is maintained through periodic stabilization to update successor and predecessor pointers when nodes join or leave.
The document describes the Google File System (GFS). GFS is a distributed file system that runs on top of commodity hardware. It addresses problems with scaling to very large datasets and files by splitting files into large chunks (64MB or 128MB) and replicating chunks across multiple machines. The key components of GFS are the master, which manages metadata and chunk placement, chunkservers, which store chunks, and clients, which access chunks. The master handles operations like namespace management, replica placement, garbage collection and stale replica detection to provide a fault-tolerant filesystem.
The document discusses various techniques for implementing file systems, including on-disk and in-memory data structures. On-disk structures include boot blocks, volume blocks, directory structures, and file control blocks. In-memory structures include mount tables, directory trees, open file tables, and buffered file system blocks. The document also discusses virtual file systems, directory implementations using lists and hashes, allocation methods, and free space management techniques like bit vectors, linked lists, grouping, counting, and space maps.
The document discusses Pregel, a graph-parallel processing platform developed at Google for large-scale graph processing. Pregel is inspired by the bulk synchronous parallel (BSP) model and uses a vertex-centric programming model where computation is viewed as messages passed between graph vertices. In Pregel, applications run as a series of supersteps where vertices can update themselves and pass messages to other vertices, with global synchronization between supersteps. This model is better suited for graph problems compared to more general data-parallel systems.
The document discusses stream processing in the cloud. It describes how new stream processing systems are designed to scale to large numbers of cloud-hosted machines to meet users' expectations of fresh results from big data applications. It also outlines some of the key challenges in elastic data-parallel processing and fault-tolerant processing in cloud environments. The document introduces the SEEP (Stanford Stream Execution Engine for Partitioned-parallelism) system, which aims to build a stream processing system that can scale out while remaining fault-tolerant when queries contain stateful operators by making operator state an external entity managed by the system.
The document discusses the key components of internet infrastructure:
- The TCP/IP model is a 4-layer communication protocol developed by the Department of Defense in the 1960s. It includes the host-to-network layer for physical transmission, the internet layer for logical data transmission using IP, the transport layer for reliable delivery using TCP or UDP, and the application layer for user-level protocols.
- Common application layer protocols are HTTP, FTP, SMTP, DNS, Telnet, and SNMP. The transport layer ensures reliable or unreliable delivery. The host-to-network layer defines physical transmission standards like Ethernet and Frame Relay.
The document provides an overview of the OSI model and TCP/IP networking model. It describes the seven layers of the OSI model from the physical layer to the application layer and their responsibilities in networking. It also discusses the four layers of the TCP/IP model and compares it to the OSI model. Key protocols like TCP, UDP, IP, Ethernet, and HTTP are explained in their respective layers along with functions like encapsulation and data flow between layers. Network analysis tools like Wireshark are also mentioned.
Presentation on TCP/IP protocols and data communications - AnyapuPranav
The document provides an overview of the TCP/IP protocol architecture. It discusses the five layers of TCP/IP including the physical, network access, internet, transport, and application layers. It describes the protocols used at each layer, such as IP, TCP, UDP, HTTP, and FTP. The document also discusses how data is encapsulated as it passes through each layer of the TCP/IP model and is transmitted from one host to another across networks and the internet.
The document discusses TCP/IP (Transmission Control Protocol/Internet Protocol), which is a suite of communication protocols used to connect devices on the internet and private networks. It describes the history of TCP/IP's development by DARPA in the 1970s and its use in Unix operating systems. The document outlines the importance, uses, layers, and basic functioning of TCP/IP.
The document discusses computer network layers based on the OSI and TCP/IP models. It describes the seven layers of the OSI model from the physical layer to the application layer. It also covers the four layers of the TCP/IP model from the network interface layer to the application layer. The document compares the OSI and TCP/IP models and their differences, such as the OSI model distinguishing services, interfaces, and protocols while the TCP/IP model focuses more on existing protocols.
This document summarizes key aspects of protocol architecture, TCP/IP, and internet-based applications. It discusses the need for a protocol architecture to break communication tasks into layers. It then describes the layered TCP/IP protocol architecture and its components, including the physical, network access, internet, transport, and application layers. It also summarizes TCP and IP addressing requirements and operation, as well as standard TCP/IP applications like SMTP, FTP, and Telnet. Finally, it contrasts traditional data-based applications with newer multimedia applications involving large amounts of real-time audio and video data.
This document discusses various tools that can be used for network troubleshooting. It describes command line tools like ping and traceroute that provide basic network reachability information. It also discusses using the command line or web interfaces of network devices to check metrics like packet counts, errors, and CPU utilization. Protocol analyzers like Wireshark are mentioned as tools to analyze packets and protocols. SNMP tools that monitor network elements using SNMP are also discussed. Specialized tools like NetFlow that provide traffic statistics are covered. The document provides a high-level overview of different classes of tools available for network troubleshooting.
This document provides an overview of computer networking concepts including the OSI and TCP/IP models. It describes the seven layers of the OSI model from physical to application layer and their responsibilities. It also summarizes the four layers of the TCP/IP model from network interface to application layer. The document compares the two models and explains that while they cover similar topics, the TCP/IP model does so with fewer layers and is more practical for locating specific protocols.
The document discusses protocol architectures, including TCP/IP and OSI models. It covers the need for protocol architectures to break communication tasks into subtasks implemented in protocol stacks. TCP/IP is described as having application, transport, internet, and network access layers. It also discusses traditional applications like email and FTP compared to emerging real-time multimedia applications with different network requirements.
The document provides an overview of TCP/IP (Transmission Control Protocol/Internet Protocol), which is the set of protocols used for communication on the internet and local area networks. It describes the layers of the TCP/IP model including the physical, data link, network, transport, and application layers. It also discusses TCP/IP addressing using IPv4 and IPv6 addresses, routing protocols, and the roles of core protocols like TCP, UDP, and IP in reliable data transmission.
The document provides an overview of TCP/IP (Transmission Control Protocol/Internet Protocol). It discusses the history and development of TCP/IP, how it relates to the OSI model, IP addressing, TCP and UDP protocols, and how connections are established and torn down using TCP. Key points covered include TCP/IP covering the network and transport layers, IP providing unreliable datagram delivery, TCP providing reliable byte-stream delivery, and the three-way handshake used to open TCP connections.
The document discusses TCP/IP and the OSI model. It provides details on:
- TCP/IP, a set of rules (protocols) used with IP to send data between computers over the Internet; IP handles delivery while TCP tracks the data transmission.
- The 7-layer OSI model with layers grouped into physical/data link, network/transport, and application/presentation/session. Layers define communication details and encapsulation/decapsulation of data.
- Common data units including segments, packets, datagrams, frames, cells, and bits/bytes. Encapsulation adds headers at each layer.
- Other topics covered include IP addressing, domain name servers, URLs, wireless networks, Wi-Fi, WiMax
A PowerPoint presentation on the OSI model, covering all its topics.
The document discusses computer networks and network protocols. It begins with an introduction to network protocols and the Internet protocols. It then provides definitions and explanations of communication protocols, including addressing, transmission modes, and error detection/recovery techniques. It lists and describes common network protocols like TCP/IP, routing protocols, FTP, SMTP, and more. It also discusses the OSI model layers, TCP/IP protocol suite, data encapsulation, protocol data units, protocol assignments to layers, and addresses at each layer.
The TCP/IP model was developed by DARPA in the late 1970s and defines the protocols used for network communication on the internet. It has four layers - the lowest is the host to network layer which connects hosts to different networks using various protocols. Above this is the internet layer which allows data packets to be routed independently to their destination using the Internet Protocol. The transport layer segments messages and uses protocols like TCP and UDP. The highest application layer provides services that applications use for functions like file transfer, email, and web browsing.
The document discusses network reference models and the OSI and TCP/IP models. It provides details on each layer of the OSI model and its functions. The key points are that reference models divide network communication into simpler components, provide standardization, and prevent changes in one layer from affecting others. The OSI model has 7 layers and separates network functions into upper layers for applications and lower layers for data transmission. The TCP/IP model is based on widely used TCP and IP protocols.
TCP and UDP are transport layer protocols that package and deliver data between applications. TCP provides reliable, ordered delivery through connection establishment and packet sequencing. UDP provides faster, unreliable datagram delivery without connections. Common applications using TCP include HTTP, FTP, and SMTP. Common UDP applications include DNS, DHCP, and streaming media.
Internet Technology Lectures
network protocols, TCP/IP Model
Lecturer: Saman M. Almufti / Kurdistan Region, Nawroz University
facebook: https://www.facebook.com/saman.malmufti
YouTube Link: https://youtu.be/JgbAWAc0fDs
Application and Transport Layer - a practical approach - Sarah R. Dowlath
This presentation was done for a Networking course. It really shows from a more practical standpoint how the application layer and the transport layer communicates with each other and operates on a whole to get the job done. It gives the reader more insight of how the pieces come together in an IT networking world.
Process Management - Part3
1. Processes (Part III)
Amir H. Payberah
amir@sics.se
Amirkabir University of Technology
(Tehran Polytechnic)
Amir H. Payberah (Tehran Polytechnic) Processes 1393/7/7 1 / 50
7. Internetworking
An internetwork (internet, with a lowercase i) is a network of computer networks.
Subnetwork refers to one of the networks composing an internet.
An internet aims to hide the details of different physical networks, to present a unified network architecture.
9. The Internet
TCP/IP has become the dominant protocol suite for internetworking.
The Internet (with an uppercase I) refers to the TCP/IP internet that connects millions of computers globally.
The first widespread implementation of TCP/IP appeared with 4.2BSD in 1983.
12. Networking Protocols and Layers
A networking protocol is a set of rules defining how information is to be transmitted across a network.
Networking protocols are generally organized as a series of layers, each layer building on the layer below it to add features that are made available to higher layers.
Transparency: each protocol layer shields higher layers from the operation and complexity of lower layers.
16. TCP/IP Protocol Suite
The TCP/IP protocol suite is a layered networking protocol.
17. TCP/IP Protocol Layers
Data-Link layer
Network layer (IP)
Transport layer (TCP, UDP)
Application layer
18. Encapsulation
Encapsulation: the information passed from a higher layer to a lower layer is treated as opaque data by the lower layer.
• The lower layer does not interpret information from the upper layer.
When data is passed up from a lower layer to a higher layer, a converse unpacking process takes place.
20. Data-Link Layer (1/3)
It is concerned with transferring data across a physical link in a network.
It consists of the device driver and the hardware interface (network card) to the underlying physical medium, e.g., fiber-optic cable.
22. Data-Link Layer (2/3)
The data-link layer encapsulates datagrams from the network layer into units called frames.
It also adds to each frame a header containing the destination address and frame size.
The data-link layer transmits the frames across the physical link and handles acknowledgements from the receiver.
25. Data-Link Layer (3/3)
From an application-programming point of view, we can generally ignore the data-link layer, since all communication details are handled in the driver and hardware.
Maximum Transmission Unit (MTU): the upper limit that the layer places on the size of a frame.
• Different data-link layers have different MTUs.
netstat -i
27. Network Layer (1/4)
It is concerned with delivering data from the source host to the destination host.
Its tasks include:
• Breaking data into fragments small enough for transmission via the data-link layer.
• Routing data across the internet.
• Providing services to the transport layer.
In the TCP/IP protocol suite, the principal protocol in the network layer is IP.
30. Network Layer (2/4)
IP transmits data in the form of packets.
Amir H. Payberah (Tehran Polytechnic) Processes 1393/7/7 16 / 50
31. Network Layer (2/4)
IP transmits data in the form of packets.
Each packet sent between two hosts travels independently across
the network.
Amir H. Payberah (Tehran Polytechnic) Processes 1393/7/7 16 / 50
32. Network Layer (2/4)
IP transmits data in the form of packets.
Each packet sent between two hosts travels independently across
the network.
An IP packet includes a header that contains the address of the
source and target hosts.
34. Network Layer (3/4)
IP is a connectionless protocol: it does not provide a virtual circuit
connecting two hosts.
IP is an unreliable protocol: it makes a best effort to transmit datagrams from the sender to the receiver, but it does not guarantee:
• that packets will arrive in the order they were transmitted,
• that they will not be duplicated,
• that they will arrive at all.
37. Network Layer (4/4)
An IP address consists of two parts:
• Network ID: specifies the network on which a host resides.
• Host ID: identifies the host within that network.
An IPv4 address consists of 32 bits: 204.152.189.0/24
• The loopback address 127.0.0.1 refers to the system on which the process is running.
Network mask: a sequence of 1s in the leftmost bits, followed by a sequence of 0s.
• The 1s indicate which part of the address contains the assigned
network ID.
• The 0s indicate which part of the address is available to assign as
host IDs.
39. Transport Layer (1/5)
Transport protocol provides an end-to-end communication service
to applications residing on different hosts.
Two widely used transport-layer protocols in the TCP/IP suite:
• User Datagram Protocol (UDP): the protocol used for datagram
sockets.
• Transmission Control Protocol (TCP): the protocol used for stream
sockets.
42. Transport Layer (2/5)
Port: a method of differentiating the applications on a host.
• 16-bit number
• Ports below 1024 are well known and used for standard services, e.g., http: 80, ssh: 22.
• Written in address:port form, e.g., 192.168.1.1:8080.
45. Transport Layer (3/5)
UDP, like IP, is connectionless and unreliable.
If an application layered on top of UDP requires reliability, then this
must be implemented within the application.
UDP adds just two features to IP:
• Port number
• Data checksum to allow the detection of errors in the transmitted
data.
[http://www.tamos.net/~rhay/overhead/ip-packet-overhead.htm]
47. Transport Layer (4/5)
TCP provides a reliable, connection-oriented, bidirectional, byte-stream communication channel between two endpoints.
Before communication can commence, TCP establishes a communication channel between the two endpoints.
50. Transport Layer (5/5)
In TCP, data is broken into segments: each is transmitted in a single
IP packet.
When a destination receives a TCP segment, it sends an acknowledgement to the sender, indicating whether it received the segment correctly or not.
Other features of TCP:
• Sequencing
• Flow control
• Congestion control
51. OK, Let’s Get Back to Sockets
53. Socket
A socket is defined as an endpoint for communication.
A typical client-server scenario:
• Each process creates a socket: both processes require one.
• The server binds its socket to a well-known address (name) so that
clients can locate it.
54. Creating a Socket
socket() creates a new socket.
#include <sys/socket.h>
int socket(int domain, int type, int protocol);
56. Socket Domains
The UNIX domain (AF_UNIX)
• Communication between processes on the same host (within the
kernel).
• Address format: path name.
The IPv4 domain (AF_INET)
• Communication between processes running on hosts connected via
an IPv4 network.
• Address format: 32-bit IPv4 address + 16-bit port number.
int socket(int domain, int type, int protocol);
58. Socket Types
Stream sockets (SOCK_STREAM)
• It provides a reliable, bidirectional, byte-stream communication
channel.
• Called connection-oriented.
Datagram sockets (SOCK_DGRAM)
• Allow data to be exchanged in the form of messages called
datagrams.
• Called connectionless.
int socket(int domain, int type, int protocol);
59. Binding a Socket to an Address
bind() binds a socket to an address.
#include <sys/socket.h>
int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
60. Listening for Incoming Connections
listen() marks the stream socket as passive.
The socket will subsequently be used to accept connections from
other (active) sockets.
#include <sys/socket.h>
int listen(int sockfd, int backlog);
61. Accepting a Connection
accept() accepts an incoming connection on the listening stream
socket.
If there are no pending connections when accept() is called, the
call blocks until a connection request arrives.
#include <sys/socket.h>
int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);
62. Connecting to a Peer Socket
connect() connects the active socket to the listening socket whose
address is specified by addr and addrlen.
#include <sys/socket.h>
int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
74. Signals (1/2)
Signals are software interrupts to notify a process that a particular
event has occurred.
These events can originate from outside the process, e.g., pressing Ctrl-C, or from the process itself, e.g., executing code that divides by zero.
As a primitive form of IPC, one process can also send a signal to
another process.
ps aux | grep acrobat
amir 7302 0.0 0.0 ...
kill -9 7302
76. Signals (2/2)
A signal handler is used to process signals.
1. A signal is generated by a particular event.
2. The signal is delivered to a process.
3. The signal is handled by one of two signal handlers: default or user-defined.
Every signal has a default handler that the kernel runs when handling the signal.
A user-defined signal handler can override the default.
78. Signal Management
signal() removes the current action taken on receipt of the signal signo and instead handles the signal with the signal handler specified by handler.
#include <signal.h>
typedef void (*sighandler_t)(int);
sighandler_t signal(int signo, sighandler_t handler);
79. Waiting for a Signal
pause() puts a process to sleep until it receives a signal.
#include <unistd.h>
int pause(void);
80. Signal Example
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

// handler for SIGINT
static void sigint_handler(int signo) {
    printf("Caught SIGINT!\n");
    exit(0);
}

int main(void) {
    // Register sigint_handler as our signal handler for SIGINT.
    if (signal(SIGINT, sigint_handler) == SIG_ERR)
        exit(1);
    pause();
    return 0;
}