The document discusses different methods of inter-process communication (IPC) in Unix systems. It describes process tracing which allows a debugger process to control the execution of a traced process using the ptrace system call. It also describes the three main IPC mechanisms in Unix - messages, shared memory, and semaphores. Messages allow processes to exchange data streams using message queues. Shared memory allows processes to share parts of virtual memory. Semaphores allow processes to synchronize using integer values.
The document discusses Jugaad, a proof-of-concept toolkit that demonstrates code injection on Linux systems similar to CreateRemoteThread on Windows. It does this using the ptrace system call to attach to a process, read/write its memory, and inject shellcode that allocates memory and creates a thread to execute arbitrary code within the target process context. First, it explains how ptrace can be used to manipulate another process. Then it describes how Jugaad uses these ptrace capabilities to meet the requirements of allocating memory, creating a thread, and executing payload code inside the target process.
REs and Python: Regular Expressions, Sequence Characters in Regular Expressions, Quantifiers in Regular Expressions, Special Characters in Regular Expressions, Using Regular Expressions on Files, Retrieving Information from a HTML File
Threading : Concurrent Programming and GIL, Uses of Threads, Creating Threads in Python, Thread Class Methods, Single Tasking using a Thread, Multitasking using Multiple Threads, Thread Synchronization Deadlock of Threads, Avoiding Deadlocks in a Program, Communication between Threads, Thread Communication using notify() and wait() Methods, Thread Communication using a Queue, Daemon Threads
The document discusses trap handling in Linux, focusing on system calls. It begins with background on interrupts, traps, and system calls. It then describes the function call flow from start_kernel() and initialization of the Interrupt Descriptor Table (IDT). Next, it covers system call entry and initialization of the system call table. Finally, it discusses the system call procedure from a user application using glibc and the Linux kernel. Key topics include IDT structure and gates, MSR register usage for system calls, fast vs slow system call paths, and how system calls are invoked and handled in the kernel.
Threads Advance in System Administration with Linux, by Soumen Santra
Process Descriptor Handling
Kernel Stack
Pid_hash Table and Chained Lists
PID Hash Table Handling Functions and Macros
Wait Queues
Process Resource Limits
Task State Segment
System Calls
Pthread Operations
POSIX threads on GNU/Linux
Programs on Thread in C
POSIX is a standard for maintaining compatibility between operating systems. It defines APIs for threads, mutexes, semaphores, condition variables, and message queues to enable processes and threads to synchronize access to shared resources and communicate. The document provides details on the POSIX interfaces for thread management, mutexes, condition variables, semaphores, and message queues. It includes code examples demonstrating their use for inter-process synchronization and communication using shared memory.
The document provides an overview of the C programming language. It states that C was developed in 1972 by Dennis Ritchie at Bell Labs and was used to develop the UNIX operating system. The document then covers various features of C like it being a mid-level programming language, having structured programming, pointers, loops, functions, arrays, and more. It provides examples to explain concepts like input/output functions, data types, operators, control structures, and pointers.
The Linux kernel acts as an interface between applications and hardware, managing system resources and providing access through system calls. It uses a monolithic design in which all components run in kernel space, which gives high performance but can make the kernel difficult to maintain; Linux mitigates this by allowing modules to be loaded dynamically. Device drivers interface hardware such as keyboards, hard disks, and network devices with the operating system, and are implemented as loadable kernel modules that are compiled using special makefiles and registered with the kernel through system calls.
Program Assignment: Process Management (wkyra78)
Objective: This program assignment is given in the Operating Systems course to let students figure out how a single process (the parent) creates a child process, and how the two behave in a Unix/Linux (or Mac OS X/Windows) environment. Additionally, students should incorporate inter-process communication into this assignment: the parent and child processes interact with each other through either a shared-memory or a message-passing scheme.
Environment: Unix/Linux environment (VM Linux or Triton Server, or Mac OS X), Windows platform
Language: C or C++, Java
Requirements:
i. You have a wide range of choices for this assignment. First, design your program to demonstrate the basic concepts of process management in the Unix kernel. This main idea will then be extended to show your understanding of inter-process communication, file processing, etc.
ii. Refer to the following system calls:
- fork(), getpid(), family of exec(), wait(), sleep() system calls for process management
- shmget(), shmat(), shmdt(), shmctl() for shared memory support or
- msgget(), msgsnd(), msgrcv(), msgctl(), etc. for message passing support
iii. The program should demonstrate that the two processes, parent and child, each execute as intended.
iv. The output should include a screen capture of the program's execution.
v. Interaction between parent and child processes can be provided through inter-process communication schemes, such as shared-memory or message passing schemes.
vi. The result should be organized as a document containing an overview of your program, the code, execution results, and a conclusion including a justification of your program, lessons you've learned, comments, etc.
Note:
i. In addition, try to understand how local and global variables behave across the processes.
ii. Use the read() and write() functions to understand how they work across the different processes.
iii. For extra credit, you can also incorporate advanced features, like socket or thread functions, into your code.
Examples:
1. Process Creation and IPC with Shared Memory Scheme
=============================================================
#include <stdio.h>
#include <sys/shm.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    pid_t pid;
    int segment_id;          // identifier of the shared memory segment
    char *shared_memory;     // pointer to the attached segment
    const int size = 4096;

    segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
    shared_memory = (char *) shmat(segment_id, NULL, 0);
    pid = fork();
    if (pid < 0) {           // error
        fprintf(stderr, "Fork failed\n");
        return 1;
    }
    else if (pid == 0) {     // child process
        char *child_shared_memory;
        child_shared_memory = (char *) shmat(segment_id, NULL, 0);  // attach the segment
        sprintf(child_shared_memory, "Hello parent process!");      // write to shared memory
        shmdt(child_shared_memory);
    }
    else ...
The document discusses the simulation of a Triple Data Encryption Standard (Triple DES) circuit using VHDL. It provides background on Triple DES, describes the design and structure of the Triple DES circuit in VHDL, and presents the results of testing the encryption and decryption functions of the circuit through simulation. Testing showed the circuit correctly performed encryption and decryption on input data using the Triple DES algorithm. The design utilized some FPGA resources but would require a clock generator and RAM for implementation on an actual FPGA board.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The document discusses a software development team called Team-9 at SoftCorp that is developing a mobile email client application. The team wants to develop an email client that supports multimedia emails to address a limitation in their competitor's product, and covers topics like identifying security requirements and vulnerabilities. The team finds issues in code fragments that could lead to security threats and proposes solutions to address them. Design patterns like thin client and protected process are identified that could enhance security for the product.
This document discusses various inter-process communication (IPC) types including shared memory, mapped memory, pipes, FIFOs, message queues, sockets, and signals. Shared memory allows processes to directly read and write to the same region of memory, requiring synchronization between processes. Mapped memory permits processes to communicate by mapping the same file into memory. Pipes and FIFOs allow for sequential data transfer between related and unrelated processes. Message queues provide a way for processes to exchange messages via a common queue. Signals are used to asynchronously notify processes of events.
Contiki introduction II: from what to how, by Dingxin Xu
The document discusses the Contiki operating system framework, including how it uses processes and events for scheduling work, inter-process communication using event posting, and how modules like Rime and the TDMA MAC layer separate protocol logic from header construction and buffer management for flexible networking implementations. Key data structures include a process list and event queue that the kernel uses to schedule work across asynchronous processes.
MPI is a language-independent communications protocol used to program parallel computers. It allows processes to communicate and synchronize through point-to-point and collective communication primitives. The document discusses how to install MPI on Linux clusters, describes common MPI functions for communication (e.g. MPI_Send, MPI_Recv) and collective operations (e.g. MPI_Bcast, MPI_Reduce). It also provides an example MPI program to demonstrate basic point-to-point communication between two processes.
This document discusses various methods of interprocess communication (IPC) supported on UNIX systems, including pipes, FIFOs, message queues, semaphores, and shared memory. It provides details on how each method works, such as how processes can create and access pipes, FIFOs, and shared memory segments. It also describes the key system calls used to implement IPC, such as pipe, mkfifo, msgget, semget, and shmget.
The Raspberry Pi is a series of credit card-sized single-board computers developed in the UK by the Raspberry Pi Foundation with the intention of promoting the teaching of basic computer science in schools.
The original Raspberry Pi and Raspberry Pi 2 are manufactured in several board configurations through licensed manufacturing agreements with Newark element14 (Premier Farnell), RS Components and Egoman. These companies sell the Raspberry Pi online. Egoman produces a version for distribution solely in China and Taiwan, which can be distinguished from other Pis by their red colouring and lack of FCC/CE marks. The hardware is the same across all manufacturers.
The original Raspberry Pi is based on the Broadcom BCM2835 system on a chip (SoC), which includes an ARM1176JZF-S 700 MHz processor, VideoCore IV GPU, and was originally shipped with 256 megabytes of RAM, later upgraded (models B and B+) to 512 MB. The system has Secure Digital (SD) (models A and B) or MicroSD (models A+ and B+) sockets for boot media and persistent storage.
The document discusses arrays, strings, and functions in C programming. It begins by explaining how to initialize and access 2D arrays, including examples of declaring and initializing a 2D integer array and adding elements of two 2D arrays. It also covers initializing and accessing multidimensional arrays. The document then discusses string basics like declaration and initialization of character arrays that represent strings. It explains various string functions like strlen(), strcat(), strcmp(). Finally, it covers functions in C including declaration, definition, call by value vs reference, and passing arrays to functions.
Message queues allow messages to be passed from one process to another. There can be multiple writers to the queue as well as multiple readers.
For Script:
https://docs.google.com/document/d/1JIdvzZdoV7jlr3cOTx9GYhDhGpt2sv9HcYWaf_dc2NA/edit?usp=sharing
Disk and terminal drivers allow processes to communicate with peripheral devices like disks, terminals, printers and networks. A disk driver translates file system addresses to specific disk sectors. A terminal driver controls data transmission to and from terminals and processes special characters through line disciplines. Device drivers are essential for all input/output in the UNIX operating system.
The document outlines the schedule and objectives for an operating systems lab course over 10 weeks. The first few weeks focus on writing programs using Unix system calls like fork, exec, wait. Later weeks involve implementing I/O system calls, simulating commands like ls and grep, and scheduling algorithms like FCFS, SJF, priority and round robin. Students are asked to display Gantt charts, compute waiting times and turnaround times for each algorithm. The final weeks cover inter-process communication, the producer-consumer problem, and memory management techniques.
Post Exploitation Bliss: Loading Meterpreter on a Factory iPhone, Black Hat U..., by Vincenzo Iozzo
Charlie Miller and Vincenzo Iozzo presented techniques for post-exploitation on the iPhone 2 including:
1. Running arbitrary shellcode by overwriting memory protections and calling vm_protect to mark pages as read/write/executable.
2. Loading an unsigned dynamic library called Meterpreter by mapping it over an existing signed library, patching dyld to ignore code signing, and forcing the unloading of linked libraries.
3. Adding new functionality to Meterpreter, such as a module to vibrate and play a sound on the iPhone, demonstrating how payloads can be extended once loaded into memory.
System calls allow programs to request services from the operating system kernel. Common system calls include read and write for file access, fork for creating new processes, and wait for a parent process to wait for a child to complete. A system call proceeds in steps: the program pushes parameters onto the stack and calls a library procedure, which places the system call number in a register; a trap instruction switches the processor to kernel mode; the kernel dispatches the call to the appropriate handler; the handler runs; and control returns to the user program once the operation completes.
The document describes experiments conducted on digital logic circuits using VHDL. It includes summaries of experiments on multiplexers, logic gates, demultiplexers, half adders, full adders, half subtractors, full subtractors, SR latches, and SR clocked latches. Code snippets in VHDL are provided for each circuit along with truth tables and conclusions.
Iaetsd implementation of the Aho-Corasick algorithm, by Iaetsd
The document discusses implementing the Aho-Corasick string matching algorithm for network intrusion detection systems using ternary content-addressable memory (TCAM). It begins with an overview of how network intrusion detection systems use pattern matching to detect attacks and discusses challenges with software-based approaches. It then provides details on the Aho-Corasick algorithm and how it constructs a deterministic finite automaton to match multiple patterns simultaneously. The document proposes using TCAM to implement the Aho-Corasick automaton in hardware for high-speed pattern matching. It describes how TCAM allows parallel searching and compares implementing the automaton as a deterministic or nondeterministic finite automaton in TCAM.
This document provides an outline for a lecture on software security. It introduces the lecturer, Roman Oliynykov, and covers various topics related to software vulnerabilities like buffer overflows, heap overflows, integer overflows, and format string vulnerabilities. It provides examples of vulnerable code and exploits, and recommendations for writing more secure code to avoid these vulnerabilities.
Similar to Unit_ 5.3 Interprocess communication.pdf (20)
- The Unix operating system originated from the MULTICS project in the 1960s. Bell Labs researchers Ken Thompson and Dennis Ritchie developed the first version of Unix in 1969.
- Unix became popular due to its portability, simplicity, and ability to support complex programs built from simpler ones. It introduced a hierarchical file system, consistent file format, and multi-user capabilities.
- Over time, Unix evolved into two main versions - System V (SVR) and Berkeley Software Distribution (BSD). Linux was later developed as a Unix-like system and shares many similarities with SVR.
A Visual Guide to 1 Samuel | A Tale of Two HeartsSteve Thomason
ย
These slides walk through the story of 1 Samuel. Samuel is the last judge of Israel. The people reject God and want a king. Saul is anointed as the first king, but he is not a good king. David, the shepherd boy is anointed and Saul is envious of him. David shows honor while Saul continues to self destruct.
How Barcodes Can Be Leveraged Within Odoo 17Celine George
ย
In this presentation, we will explore how barcodes can be leveraged within Odoo 17 to streamline our manufacturing processes. We will cover the configuration steps, how to utilize barcodes in different manufacturing scenarios, and the overall benefits of implementing this technology.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
ย
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
ย
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
ย
(๐๐๐ ๐๐๐) (๐๐๐ฌ๐ฌ๐จ๐ง ๐)-๐๐ซ๐๐ฅ๐ข๐ฆ๐ฌ
๐๐ข๐ฌ๐๐ฎ๐ฌ๐ฌ ๐ญ๐ก๐ ๐๐๐ ๐๐ฎ๐ซ๐ซ๐ข๐๐ฎ๐ฅ๐ฎ๐ฆ ๐ข๐ง ๐ญ๐ก๐ ๐๐ก๐ข๐ฅ๐ข๐ฉ๐ฉ๐ข๐ง๐๐ฌ:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
๐๐ฑ๐ฉ๐ฅ๐๐ข๐ง ๐ญ๐ก๐ ๐๐๐ญ๐ฎ๐ซ๐ ๐๐ง๐ ๐๐๐จ๐ฉ๐ ๐จ๐ ๐๐ง ๐๐ง๐ญ๐ซ๐๐ฉ๐ซ๐๐ง๐๐ฎ๐ซ:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
ย
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
ย
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
Pengantar Penggunaan Flutter - Dart programming language1.pptx
ย
Unit_ 5.3 Interprocess communication.pdf
Interprocess Communication
Processes have to communicate to exchange data and to synchronize operations. Several forms
of interprocess communication already exist: pipes, named pipes, and signals, but each has
limitations.
Process Tracing
The UNIX system provides a primitive form of interprocess communication for tracing processes,
useful for debugging. A debugger process, such as sdb, spawns a process to be traced and
controls its execution with the ptrace system call, setting and clearing break points, and reading
and writing data in its virtual address space. Process tracing consists of synchronization of the
debugger process and the traced process and controlling the execution of the traced process.
The pseudo-code shows the typical structure of a debugger program:
if ((pid = fork()) == 0)
{
// child - traced process
ptrace(0, 0, 0, 0);
exec("name of traced process here");
}
// debugger process continues here
for (;;)
{
wait((int *) 0);
read(input for tracing instructions);
ptrace(cmd, pid, ...);
if (quitting trace)
break;
}
When ptrace is invoked, the kernel sets a trace bit in the child process table entry.
ptrace (cmd, pid, addr, data);
where cmd specifies various commands such as reading/writing data, resuming execution and so
on, pid is the process ID of the traced process, addr is the virtual address to be read or written in
the child process, and data is an integer value to be written. When executing the ptrace system
call, the kernel verifies that the debugger has a child whose ID is pid and that the child is in the
traced state and then uses a global trace data structure to transfer data between the two
processes. It locks the trace data structure to prevent other tracing processes from overwriting it,
copies cmd, addr, and data into the data structure, wakes up the child process and puts it into
the "ready-to-run" state, then sleeps until the child responds. When the child resumes execution
(in kernel mode), it does the appropriate trace command, writes its reply into the trace data
structure, then awakens the debugger.
Consider the following two programs:
int data[32];
main()
{
int i;
for (i = 0; i < 32; i++)
printf("data[%d] = %d\n", i, data[i]);
printf("ptrace data addr 0x%x\n", data);
}
#define TR_SETUP 0
#define TR_WRITE 5
#define TR_RESUME 7
int addr;
main(argc, argv)
int argc;
char *argv[];
{
int i, pid;
sscanf(argv[1], "%x", &addr);
if ((pid = fork()) == 0)
{
ptrace(TR_SETUP, 0, 0, 0);
execl("trace", "trace", 0);
exit();
}
for (i = 0; i < 32; i++)
{
wait((int*) 0);
// write value of i into address addr in proc pid
if (ptrace(TR_WRITE, pid, addr, i) == -1)
exit();
addr += sizeof(int);
}
// traced process should resume execution
ptrace(TR_RESUME, pid, 1, 0);
}
The first program is called trace and the second is called debug. Running trace at a terminal, the
array values for data will be 0. Running debug with a parameter equal to the value printed out
by trace, debug saves the parameter in addr, creates a child process that invokes ptrace to make
itself eligible for tracing, and execs trace. The kernel sends the child process a SIGTRAP signal at
the end of exec, and trace enters the trace state, waiting for command from debug. If debug had
been sleeping in wait, it wakes up, finds the traced child process, and returns
from wait. debug then calls ptrace, writes the value of loop variable i into the data space
of trace at address addr, and increments addr. In trace, addr is an address of an entry in the
array data.
Drawbacks of ptrace:
 Four context switches are required to transfer a word of data between a debugger and a
traced process. The kernel switches context in the debugger in the ptrace call until the
traced process replies to a query, switches context to and from the traced process, and
switches context back to the debugger process with the answer to the ptrace call.
 The debugger can trace only its children, not grandchildren (and subsequent children).
 A debugger cannot trace a process that is already executing if the debugged process
had not called ptrace. The process must be killed and restarted in debug mode.
 It is impossible to trace setuid programs, because users could violate security by writing
to their address space via ptrace and doing illegal operations. exec ignores the setuid bit
if the process is traced, to prevent a user from overwriting the address space of
a setuid program.
System V IPC
The UNIX System V IPC package consists of three mechanisms:
1. Messages allow processes to send formatted data streams to arbitrary processes.
2. Shared memory allows processes to share parts of their virtual address space.
3. Semaphores allow processes to synchronize execution.
They share common properties:
 Each mechanism contains a table whose entries describe all instances of the mechanism.
 Each entry contains a numeric key, which is its user-chosen name.
 Each mechanism contains a "get" system call to create a new entry or to retrieve an
existing one, and the parameters to the calls include a key and flags.
 The kernel uses the following formula to find the index into the table of data structures
from the descriptor: index = descriptor modulo (number of entries in table).
 Each entry has a permissions structure that includes the user ID and group ID of the
process that created the entry, a user and group ID set by the "control" system call
(studied below), and a set of read-write-execute permissions for user, group, and others,
similar to the file permission modes.
 Each entry contains other information, such as the process ID of the last process to
update the entry, and the time of last access or update.
 Each mechanism contains a "control" system call to query the status of an entry, to set
status information, or to remove the entry from the system.
Messages
There are four system calls for messages:
1. msgget returns (and possibly creates) a message descriptor.
2. msgctl has options to set and return parameters associated with a message descriptor
and an option to remove descriptors.
3. msgsnd sends a message.
4. msgrcv receives a message.
Syntax of msgget:
msgqid = msgget(key, flag);
where msgqid is the descriptor returned by the call, and key and flag have the semantics
described above for the general "get" calls. The kernel stores messages on a linked list (queue)
per descriptor, and it uses msgqid as an index into an array of message queue headers. The
queue structure contains the following fields, in addition to the common fields:
 Pointers to the first and last messages on a linked list.
 The number of messages and total number of data bytes on the linked list.
 The maximum number of bytes of data that can be on the linked list.
 The process IDs of the last processes to send and receive messages.
 Time stamps of the last msgsnd, msgrcv, and msgctl operations.
Syntax of msgsnd:
msgsnd(msgqid, msg, count, flag);
flag describes the action the kernel should take if it runs out of internal buffer space. The
algorithm is given below:
/* Algorithm: msgsnd
* Input: message queue descriptor
* address of message structure
* size of message
* flags
 * Output: number of bytes sent
*/
{
check legality of descriptor, permissions;
while (not enough space to store message)
{
if (flags specify not to wait)
return;
sleep(event: enough space is available);
}
get message header;
read message text from user space to kernel;
adjust data structures: enqueue message header,
message header points to data, counts,
time stamps, process ID;
wakeup all processes waiting to read message from queue;
}
The diagram (not reproduced here) shows the structure of message queues.
Syntax for msgrcv:
count = msgrcv(id, msg, maxcount, type, flag);
The algorithm is given below:
/* Algorithm: msgrcv
* Input: message descriptor
* address of data array for incoming message
* size of data array
* requested message type
* flags
* Output: number of bytes in returned message
*/
{
check permissions;
loop:
check legality of message descriptor;
// find message to return to user
if (requested message type == 0)
consider first message on queue;
else if (requested message type > 0)
consider first message on queue with given type;
else // requested message type < 0
consider first of the lowest typed messages on queue,
such that its type is <= absolute value of requested type;
if (there is a message)
{
adjust message size or return error if user size too small;
copy message type, text from kernel space to user space;
unlink message from queue;
return;
}
// no message
if (flags specify not to sleep)
return with error;
sleep(event: message arrives on queue);
goto loop;
}
If processes were waiting to send messages because there was no room on the list, the kernel
awakens them after it removes a message from the message queue. If a message is bigger
than maxcount, the kernel returns an error for the system call and leaves the message on the queue.
If a process ignores the size constraints (MSG_NOERROR bit is set in flag), the kernel truncates
the message, returns the requested number of bytes, and removes the entire message from the
list.
If the type is a positive integer, the kernel returns the first message of the given type. If it is
negative, the kernel finds the lowest type of all messages on the queue, provided it is less than or
equal to the absolute value of the type, and returns the first message of that type. For example,
if a queue contains three messages whose types are 3, 1, and 2, respectively, and a user requests
a message with type -2, the kernel returns the message of type 1.
The syntax of msgctl:
msgctl(id, cmd, mstatbuf);
where cmd specifies the type of command, and mstatbuf is the address of a user data structure
that will contain control parameters or the results of a query.
Shared Memory
Sharing part of virtual memory, and reading from and writing to it, is another way for
processes to communicate. The system calls are:
 shmget creates a new region of shared memory or returns an existing one.
 shmat logically attaches a region to the virtual address space of a process.
 shmdt logically detaches a region.
 shmctl manipulates the parameters associated with the region.
Syntax of shmget:
shmid = shmget(key, size, flag);
where size is the number of bytes in the region. If the region is to be created, allocreg is used. It
sets a flag in the shared memory table to indicate that the memory is not allocated to the
region. The memory is allocated only when the region gets attached. A flag is set in the region
table which indicates that the region should not be freed when the last process referencing
it exits. The data structures are shown below:
Syntax for shmat:
virtaddr = shmat(id, addr, flags);
where addr is the virtual address where the user wants to attach the shared memory, and
the flags specify whether the region is read-only and whether the kernel should round off the
user-specified address. The return value virtaddr is the virtual address where the kernel attached
the region, not necessarily the value requested by the process.
The algorithm is given below:
/* Algorithm: shmat
 * Input: shared memory descriptor
 * virtual address to attach memory
 * flags
 * Output: virtual address where memory was attached
 */
{
check validity of descriptor, permissions;
if (user specified virtual address)
{
round off virtual address, as specified by flags;
check legality of virtual address, size of region;
}
else // user wants kernel to find good address
kernel picks virtual address: error if none available;
attach region to process address space (algorithm: attachreg);
if (region being attached for first time)
allocate page tables, memory for region (algorithm: growreg);
return (virtual address where attached);
}
If the address where the region is to be attached is given as 0, the kernel chooses a convenient
virtual address. If the calling process is the first process to attach that region, it means that page
tables and memory are not allocated for that region; therefore, the kernel allocates both
using growreg.
The syntax for shmdt:
shmdt(addr);
where addr is the virtual address returned by a prior shmat call. The kernel searches for the
process region attached at the indicated virtual address and detaches it using detachreg.
Because the region tables have no back pointers to the shared memory table, the kernel
searches the shared memory table for the entry that points to the region and adjusts the field
for the time the region was last detached.
Syntax of shmctl:
shmctl(id, cmd, shmstatbuf);
which is similar to msgctl.
Semaphores
A semaphore in UNIX System V consists of the following elements:
 The value of the semaphore.
 The process ID of the last process to manipulate the semaphore.
 The number of processes waiting for the semaphore value to increase.
 The number of processes waiting for the semaphore value to equal 0.
The system calls are:
 semget to create and gain access to a set of semaphores.
 semctl to do various control operations on the set.
 semop to manipulate the values of semaphores.
semget creates an array of semaphores:
id = semget(key, count, flag);
The kernel allocates an entry that points to an array of semaphore structures with count elements.
It is shown in the figure:
The entry also specifies the number of semaphores in the array, the time of the last semop call,
and the time of the last semctl call.
Processes manipulate semaphores with the semop system call:
oldval = semop(id, oplist, count);
where oplist is a pointer to an array of semaphore operations, and count is the size of the array.
The return value, oldval, is the value of the last semaphore operated on in the set before the
operation was done. The format of each element of oplist is:
 The semaphore number identifying the semaphore array entry being operated on
 The operation
 Flags
The algorithm is given below:
/* Algorithm: semop
* Input: semaphore descriptor
* array of semaphore operations
* number of elements in array
* Output: start value of last semaphore operated on
*/
{
check legality of semaphore descriptor;
start: read array of semaphore operations from user to kernel space;
check permissions for all semaphore operations;
for (each semaphore operation in array)
{
if (semaphore operation is positive)
{
add "operation" to semaphore value;
if (UNDO flag set on semaphore operation)
update process undo structure;
wakeup all processes sleeping (event: semaphore value increases);
}
else if (semaphore operation is negative)
{
if ("operation" + semaphore value >= 0)
{
add "operation" to semaphore value;
if (UNDO flag set)
update process undo structure;
if (semaphore value is 0)
wakeup all processes sleeping (event: semaphore
value becomes 0);
continue;
}
reverse all semaphore operations already done in
this system call (previous iterations);
if (flags specify not to sleep)
return with error;
sleep (event: semaphore value increases);
goto start;
}
else // semaphore operation is zero
{
if (semaphore value is not 0)
{
reverse all semaphore operations done this system call;
if (flags specify not to sleep)
return with error;
sleep (event: semaphore value == 0);
goto start;
}
}
}
// semaphore operations all succeeded
update time stamps, process ID's;
return value of last semaphore operated on before call succeeded;
}
If the kernel must sleep, it restores all the operations previously done, sleeps until the
event it is waiting for happens, and then restarts the system call. Because the kernel stores the
operations in a global array, it reads the array from user space again if it must restart the system
call. That is how the operations are done atomically -- either all at once or not at all.
Whenever a process sleeps in the middle of a semaphore operation, it sleeps at an interruptible
priority. If a process exits without resetting the semaphore value, a dangerous situation could
occur. To avoid this, a process can set the SEM_UNDO flag in the semop call. If this flag is set,
the kernel reverses the effect of every semaphore operation the process had done. The kernel
maintains a table with one entry for every process in the system. Each entry points to a set
of undo structures, one for each semaphore used by the process. Each undo structure is an array
of triples consisting of a semaphore ID, a semaphore number in the set identified by ID, and an
adjustment value. The kernel allocates undo structures dynamically when a process executes its
first semop system call with the SEM_UNDO flag set.
Syntax of semctl:
semctl(id, number, cmd, arg);
where arg is declared as a union:
union semunion {
int val;
struct semid_ds *semstat;
unsigned short *array;
} arg;
where the kernel interprets arg based on the value of cmd, similar to the way it interprets
the ioctl command.
Sockets
To provide common methods for interprocess communication and to allow use of sophisticated
network protocols, the BSD system provides a mechanism known as sockets. The kernel structure
consists of three parts: the socket layer, the protocol layer, and the device layer, as shown in the
figure:
The socket layer provides the interface between the system calls and the lower layers, the
protocol layer contains the protocol modules used for communication, and the device layer
contains the device drivers that control the network devices. Sockets that share common
communications properties, such as naming conventions and protocol address formats, are
grouped into domains. For example, the "UNIX system domain" or "Internet domain". Each
socket has a type -- a virtual circuit or datagram. A virtual circuit allows sequenced, reliable
delivery of data. Datagrams do not guarantee sequenced, reliable, or unduplicated delivery, but
they are less expensive than virtual circuits, because they do not require expensive setup
operations.
The socket system call establishes the end point of a communications link:
sd = socket(format, type, protocol);
where format specifies the communications domain, type indicates the type of communication
over the socket, and protocol indicates a particular protocol to control the communication.
The close system call closes sockets.
The bind system call associates a name with the socket descriptor:
bind(sd, address, length);
where address points to a structure that specifies an identifier specific to the communications
domain and protocol specified in the socket system call. length is the length of
the address structure.
The connect system call requests that the kernel make a connection to an existing socket:
connect(sd, address, length);
where address is the address of the target socket. Both sockets must use the same
communications domain and protocol.
If the type of the socket is a datagram, the connect call informs the kernel of the address to be
used on subsequent send calls over the socket; no connections are made at the time of the call.
When a server process arranges to accept connections over a virtual circuit, the kernel must
queue incoming requests until it can service them. The listen system call specifies the maximum
queue length:
listen(sd, qlength);
The accept call receives incoming requests for a connection to a server process:
nsd = accept(sd, address, addrlen);
where address points to a user data array that the kernel fills with the return address of the
connecting client, and addrlen indicates the size of the user array. When accept returns, the
kernel overwrites the contents of addrlen with a number that indicates the amount of space
taken up by the address. accept returns a new socket descriptor nsd different from the socket
descriptor sd.
The send and recv system calls transmit data over a connected socket:
count = send(sd, msg, length, flags);
count = recv(sd, buf, length, flags);
The datagram versions of these system calls, sendto and recvfrom have additional parameters for
addresses. Processes can also use read and write system calls on stream (virtual circuit) sockets
after the connection is set up.
The shutdown system call closes a socket connection:
shutdown(sd, mode);
where mode indicates whether the sending side, the receiving side, or both sides no longer allow
data transmission. After this call, the socket descriptors are still intact. The close system call frees
the socket descriptor.
The getsockname system call gets the name of a socket bound by a previous bind call:
getsockname(sd, name, length);
The getsockopt and setsockopt calls retrieve and set various options associated with the socket,
according to the communications domain and protocol of the socket.