MPI is a language-independent communications protocol used to program parallel computers. It allows processes to communicate and synchronize through point-to-point and collective communication primitives. The document discusses how to install MPI on Linux clusters, describes common MPI functions for communication (e.g. MPI_Send, MPI_Recv) and collective operations (e.g. MPI_Bcast, MPI_Reduce). It also provides an example MPI program to demonstrate basic point-to-point communication between two processes.
There are several mechanisms for inter-process communication (IPC) in UNIX systems, including message queues, shared memory, and semaphores. Message queues allow processes to exchange data by placing messages into a queue that can be accessed by other processes. Shared memory allows processes to communicate by declaring a section of memory that can be accessed simultaneously. Semaphores are used to synchronize processes so they do not access critical sections at the same time.
This document discusses various inter-process communication (IPC) types including shared memory, mapped memory, pipes, FIFOs, message queues, sockets, and signals. Shared memory allows processes to directly read and write to the same region of memory, requiring synchronization between processes. Mapped memory permits processes to communicate by mapping the same file into memory. Pipes and FIFOs allow for sequential data transfer between related and unrelated processes. Message queues provide a way for processes to exchange messages via a common queue. Signals are used to asynchronously notify processes of events.
This document provides an overview of MPI (Message Passing Interface), which is a standard for parallel programming using message passing. The key points covered include:
- MPI allows programs to run across multiple computers in a distributed memory environment. It has functions for point-to-point and collective communication.
- Common MPI functions introduced are MPI_Send, MPI_Recv for point-to-point communication, and MPI_Bcast, MPI_Gather for collective operations.
- More advanced topics like derived data types and examples of Poisson equation and FFT solvers are also briefly discussed.
System calls allow processes to request services from the operating system kernel. There are several categories of system calls including process control, file management, process information maintenance, and inter-process communication.
Process control system calls like fork(), exit(), and exec() allow processes to be created, terminated, and new programs to be run. File management system calls like open(), read(), write(), and close() allow processes to open, read, write to, and close files. Process information maintenance system calls like getpid(), alarm(), and sleep() allow processes to access information about themselves or other processes. Communication system calls like pipe() and shmget() allow processes to communicate with each other.
Program Assignment : Process Management
Objective: This program assignment is given in the Operating Systems course to let students figure out how a single process (the parent) creates a child process, and how the two work in a Unix/Linux (/Mac OS X/Windows) environment. Additionally, students should incorporate code demonstrating inter-process communication into this assignment. The parent and child processes interact with each other through a shared memory-based communication scheme or a message passing scheme.
Environment: Unix/Linux environment (VM Linux or Triton Server, or Mac OS X), Windows platform
Language: C or C++, Java
Requirements:
i. You have a wide range of choices for this assignment. First, design your program to illustrate the basic concepts of process management in the Unix kernel. This main idea will then be extended to show your understanding of inter-process communication, file processing, etc.
ii. Refer to the following system calls:
- fork(), getpid(), family of exec(), wait(), sleep() system calls for process management
- shmget(), shmat(), shmdt(), shmctl() for shared memory support or
- msgget(), msgsnd(), msgrcv(), msgctl(), etc. for message passing support
iii. The program should demonstrate that the two processes, parent and child, execute as they are supposed to.
iv. The output should contain the screen capture of the execution procedure of the program.
v. Interaction between parent and child processes can be provided through inter-process communication schemes, such as shared-memory or message passing schemes.
vi. Results should be organized into a document that explains the overview of your program, the code, execution results, and a conclusion including justification of your program, lessons you've learned, comments, etc.
Note:
i. In addition, try to understand how local and global variables behave across the processes.
ii. Use the read() and write() functions to see how they work in the different processes.
iii. For extra credit, you can also incorporate advanced features, like socket or thread functions, into your code.
Examples:
1. Process Creation and IPC with Shared Memory Scheme
=============================================================
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main(){
pid_t pid;
int segment_id; //identifier of the shared segment
char *shared_memory; //pointer to the attached memory
const int size = 4096;
//allocate a segment readable and writable by the owner
segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
shared_memory = (char *) shmat(segment_id, NULL, 0);
pid = fork();
if(pid < 0) { //error
fprintf(stderr, "Fork failed\n");
return 1;
}
else if(pid == 0){ //child process
char *child_shared_memory;
child_shared_memory = (char *) shmat(segment_id, NULL, 0); //attach mem
sprintf(child_shared_memory, "Hello parent process!"); //write to the shared mem
shmdt(child_shared_memory); //detach
}
else { //parent process
wait(NULL); //wait for the child to finish writing
printf("Child wrote: %s\n", shared_memory);
shmdt(shared_memory); //detach
shmctl(segment_id, IPC_RMID, NULL); //remove the segment
}
return 0;
}
The document provides an overview of the C programming language. It states that C was developed in 1972 by Dennis Ritchie at Bell Labs and was used to develop the UNIX operating system. The document then covers various features of C like it being a mid-level programming language, having structured programming, pointers, loops, functions, arrays, and more. It provides examples to explain concepts like input/output functions, data types, operators, control structures, and pointers.
The document discusses different methods of inter-process communication (IPC) in Unix systems. It describes process tracing which allows a debugger process to control the execution of a traced process using the ptrace system call. It also describes the three main IPC mechanisms in Unix - messages, shared memory, and semaphores. Messages allow processes to exchange data streams using message queues. Shared memory allows processes to share parts of virtual memory. Semaphores allow processes to synchronize using integer values.
This document provides an introduction and overview of MPI (Message Passing Interface). It discusses:
- MPI is a standard for message passing parallel programming that allows processes to communicate in distributed memory systems.
- MPI programs use function calls to perform all operations. Basic definitions are included in mpi.h header file.
- The basic model in MPI includes communicators, groups, and ranks to identify processes. MPI_COMM_WORLD identifies all processes.
- Sample MPI programs are provided to demonstrate point-to-point communication, collective communication, and matrix multiplication using multiple processes.
- Classification of common MPI functions like initialization, communication, and information queries are discussed.
The document provides an introduction to Message Passing Interface (MPI), which is a standard for message passing parallel programming. It discusses key MPI concepts like communicators, data types, point-to-point and collective communication routines. It also presents examples of common parallel programming patterns like broadcast, scatter-gather, and parallel sorting and matrix multiplication. Programming hints are provided, along with references for further reading.
Message queues allow messages to be passed from one process to another. There can be multiple writers to the queue as well as multiple readers.
For Script:
https://docs.google.com/document/d/1JIdvzZdoV7jlr3cOTx9GYhDhGpt2sv9HcYWaf_dc2NA/edit?usp=sharing
httplinux.die.netman3execfork() creates a new process by.docxadampcarr67227
http://linux.die.net/man/3/exec
fork() creates a new process by duplicating the calling process. The new process, referred to as the child, is an exact duplicate of the calling process, referred to as the parent.
#include <unistd.h>
pid_t fork(void);
The exec() family of functions replaces the current process image with a new process image. The functions described in this manual page are front-ends for execve(2). (See the manual page for execve(2) for further details about the replacement of the current process image.)
The exec() family of functions include execl, execlp, execle, execv, execvp, and execvpe to execute a file.
The ANSI prototype for execl() is:
int execl(const char *path, const char *arg0,..., const char *argn, 0)
http://www.cems.uwe.ac.uk/~irjohnso/coursenotes/lrc/system/pc/pc4.htm #include <stdio.h> #include <unistd.h> int main() { execl("/bin/ls", "ls", "-l", (char *)0); printf("Can only get here on error\n"); }
The first parameter to execl() in this example is the full pathname to the ls command. This is the file whose contents will be run, provided the process has execute permission on the file. The rest of the execl() parameters provide the strings to which the argv array elements in the new program will point. In this example, it means that the ls program will see the string ls pointed to by its argv[0], and the string -l pointed to by its argv[1]. In addition to making all these parameters available to the new program, the exec() calls also pass a value for the variable: extern char **environ;
This variable has the same format as the argv variable except that the items passed via environ are the values in the environment of the process (like any exported shell variables), rather than the command line parameters. In the case of execl(), the value of the environ variable in the new program will be a copy of the value of this variable in the calling process.
The execl() version of exec() is fine in circumstances where you can explicitly list all of the parameters, as in the previous example. Now suppose you want to write a program that doesn't just run ls, but will run any program you wish, and pass it any number of appropriate command line parameters. Obviously, execl() won't do the job.
The example program below, which implements this requirement, shows, however, that the system call execv() will perform as required: #include <stdio.h> #include <stdlib.h> #include <unistd.h> int main(int argc, char **argv) { if (argc == 1) { printf("Usage: run <command> [<parameters>]\n"); exit(1); } execv(argv[1], &argv[1]); printf("Sorry... couldn't run that!\n"); }
The prototype for execv() shows that it only takes two parameters: the first is the full pathname to the command to execute, and the second is the argv value you want to pass into the new program. In the previous example this value was derived from the argv value passed into the run command, so that the run command can take the command line parameter values you pass it and just pass them on.
This document discusses MPI (Message Passing Interface) and OpenMP for parallel programming. MPI is a standard for message passing parallel programs that requires explicit communication between processes. It provides functions for point-to-point and collective communication. OpenMP is a specification for shared memory parallel programming that uses compiler directives to parallelize loops and sections of code. It provides constructs for work sharing, synchronization, and managing shared memory between threads. The document compares the two approaches and provides examples of simple MPI and OpenMP programs.
The document discusses various C/C++ programs that demonstrate APIs for files, directories, and devices in Unix-like systems. The programs show how to use APIs such as open, read, write, close, fcntl, lseek, link, unlink, stat, chmod, chown, utime, access, opendir, readdir, closedir, and ioctl. They illustrate functions for file handling, file metadata operations, directory operations, and device I/O. The programs output details like file contents, attributes and permissions to confirm the correct behavior of the various file system and device APIs.
Post Exploitation Bliss: Loading Meterpreter on a Factory iPhone, Black Hat U... (Vincenzo Iozzo)
Charlie Miller and Vincenzo Iozzo presented techniques for post-exploitation on the iPhone 2 including:
1. Running arbitrary shellcode by overwriting memory protections and calling vm_protect to mark pages as read/write/executable.
2. Loading an unsigned dynamic library called Meterpreter by mapping it over an existing signed library, patching dyld to ignore code signing, and forcing the unloading of linked libraries.
3. Adding new functionality to Meterpreter, such as a module to vibrate and play a sound on the iPhone, demonstrating how payloads can be extended once loaded into memory.
PascalScript is a scripting engine that allows scripts written in Object Pascal to be executed at runtime in Delphi and Free Pascal applications. It provides advantages like allowing customization without recompiling and updating applications by distributing new script files. The engine works by compiling scripts to bytecode using a compiler component, and executing the bytecode using an executer component. It supports common data types, functions, classes, and calling external libraries.
The document outlines the schedule and objectives for an operating systems lab course over 10 weeks. The first few weeks focus on writing programs using Unix system calls like fork, exec, wait. Later weeks involve implementing I/O system calls, simulating commands like ls and grep, and scheduling algorithms like FCFS, SJF, priority and round robin. Students are asked to display Gantt charts, compute waiting times and turnaround times for each algorithm. The final weeks cover inter-process communication, the producer-consumer problem, and memory management techniques.
Help Needed!
UNIX Shell and History Feature
This project consists of designing a C program to serve as a shell interface that accepts user commands and then executes each command in a separate process. This project can be completed on any Linux, UNIX, or Mac OS X system.
A shell interface gives the user a prompt, after which the next command is entered. The example below illustrates the prompt osh> and the user's next command: cat prog.c. (This command displays the file prog.c on the terminal using the UNIX cat command.)
osh> cat prog.c
One technique for implementing a shell interface is to have the parent process first read what the user enters on the command line (in this case, cat prog.c), and then create a separate child process that performs the command. Unless otherwise specified, the parent process waits for the child to exit before continuing. This is similar in functionality to the new process creation illustrated in Figure 3.10. However, UNIX shells typically also allow the child process to run in the background, or concurrently. To accomplish this, we add an ampersand (&) at the end of the command. Thus, if we rewrite the above command as
osh> cat prog.c &
the parent and child processes will run concurrently.
The separate child process is created using the fork() system call, and the user's command is executed using one of the system calls in the exec() family.
A C program that provides the general operations of a command-line shell is supplied in Figure 3.36. The main() function presents the prompt osh> and outlines the steps to be taken after input from the user has been read. The main() function continually loops as long as should_run equals 1; when the user enters exit at the prompt, your program will set should_run to 0 and terminate.
This project is organized into two parts: (1) creating the child process and executing the command in the child, and (2) modifying the shell to allow a history feature.
#include <stdio.h>
#include <unistd.h>
#define MAXLINE 80 /* The maximum length command */
int main(void)
{
char *args[MAXLINE/2 + 1]; /* command line arguments */
int should_run = 1; /* flag to determine when to exit program */
while (should_run) {
printf("osh>");
fflush(stdout);
/**
* After reading user input, the steps are:
* (1) fork a child process using fork()
* (2) the child process will invoke execvp()
* (3) if command included &, parent will invoke wait()
*/
}
return 0;
}
Part I — Creating a Child Process
The first task is to modify the main() function in the above program so that a child process is forked and executes the command specified by the user. This will require parsing what the user has entered into separate tokens and storing the tokens in an array of character strings (args in the above program). For example, if the user enters the command ps -ael at the osh> prompt, the values stored in the args array are:
args[0] = "ps"
args[1] = "-ael"
args[2] = NULL
This args array will be passed to the execvp() function, which has the following prototype:
int execvp(const char *file, char *const argv[]);
POSIX is a standard for maintaining compatibility between operating systems. It defines APIs for threads, mutexes, semaphores, condition variables, and message queues to enable processes and threads to synchronize access to shared resources and communicate. The document provides details on the POSIX interfaces for thread management, mutexes, condition variables, semaphores, and message queues. It includes code examples demonstrating their use for inter-process synchronization and communication using shared memory.
Message queues and shared memory are both used to achieve inter-process communication (IPC). Each has its own advantages as well as disadvantages.
This document discusses various methods of interprocess communication (IPC) supported on UNIX systems, including pipes, FIFOs, message queues, semaphores, and shared memory. It provides details on how each method works, such as how processes can create and access pipes, FIFOs, and shared memory segments. It also describes the key system calls used to implement IPC, such as pipe, mkfifo, msgget, semget, and shmget.
computer notes - Inter process communication
Processes execute to accomplish specified computations. An interesting and innovative way to use a computer system is to spread a given computation over several processes. The need for such communicating processes arises in parallel and distributed processing contexts.
Threads Advance in System Administration with Linux (Soumen Santra)
Process Descriptor Handling
Kernel Stack
Pid_hash Table and Chained Lists
PID Hash Table Handling Functions and Macros
Wait Queues
Process Resource Limits
Task State Segment
System Calls
Pthread Operations
POSIX threads on GNU/Linux
Programs on Thread in C
The document discusses trap handling in Linux, focusing on system calls. It begins with background on interrupts, traps, and system calls. It then describes the function call flow from start_kernel() and initialization of the Interrupt Descriptor Table (IDT). Next, it covers system call entry and initialization of the system call table. Finally, it discusses the system call procedure from a user application using glibc and the Linux kernel. Key topics include IDT structure and gates, MSR register usage for system calls, fast vs slow system call paths, and how system calls are invoked and handled in the kernel.
The document discusses different methods of inter-process communication (IPC) in Unix systems. It describes process tracing which allows a debugger process to control the execution of a traced process using the ptrace system call. It also describes the three main IPC mechanisms in Unix - messages, shared memory, and semaphores. Messages allow processes to exchange data streams using message queues. Shared memory allows processes to share parts of virtual memory. Semaphores allow processes to synchronize using integer values.
This document provides an introduction and overview of MPI (Message Passing Interface). It discusses:
- MPI is a standard for message passing parallel programming that allows processes to communicate in distributed memory systems.
- MPI programs use function calls to perform all operations. Basic definitions are included in mpi.h header file.
- The basic model in MPI includes communicators, groups, and ranks to identify processes. MPI_COMM_WORLD identifies all processes.
- Sample MPI programs are provided to demonstrate point-to-point communication, collective communication, and matrix multiplication using multiple processes.
- Classification of common MPI functions like initialization, communication, and information queries are discussed.
The document provides an introduction to Message Passing Interface (MPI), which is a standard for message passing parallel programming. It discusses key MPI concepts like communicators, data types, point-to-point and collective communication routines. It also presents examples of common parallel programming patterns like broadcast, scatter-gather, and parallel sorting and matrix multiplication. Programming hints are provided, along with references for further reading.
Message queues allow messages to be passed from one process to another. There can be multiple writers to the queue as well as multiple readers.
For Script:
https://docs.google.com/document/d/1JIdvzZdoV7jlr3cOTx9GYhDhGpt2sv9HcYWaf_dc2NA/edit?usp=sharing
httplinux.die.netman3execfork() creates a new process by.docxadampcarr67227
http://linux.die.net/man/3/exec
fork() creates a new process by duplicating the calling process. The new process, referred to as the child, is an exact duplicate of the calling process, referred to as the parent#include <unistd.h>pid_t fork(void);
The exec() family of functions replaces the current process image with a new process image. The functions described in this manual page are front-ends for execve(2). (See the manual page for execve(2) for further details about the replacement of the current process image.)
The exec() family of functions include execl, execlp, execle, execv, execvp, and execvpe to execute a file.
The ANSI prototype for execl() is:
int execl(const char *path, const char *arg0,..., const char *argn, 0)
http://www.cems.uwe.ac.uk/~irjohnso/coursenotes/lrc/system/pc/pc4.htm #inciude <stdio.h> #inciude <unistd.h> main() { execl("/bin/ls", "ls", "-l", 0); printf("Can only get here on error\n"); }
The first parameter to execl() in this example is the full pathname to the ls command. This is the file whose contents will be run, provided the process has execute permission on the file. The rest of the execl() parameters provide the strings to which the argv array elements in the new program will point. In this example, it means that the ls program will see the string ls pointed to by its argv[0], and the string -l pointed to by itsargv[1]. In addition to making all these parameters available to the new program, the exec() calls also pass a value for the variable: extern char **environ;
This variable has the same format as the argv variable except that the items passed via environ are the values in the environment of the process (like any exported shell variables), rather than the command line parameters. In the case of execl(), the value of the environ variable in the new program will be a copy of the value of this variable in the calling process.
The execl() version of exec() is fine in the circumstances where you can ex-plicitly list all of the parameters, as in the previous example. Now suppose you want to write a program that doesn't just run ls, but will run any program you wish, and pass it any number of appropriate command line parameters. Obviously, execl() won't do the job.
The example program below, which implements this requirement, shows, however, that the system call execv() will perform as required: #inciude <stdio.h> main(int argc, char **argv) { if (argc==1) { printf("Usage: run <command> [<paraneters>]\n"); exit(1) } execv(argv[l], &argv[1)); printf("Sorry... couldn't run that!\n"); }
The prototype for execv() shows that it only takes two parameters, the first is the full pathname to the command to execute and the second is the argv value you want to pass into the new program. In the previous example this value was derived from the argv value passed into the run command, so that the run command can take the command line parameter values you pass it and just pass them on. int execl(.
Linux Systems Programming: Semaphores, Shared Memory, and Message Queues
1. Linux System
Programming
Semaphores, Shared Memory,
and Message Queues
Engr. Rashid Farid Chishti
chishti@iiu.edu.pk
https://youtube.com/rfchishti
https://sites.google.com/site/chishti
2. To communicate between different processes (inter-process communication or
IPC) we used Signals and Pipes.
Other methods for IPC are (a) Message Queues (b) Shared memory and (c)
Semaphores.
Message Queues: Information to be communicated is placed in a predefined
message structure. The process generating the message specifies its type and
places the message in a system-maintained message queue.
Processes accessing the message queue can use the message type to
selectively read messages of specific types in a first in first out (FIFO) manner.
Message queues provide the user with a means of asynchronously
multiplexing data from multiple processes.
Introduction to IPC
3. Shared Memory: Information is communicated by accessing shared process
data space. This is the fastest method of inter-process communication.
Shared memory allows participating processes to randomly access a shared
memory segment.
Semaphores are often used to synchronize the access to the shared memory
segments.
Semaphores: Semaphores are system-implemented data structures used to
communicate small amounts of data between processes. Most often,
semaphores are used for process synchronization.
All three of these facilities can be used by related and unrelated processes, but
these processes must be on the same system (machine).
Introduction to IPC
4. Summary of the System V IPC Calls

Functionality                                     Message Queue    Semaphore   Shared Memory
Allocate an IPC resource; gain access to
an existing IPC resource.                         msgget           semget      shmget
Control an IPC resource: obtain / modify
status information, remove the resource.          msgctl           semctl      shmctl
Send / receive messages, perform semaphore
operations, attach / free a shared memory
segment.                                          msgsnd, msgrcv   semop       shmat, shmdt
5. Programming interface for message queue
Initializing a message queue
The function msgget() creates and accesses a Message Queue
#include <sys/msg.h>
#include <sys/ipc.h>
int msgget(key_t key, int msgflg);
Function msgget() returns a nonnegative integer, the message queue identifier.
Parameters:
key: an arbitrary integer, or a key produced by ftok();
msgflg: the permission mode and creation control flags.
Message Queue
6. #include <sys/msg.h> #include <sys/ipc.h>
#include <sys/types.h> #include <unistd.h>
#include <errno.h> #include <stdio.h>
#include <stdlib.h>
int main() {
key_t key = 1234; // key to be passed to msgget()
int msgflg = 0666 | IPC_CREAT; // msgflg to msgget()
int msgid; // return value from msgget()
// read and write permission for all, and create the message
// queue if it does not exist
if ((msgid = msgget(key, msgflg))== -1) {
perror("msgget: msgget failed"); exit(-1);
}
else {
printf("Created a MSG Queue %d with key %x\n", msgid, key);
exit(0);
}
}
Creating a Message Queue msgq_create.c
8. Function msgctl() alters the permissions and other characteristics of a message queue.
int msgctl(int msgid, int cmd, struct msqid_ds *buf);
It returns 0 on success, -1 on failure.
Parameters:
msgid: message queue identifier, the return value from msgget().
buf: a pointer to a struct msqid_ds (defined in <sys/msg.h>); its ownership and permission
fields live in an embedded struct ipc_perm:
struct msqid_ds {
    struct ipc_perm msg_perm;   /* msg_perm.uid, msg_perm.gid,  */
                                /* msg_perm.mode                */
    ...
};
Message Queue Settings
9. cmd: the action to take, can be one of the following settings:
IPC_STAT: Place information about the status of the queue in the data
structure pointed to by buf. The process must have read permission for this
call to succeed.
IPC_SET: Set the owner's user ID (msg_perm.uid) and group ID
(msg_perm.gid), the permissions (msg_perm.mode), and the maximum size
(msg_qbytes) of the message queue. A process must have the effective user ID
of the owner, creator, or superuser for this call to succeed.
IPC_RMID: Remove the message queue.
Message Queue Settings
11. // msgq_ctl.c Displaying message queue status information
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
int main () {
int mgid; key_t key = 1234;
struct msqid_ds buf;
if ((mgid = msgget (key, IPC_CREAT | 0660)) == -1) {
perror ("Queue create"); return 1;
}
msgctl (mgid, IPC_STAT, &buf);
printf("Message Queue *Permission* Structure Information\n");
printf("Owner's user ID \t %d\n", (int) buf.msg_perm.uid);
printf("Owner's group ID \t %d\n", (int) buf.msg_perm.gid);
printf("Creator's user ID \t %d\n", (int) buf.msg_perm.cuid);
printf("Creator's group ID\t %d\n", (int) buf.msg_perm.cgid);
printf("Access mode in Octal \t %o\n", buf.msg_perm.mode);
Showing Status of a Message Queue msgq_ctl.c
12. printf("\nAdditional Selected Message Queue Structure Information\n");
printf("Current # of bytes on queue \t %ld\n", (long) buf.__msg_cbytes);
printf("Current # of messages on queue\t %ld\n", (long) buf.msg_qnum);
printf("Maximum # of bytes on queue \t %ld\n", (long) buf.msg_qbytes);
msgctl (mgid, IPC_RMID, (struct msqid_ds *) 0);
return 0;
}
Showing Status of a Message Queue msgq_ctl.c
13. The msgsnd() and msgrcv() functions send and receive messages, respectively:
int msgsnd(int msgid, const void *msgp, size_t msgsz, int msgflg);
ssize_t msgrcv(int msgid, void *msgp, size_t msgsz, long msgpriori,
               int msgflg);
Parameters
msgp: a pointer to a structure that contains the type of the message and its text. The
structure below is an example of what this user-defined buffer might look like:
struct mymsg {
    long msgtype;       /* message type */
    char mtext[MSGSZ];  /* message text of length MSGSZ */
};
Sending and Receiving Message
14. The structure member msgtype is used in message reception; it must be initialized to a
positive integer value by the sending process.
msgsz: the length of the message text in bytes (the leading long is not counted).
msgpriori: selects which message to receive (msgtyp in the standard prototype). 0 means
retrieve messages in the order in which they were sent; a positive value retrieves the first
message of exactly that type; a negative value retrieves the first message whose type is less
than or equal to its absolute value.
msgflg: for msgsnd(), controls what happens if either the queue is full or the system limit
on queued messages is reached. 0 means the process suspends until space becomes
available.
msgflg: for msgrcv(), controls what happens if no message of the requested type is waiting
to be received. 0 means the process suspends until a matching message arrives.
Sending and Receiving Message
15. // Message Sending Program
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#define MAX_TEXT 512
struct my_msg_st   /* define a message structure */
{
long int my_msg_type;
char some_text[MAX_TEXT];
};
Sending a Message msgq_send.c
16. int main () {
int running = 1;
/* declare an instance of structure of my_msg_st */
struct my_msg_st some_data;
int msgid; /* return value from msgget() */
char buffer[BUFSIZ]; /* declare a buffer for text msg */
// read and write permission for all, and create a
// message queue if it does not exist
msgid = msgget ((key_t) 1234, 0666 | IPC_CREAT);
if (msgid == -1) {
fprintf (stderr, "msgget failed with error: %d\n", errno);
exit (EXIT_FAILURE);
}
Sending a Message msgq_send.c
17. while (running) { /*read user input from the keyboard */
printf ("Enter some text: ");
fgets (buffer, BUFSIZ, stdin);
some_data.my_msg_type = 1; /*set the msgtype */
/*copy from buffer to message*/
strcpy (some_data.some_text, buffer);
/*send the message */
if (msgsnd (msgid, (void *) &some_data, MAX_TEXT, 0) == -1)
{
fprintf (stderr, "msgsnd failed\n");
exit (EXIT_FAILURE);
}
if (strncmp (buffer, "end", 3) == 0)
{
running = 0;
}
}
exit (EXIT_SUCCESS);
}
Sending a Message msgq_send.c
18. /* the receiver program. */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
struct my_msg_st {
long int my_msg_type;
char some_text[BUFSIZ];
};
int main() {
int running = 1;
int msgid;
struct my_msg_st some_data;
long int msg_to_receive = 0;
Receiving a Message msgq_receive.c
19. /* First, we set up the message queue. */
msgid = msgget((key_t)1234, 0666 | IPC_CREAT);
if (msgid == -1) {
fprintf(stderr, "msgget failed with error: %d\n", errno);
exit(EXIT_FAILURE);
}
/* Then the messages are retrieved from the queue,
until an end message is encountered. Lastly, the
message queue is deleted. */
while(running) {
if (msgrcv(msgid, (void *)&some_data, BUFSIZ,
msg_to_receive, 0) == -1) {
fprintf(stderr, "msgrcv failed with error: %d\n", errno);
exit(EXIT_FAILURE);
}
printf("You wrote: %s", some_data.some_text);
if (strncmp(some_data.some_text, "end", 3) == 0) {
running = 0;
}
}
Receiving a Message msgq_receive.c
21. if (msgctl(msgid, IPC_RMID, 0) == -1) {
fprintf(stderr, "msgctl(IPC_RMID) failed\n");
exit(EXIT_FAILURE);
}
exit(EXIT_SUCCESS);
}
Receiving a Message msgq_receive.c
22. Shared memory is an efficient way of transferring data between two running processes.
It lets multiple processes attach the same segment of physical memory into their virtual
address spaces.
If one process writes to a shared memory segment, the change immediately becomes visible
to any other process that has access to the same segment.
Shared memory provides no synchronization of access; semaphores are commonly used to
prevent inconsistencies and collisions.
Shared Memory
23. Programming interface of Shared Memory
The function shmget() is used to obtain access to a shared memory segment. It is
prototyped by:
int shmget(key_t key, size_t size, int shmflg);
Parameters:
key: an access value associated with the shared memory segment ID.
size: the size (in bytes) of the requested shared memory.
shmflg: specifies the initial access permissions and creation control flags.
A successful call returns the shared memory segment ID; otherwise it returns -1.
Shared Memory
24. #include <stdio.h>
#include <sys/shm.h>
int main ()
{
key_t key = 1234; // key to be passed to shmget()
size_t size = 4096; // size of shared memory
int shmflag = 0666 | IPC_CREAT; // read and write permission,
// create shared memory if not exist
int shmid; // return value from shmget()
// Create the shared memory segment using key 1234
shmid = shmget (key, size, shmflag);
if (shmid >= 0){
printf ("Created a shared memory segment %d using key %#x\n", shmid, (unsigned)key);
}
return 0;
}
Creating a Shared Memory shm_create.c
26. The function shmctl() is used to alter the permissions and other characteristics of a shared
memory segment. It is prototyped as follows:
int shmctl(int shmid, int cmd, struct shmid_ds *buf);
The process must have an effective user ID of owner, creator, or superuser to perform this
command.
Parameters: cmd argument is one of following control commands:
SHM_LOCK: Lock the specified shared memory segment.
SHM_UNLOCK: Unlock the shared memory segment.
IPC_STAT: Return the status information contained in the control structure and place it
in the buffer pointed to by buf.
IPC_SET: Set the effective user and group identification and access permissions.
IPC_RMID: Remove the shared memory segment.
Shared Memory Settings
The buf argument is a pointer to a struct shmid_ds, which is defined in <sys/shm.h>:
struct shmid_ds {
struct ipc_perm shm_perm; /* permissions */
size_t shm_segsz; /* size of segment in bytes */
pid_t shm_lpid; /* pid of last shmop() */
pid_t shm_cpid; /* pid of creator */
shmatt_t shm_nattch; /* number of current attaches */
time_t shm_atime; /* last-attach time */
time_t shm_dtime; /* last-detach time */
time_t shm_ctime; /* last-change time */
.
.
};
Shared Memory Settings
28. shmat() and shmdt() are used to attach and detach shared memory segments. They are
prototyped as follows:
void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(const void *shmaddr);
shmat() returns a pointer, shmaddr, to the head of the shared segment associated with a
valid shmid.
shmdt() detaches the shared memory segment located at the address indicated by
shmaddr.
Attaching and detaching a Shared Memory
29. #include <stdio.h>
#include <time.h> // for ctime()
#include <sys/shm.h>
#include <errno.h>
int main(){
key_t key = 1234; // key to be passed to shmget()
size_t size = 4096; // size of shared memory
int shmflag = 0666 | IPC_CREAT; // read and write permission,
// create shared memory if not exist
int shmid; // return value from shmget()
int ret; // return value from shmctl()
struct shmid_ds shmds; // structure to store information
// about shared memory
// Create the shared memory segment using key 1234
shmid = shmget( key, size, shmflag );
Showing Status of a Shared Memory shm_status.c
30. if ( shmid >= 0 ) {
ret = shmctl( shmid, IPC_STAT, &shmds );
if (ret == 0) {
printf( "Size of memory segment is %ld\n", (long)shmds.shm_segsz );
printf( "Number of attaches %ld\n", (long)shmds.shm_nattch );
printf( "Access permissions %o\n", shmds.shm_perm.mode );
printf( "Last change time %ld\n", (long)shmds.shm_ctime );
printf( "Last change time %s", ctime(&shmds.shm_ctime) );
}
else {
printf( "shmctl failed (%d)\n", errno );
}
}
else {
printf( "Shared memory segment not found.\n" );
}
return 0;
}
Showing Status of a Shared Memory shm_status.c
31. #include <stdio.h>
#include <sys/shm.h>
#include <errno.h>
#include <time.h>
int main(){
int shmid, ret;
struct shmid_ds shmds;
shmid = shmget( 1234, 0, 0 );
if ( shmid >= 0 ) {
ret = shmctl( shmid, IPC_STAT, &shmds );
if (ret == 0) {
printf("old permissions were %o\n", shmds.shm_perm.mode );
shmds.shm_perm.mode = 0444;
ret = shmctl( shmid, IPC_SET, &shmds );
ret = shmctl( shmid, IPC_STAT, &shmds );
printf("new permissions are %o\n", shmds.shm_perm.mode );
ret = shmctl( shmid, SHM_LOCK, 0 );
ret = shmctl( shmid, SHM_UNLOCK, 0 );
}
}
return 0;
}
Changing Permission of a Shared Memory shm_perm.c
36. Introduction to Semaphores
Semaphores are used to avoid race conditions; they are a control
mechanism.
A race condition may occur when two or more processes are in their critical
sections at the same time.
A critical section is a part of the code in which a process accesses a shared
resource.
So whenever two or more processes are in their critical sections, there is a
chance that they will race with each other.
A semaphore is a data structure which can be shared by a group of processes.
When several processes compete for the same resource (e.g. a file, a message
queue, or a shared memory), their operations may be synchronized with the
use of semaphores, so that they do not interfere with each other.
Semaphores
37. Semaphores ensure that only one process at a time is granted exclusive access to a
particular resource.
All other processes requiring the same resource will be suspended by the Linux
kernel.
Once a process releases the resource, the kernel will grant exclusive access to
the resource to one of the suspended processes.
Semaphores are often used in pairs, in order to lock (protect) a resource and to
subsequently unlock (release) it.
Semaphores
38. Controlling synchronization using an abstract data type called a semaphore was
proposed by Dijkstra in 1965.
A semaphore S is an integer variable that can be accessed only through two atomic
operations,
i.e. operations that cannot be interrupted and can be treated as an indivisible step.
Operation Semaphore Dutch Meaning
Wait P proberen to test
Signal V verhogen to increment
Semaphores
wait(S) {
while (S<=0);
S--;
}
signal (S) {
S++;
}
39. At its simplest, a semaphore is a variable S which takes one of two values:
0 when a resource is locked (protected) and should not be accessed by
other processes
1 when the resource is unlocked (released).
Such a semaphore, which can take only the two values 0 or 1, is called a binary
semaphore.
In the Linux implementation, however, the two operation values are
–1, which is our P operation, to wait for a semaphore to become available
+1, which is our V operation, to signal that a semaphore is now available.
Semaphores
40. As we know, a semaphore is a variable, so before any operation can be
performed on it, it must exist. The semget call creates a set of
semaphores along with its associated data structure (or gets an existing
semaphore set). It is prototyped by:
#include <sys/sem.h>
#include <sys/ipc.h>
#include <sys/types.h>
int semget(key_t key, int nsems, int semflg);
When the call succeeds, it returns the semaphore set ID (semid); otherwise,
if the call fails, it returns –1.
Creating a new semaphores
41. Parameters
key: an access value associated with the semaphore set ID.
nsems: the number of semaphores required (UNIX provides an array of
individual semaphores, to give control over multiple resources).
Setting it to 1 creates one semaphore in the set.
semflg: specifies the initial access permissions and creation control flags.
This will be zero if the semaphore set already exists;
if not, it should be something like IPC_CREAT | 0666.
Please note: the initial value of a semaphore is zero when it is first
created.
Creating a new semaphores
42. #include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h> // for exit()
int main(){
key_t key = 1234; // key to pass to semget()
int semflg = 0666 | IPC_CREAT; // semflg to pass to semget()
int nsems = 1; // nsems to pass to semget()
int semid; //return value from semget()
if((semid = semget(key, nsems, semflg)) ==-1){
perror("semget: semget failed");
exit(-1);
}
printf( "semget created a semaphore with id = %d\n", semid );
return 0;
}
Creating a Semaphore sem_create.c
43. Controlling Semaphores
Function semctl() changes permissions and other characteristics of a
semaphore set. Its prototype is as follows:
int semctl(int semid,int semnum,int cmd, union semun sem_union);
It returns different values depending on the cmd.
For setting the value of a single semaphore or removing the semaphore set, it
returns 0 on success, -1 for failure.
Parameters:
semctl must be called with a valid semaphore ID, semid.
The semnum value selects a semaphore within an array by its index.
To access the first semaphore, its value is 0.
Controlling Semaphores
44. All the control operations on the set of semaphores identified by the semaphore ID
semid are specified by the third argument, cmd.
The cmd parameter can take the following values:
GETVAL: Returns the semaphore’s value.
SETVAL: Set the semaphore value given in sem_union
GETPID: Get PID of the process that last called semop.
IPC_STAT: Return the effective user, group, and permission for a semaphore in
sem_union.
IPC_RMID: Remove the specified semaphore set with semid along with its associated
data structure.
If the semctl call fails, it returns –1; otherwise it returns an integer value depending on
the value of cmd.
Controlling Semaphores
45. #include <stdio.h>
#include <sys/sem.h>
#include <stdlib.h>
int main(){
int semid, cnt;
semid = semget( 1234, 1, 0 ); // Get the semaphore with the key 1234
if (semid >= 0) {
// Read the current semaphore count. Index of semaphore is
// 0. The command is GETVAL. The return value from this
// command is either –1 for error or the count of the
// semaphore.
cnt = semctl( semid, 0, GETVAL );
if (cnt != -1) {
printf("semcrd: current semaphore count %d.\n", cnt);
}
}
return 0;
}
Semaphore: Get Current Count sem_getval.c
46. // Setting the Current Semaphore Count
#include <stdio.h>
#include <sys/sem.h>
#include <stdlib.h>
int main(){
int semid, ret;
/* Get the semaphore with the key 1234 */
semid = semget( 1234, 1, 0 );
if (semid >= 0) {
// Set the semaphore No.0 to a value 6, changing the binary
// semaphore to a counting semaphore
ret = semctl( semid, 0, SETVAL, 6);
if (ret != -1) {
printf("current semaphore count updated\n");
}
}
return 0;
}
Semaphore: Set Count sem_setval.c
47. #include <stdio.h>
#include <sys/sem.h>
#include <stdlib.h>
int main() {
int semid, ret;
/* Get the semaphore with the key 1234 */
semid = semget( 1234, 1, 0 );
if (semid >= 0) {
/* Remove Semaphore having id semid */
ret = semctl( semid, 0, IPC_RMID);
if (ret != -1) {
printf("Semaphore %d removed.\n", semid );
}
}
return 0;
}
Removing a Semaphore sem_rm.c
48. The semop() API function provides the means to acquire and release a
semaphore or semaphore array.
The basic operations provided by semop are:
decrement a semaphore (to acquire it), and
increment a semaphore (to release it).
It is prototyped by:
int semop(int semid , struct sembuf *sops , unsigned int nsops);
Parameters:
semid: the semaphore ID returned by semget() call.
sops: is a pointer to an array of structures, each containing the following
information about a semaphore operation:
Semaphore Operations
49. The semaphore number of interest, sem_num; usually 0, meaning the first
semaphore in the array.
The operation to be performed, sem_op, is the value by which the
semaphore should be changed. (You can change a semaphore by amounts
other than 1.) In general, only two values are used:
–1, which is your P operation, to wait for a semaphore to become
available
+1, which is your V operation, to signal that a semaphore is now available.
A flags word, sem_flg, can be used to alter the behavior of the operation.
It is usually set to SEM_UNDO, which lets the operating system
automatically release the semaphore if the process terminates without
releasing it.
Semaphore Operations
50. The sembuf structure is shown below:
struct sembuf {
unsigned short sem_num;
short sem_op;
short sem_flg;
};
nsops: Number of semaphores in the array
Semaphore Operations
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h> // for exit()
int main() {
int semid;
struct sembuf sb;
semid = semget( 1234, 1, 0 );
if (semid >= 0) { /* build semaphore operations structure */
sb.sem_num = 0; /* semaphore number, 0 for first semaphore */
sb.sem_op = -1; /* operation -1 means acquire semaphore */
sb.sem_flg = 0; /* operation flags */
printf( "Attempting to acquire semaphore %d\n", semid );
/* if the value of the semaphore is 0, this will wait
until it becomes > 0 */
if ( semop( semid, &sb, 1 ) == -1 ) {
printf( "semacq: semop failed.\n" );
exit(-1);
}
printf( "semacq: Semaphore acquired %d\n", semid );
}
return 0;
}
Getting a Semaphore sem_acquire.c
56. Q 1. Write a client-server application in C using message queues in which the client
takes an integer from the user and sends this integer to the server through a message
queue for computation of its factorial.
The client program should print the message queue ID, so that you can use this ID
when you run the server.
Q 2. Write a client-server application in C using message queues in which the client
takes a Linux command from the user and sends this command to the server through a
message queue for execution.
The server takes that command from the message queue and executes it.
Assignment Topic: Message Queues
57. Q 3. Write client and server programs in C using the concept of shared memory. If, for
example, the client program writes an integer n into the shared memory, the server reads this
integer n from the shared memory and prints it n times. The server
program should terminate when it reads a negative integer from the shared memory.
Q 4. Write a client and server program in C using the concept of shared memory. The server
creates a shared memory segment to store its process ID. The client reads the process ID of the
server. Using the server's process ID, the client program can raise the signals SIGINT, SIGTSTP,
and SIGQUIT (by using the kill() system call), in such a way that when the user presses:
i or I, it raises the signal SIGINT
z or Z, it raises the signal SIGTSTP
q or Q, it raises SIGQUIT
An error message is shown if the user presses any other character.
The server program should provide handlers for catching the above signals. The server ignores
SIGINT and SIGTSTP when it receives them (use SIG_IGN to ignore a signal). However, the
server program quits on receiving SIGQUIT. The client program also quits after sending the
SIGQUIT signal.
Assignment Topic: Shared Memory
58. Q 5. Write a C program which:
1. first creates an integer-sized shared memory segment,
2. then locks this segment through a semaphore, so that other processes cannot access it,
3. writes a number into this shared segment,
4. then unlocks this segment through the semaphore so that other processes can access it.
Q 6. Write another C program which:
1. first locks this segment through the semaphore,
2. reads the integer from the shared memory,
3. unlocks the semaphore,
4. prints the integer on the screen.
Assignment