The document summarizes the implementation of a scalable broadcasting algorithm for file sharing in a mobile ad hoc network (MANET) simulation. It was built using the Omnet++ discrete event simulator and its INET framework. Key aspects include:
1) Implementing the scalable broadcasting algorithm on top of the AODV routing protocol to discover routes for file requests.
2) Using the INET routing table to track neighbors and routes, with stability determined by how recently routes were updated.
3) Defining a file matching function that determines which nodes have the requested file based on their node ID modulo a preset value.
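The modulo-based matching rule in point 3 can be sketched as follows. This is a hypothetical illustration, not the simulation's actual code: the function name, the preset modulus `FILE_MOD`, and the ID scheme are all assumptions.

```python
# Hypothetical sketch of the node-ID-modulo file matching described above.
FILE_MOD = 5  # the "preset value"; chosen here only for illustration

def has_file(node_id: int, file_id: int) -> bool:
    """A node 'owns' a file when its ID matches the file ID modulo FILE_MOD."""
    return node_id % FILE_MOD == file_id % FILE_MOD

# With FILE_MOD = 5, nodes 2, 7, 12 among the first 15 match file 7.
matching = [n for n in range(15) if has_file(n, 7)]
```

With this rule every file is replicated on roughly 1/FILE_MOD of the nodes, so a broadcast request needs to reach only a fraction of the network before a match is found.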
Inter-Process Communication (IPC) techniques on Mac OS X, by Hem Dutt
Inter-process communication (IPC) refers to techniques for exchanging data between processes and threads. Common IPC methods include message passing, synchronization, shared memory, and remote procedure calls. IPC is useful for information sharing, increasing computational speed, modularity, convenience, and privilege separation. Common IPC techniques on Mac OS X include shared memory, Mach ports, sockets, Apple Events, distributed notifications, pasteboards, and distributed objects. Each method has advantages and disadvantages for different use cases.
Vector processing involves executing the same operation on multiple data elements simultaneously using a single instruction. Early implementations like the CDC STAR-100 had limitations. The Cray-1 was the first commercially successful vector supercomputer, using vector registers so that calculations did not require a memory access for every operand. Seymour Cray led the development of vector processing machines that dominated the field for many years. While dedicated vector machines are no longer a focus, their principles live on in today's multimedia SIMD instructions.
This document discusses various inter-process communication (IPC) types including shared memory, mapped memory, pipes, FIFOs, message queues, sockets, and signals. Shared memory allows processes to directly read and write to the same region of memory, requiring synchronization between processes. Mapped memory permits processes to communicate by mapping the same file into memory. Pipes and FIFOs allow for sequential data transfer between related and unrelated processes. Message queues provide a way for processes to exchange messages via a common queue. Signals are used to asynchronously notify processes of events.
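As a minimal illustration of the pipe mechanism mentioned above (sequential byte transfer between related processes), here is a sketch using `os.pipe` and `fork`; it assumes a Unix-like system and is not taken from the summarized document.

```python
import os

r, w = os.pipe()          # create a unidirectional pipe: read end, write end
pid = os.fork()
if pid == 0:              # child process: writes into the pipe
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:                     # parent process: reads sequentially from the pipe
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode())  # -> hello from child
```

Because the two processes are related by `fork`, the pipe's file descriptors are inherited; unrelated processes would need a named FIFO instead.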
A loader performs key functions like allocating memory, relocating addresses, linking between object files, and loading programs into memory for execution. Different loading schemes are used depending on the needs of the system and programming language. Direct linking loaders allow for relocatable code and external references between program segments through the use of object file records and tables for symbols, relocation, and code loading.
This PPT discusses the dynamic linker as implemented in Linux and its porting to the Solaris ARM platform, starting from the very basics of the linking process.
The document discusses linkers and loaders, describing their functions in combining object files into executable files. It covers the ELF format, static vs dynamic linking, and how executable files are run using static or dynamic linkers. Key points include how static linkers resolve symbols and perform relocation, while dynamic linkers use shared libraries and handle relocation at runtime via the dynamic linker.
The document provides an overview of the TCP/IP protocol suite and OSI model. It discusses the seven layers of the OSI model and their functions. It then explains that the TCP/IP protocol suite consists of five layers that correspond to the bottom four layers of the OSI model, with the top three OSI layers represented by a single application layer in TCP/IP. The document goes on to cover addressing in TCP/IP networks, different versions of the IP protocol, and methods for connecting local area networks.
Loaders and linkers are both system software: the loader loads object code produced by an assembler into memory, while the linker combines the separately assembled blocks of a large program. Both work close to the hardware, and both have machine-dependent as well as machine-independent features.
The document discusses interprocess communication (IPC) and protocols. It describes different IPC paradigms like message queues, semaphores, and shared memory. It also covers unicast and multicast communication, synchronous vs asynchronous operations, data representation for communication between processes, and examples of protocols like HTTP.
Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations, so it is not unusual for a single program to use several of the IPC methods.
IPC methods include pipes and named pipes; message queueing; semaphores; shared memory; and sockets.
Interprocess communication (IPC) allows processes to communicate through message passing. There are two main forms of IPC - synchronous which blocks sending and receiving processes, and asynchronous which is non-blocking for sending. IPC uses protocols like TCP and UDP, with TCP providing reliable, ordered streams and UDP providing unreliable datagram delivery. For communication, data must be serialized into a byte sequence and deserialized on the receiving end.
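The serialize/deserialize step described above can be sketched with the standard library; `json` is only one of many possible wire encodings, used here for illustration.

```python
import json

message = {"op": "get", "file_id": 7}

# Sender: serialize the structured message into a byte sequence for the wire.
wire_bytes = json.dumps(message).encode("utf-8")

# Receiver: deserialize the byte sequence back into a structured object.
decoded = json.loads(wire_bytes.decode("utf-8"))
assert decoded == message
```

Over TCP the receiver also has to delimit messages within the byte stream (for example with a length prefix), whereas UDP preserves datagram boundaries for you.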
IPC allows processes to communicate and share resources. There are several common IPC mechanisms, including message passing, shared memory, semaphores, files, signals, sockets, message queues, and pipes. Message passing involves establishing a communication link and exchanging fixed or variable sized messages using send and receive operations. Shared memory allows processes to access the same memory area. Semaphores are used to synchronize processes. Files provide durable storage that outlives individual processes. Signals asynchronously notify processes of events. Sockets enable two-way point-to-point communication between processes. Message queues allow asynchronous communication where senders and receivers do not need to interact simultaneously. Pipes create a pipeline between processes by connecting standard streams.
The document provides information on the syllabus for semesters V and VI of the B.Sc. Information Technology program at the University of Mumbai for the 2013-2014 academic year. It lists the courses, course codes, titles, and credit hours for each semester. Semester V includes courses on Network Security, ASP.Net with C#, Software Testing, Advanced Java, and Linux Administration. Semester VI includes courses on Internet Technology, Project Management, Data Warehousing, Project Report, and electives on IPR and Cyber Laws, Digital Signal and Systems, and Geographic Information Systems. It provides brief descriptions and references for some sample courses, including the topics to be covered, textbooks, and evaluation criteria.
This document analyzes and compares the performance of different inter-process communication (IPC) mechanisms in Unix-based operating systems, including pipes, message queues, and shared memory. Programs were written to transfer data between processes using each IPC mechanism. Pipes transferred around 95 MB/s, message queues transferred 120 MB/s, and shared memory, being the fastest, transferred around 4 GB/s. Therefore, the analysis showed that shared memory provides the best performance for inter-process communication compared to pipes and message queues.
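A minimal sketch of the shared-memory mechanism that the benchmark above found fastest, using Python's `multiprocessing.shared_memory` (Python 3.8+); the segment size and contents are illustrative, not the benchmark programs themselves.

```python
from multiprocessing import shared_memory

# Writer: create a named shared segment and place bytes in it directly.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Reader (normally a separate process): attach by name and read in place,
# with no copy through the kernel as pipes and message queues require.
peer = shared_memory.SharedMemory(name=shm.name)
data = bytes(peer.buf[:5])

peer.close()
shm.close()
shm.unlink()
```

The absence of per-message kernel copies is what lets shared memory reach throughputs orders of magnitude above pipes, at the cost of needing explicit synchronization between the processes.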
The document provides an overview of the TCP/IP network model and how data is packaged and sent over a network, explaining that data moves through four layers (Application, Transport, Internet, Link) where it is encapsulated with headers at each layer, and that the Transport layer handles reliable transmission using TCP or unreliable transmission using UDP by adding a header and creating a TCP segment or datagram to pass to the Internet layer.
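The layered encapsulation described above can be sketched as headers successively prepended to the payload as it moves down the stack; the header contents here are placeholder strings, not real field layouts.

```python
payload     = b"GET /index.html"             # Application layer data
tcp_segment = b"TCP_HDR|" + payload          # Transport layer adds its header
ip_packet   = b"IP_HDR|" + tcp_segment       # Internet layer wraps the segment
frame       = b"ETH_HDR|" + ip_packet        # Link layer wraps the packet
print(frame)
```

On the receiving host the same headers are stripped in reverse order, one layer at a time.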
Assemblers: Elements of Assembly Language Programming, Design of the Assembler, Assembler Design Criteria, Types of Assemblers, Two-Pass Assemblers, One-Pass Assemblers, Single-Pass Assembler for Intel x86, Algorithm of Single-Pass Assembler, Multi-Pass Assemblers, Advanced Assembly Process, Variants of Assemblers, Design of Two-Pass Assembler.
Implementation of a Deadline Monotonic algorithm for aperiodic traffic schedu…, by Andrea Tino
Polling Server and Nodes synchronize their clocks to emulate TDMA-scheduled communication. A scheduling file assigns slots to components like Polling Server and flow generators. During their slots, generators probabilistically generate aperiodic flows, which are received and scheduled by Polling Server. If Polling Server's queue overflows, new generators are rejected to maintain schedulability.
Processes communicate through interprocess communication (IPC) using two main models: shared memory and message passing. Shared memory allows processes to access the same memory regions, while message passing involves processes exchanging messages through mechanisms like mailboxes, pipes, signals, and sockets. Common IPC techniques include semaphores, shared memory, message queues, and sockets that allow processes to synchronize actions and share data in both blocking and non-blocking ways. Deadlocks can occur if processes form a circular chain while waiting for resources held by other processes.
The document discusses NS3's implementation of WiFi networking. It provides an overview of the WiFiNetDevice and WifiPhy models, and describes the modular implementation including the MAC high, low, and physical layers. It explains concepts like the DcfManager, DcaTxop, rate control algorithms, and provides examples of modifying the WiFi model and using trace sources.
This document provides an overview of parallel computing and parallel processing. It discusses:
1. The three types of concurrent events in parallel processing: parallel, simultaneous, and pipelined events.
2. The five fundamental factors for projecting computer performance: clock rate, cycles per instruction (CPI), execution time, million instructions per second (MIPS) rate, and throughput rate.
3. The four programmatic levels of parallel processing from highest to lowest: job/program level, task/procedure level, interinstruction level, and intrainstruction level.
The document discusses parallelism and techniques to improve computer performance through parallel execution. It describes instruction level parallelism (ILP) where multiple instructions can be executed simultaneously through techniques like pipelining and superscalar processing. It also discusses processor level parallelism using multiple processors or processor cores to concurrently execute different tasks or threads.
TensorRT is an NVIDIA tool that optimizes and accelerates deep learning models for production deployment. It performs optimizations like layer fusion, reduced precision from FP32 to FP16 and INT8, kernel auto-tuning, and multi-stream execution. These optimizations reduce latency and increase throughput. TensorRT automatically optimizes models by taking in a graph, performing optimizations, and outputting an optimized runtime engine.
1) Inter-process communication (IPC) allows processes to communicate and synchronize their actions through message passing.
2) There are two main primitives for message passing - send and receive. Messages have a header and body and can be of fixed or variable size.
3) Message passing can be blocking or non-blocking, where blocking waits for the message and non-blocking continues after sending or receiving.
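The blocking/non-blocking distinction in point 3 can be illustrated with Python's thread-safe `queue.Queue`. Real message-passing IPC crosses process boundaries, so treat this as a same-process sketch of the semantics only.

```python
import queue

mq = queue.Queue()
mq.put("ping")                      # non-blocking send (unbounded queue)

msg = mq.get(block=True)            # blocking receive: waits until a message exists
print(msg)                          # -> ping

try:
    mq.get(block=False)             # non-blocking receive on an empty queue
except queue.Empty:
    print("no message; continuing") # caller carries on instead of waiting
```

A blocking `get` on an empty queue would suspend the caller indefinitely (or until a timeout), which is exactly the behavior the non-blocking form trades away in exchange for having to handle the "no message" case.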
Move Message Passing Interface Applications to the Next Level, by Intel® Software
Explore techniques to reduce and remove Message Passing Interface (MPI) parallelization costs, with practical examples of performance improvements.
In this paper we describe paradigms for building and designing parallel computing machines. First we elaborate on the suitability of the MIMD model for executing diverse applications. We then compare general-purpose and special-purpose parallel computer architectures in terms of cost, throughput, and efficiency, and describe how parallel computer architecture exploits parallelism and concurrency through pipelining. Since pipelining improves a machine's performance by dividing an instruction into a number of stages, we describe how the performance of a vector processor is enhanced by employing multiple pipelines among its processing elements. We also elaborate on RISC architecture and pipelining in RISC machines. Comparing RISC with CISC computers, we observe that although the high speed of RISC computers is desirable, the significance of a computer's speed depends on implementation strategies: CPU clock speed is not the only parameter to consider when moving system software from CISC to RISC; instruction size and format, addressing modes, instruction complexity, and the machine cycles required per instruction should also be weighed, and only considering all of these parameters together yields a performance gain. We discuss multiprocessors and data-flow machines in a concise manner, and then three SIMD (Single Instruction stream, Multiple Data stream) machines: the DEC/MasPar MP-1, systolic processors, and wavefront array processors. The DEC/MasPar MP-1 is a massively parallel SIMD array processor on which a wide variety of number representations and arithmetic systems can be implemented easily; the principal advantage of using its 64×64 SIMD array of 4-bit processors for a computer arithmetic laboratory is its flexibility. Comparing systolic processors with wavefront processors, we find that both are fast and implemented in VLSI. The major drawback of systolic processors is the availability of inputs at each clock tick, caused by propagation delays in the connection buses. Wavefront processors combine the systolic architecture with data-flow machine architecture; although they use an asynchronous data-flow computing structure, timing in the interconnection buses, at input, and at output is not problematic.
Network Simulator 2: a simulation tool for Linux, by Pratik Joshi
The document describes using the Network Simulator 2 (NS2) tool to simulate network scenarios. NS2 is an open-source discrete event network simulator for Linux. The document outlines installing and configuring NS2, including applying a patch to add support for the Stream Control Transmission Protocol (SCTP). It then describes two simulation scenarios using NS2: one monitors SCTP traffic between two nodes transferring FTP data, the other looks at web traffic over six nodes using TCP. Graphs of the SCTP simulation show transmitted packets and bandwidth utilization.
This document contains teaching material on operating-system support in distributed systems from the book "Distributed Systems: Concepts and Design". It discusses key concepts around processes, threads, communication, and operating system architecture to support distributed applications and middleware. The material is made available for teaching purposes and cannot be used without permission.
What is C# used for? Like other general-purpose programming languages, C# can be used to create a number of different programs and applications: mobile apps, desktop apps, cloud-based services, websites, enterprise software and games. Lots and lots of games.
C# (pronounced "see sharp") is a general-purpose, high-level, multi-paradigm programming language. C# encompasses static typing, strong typing, lexical scoping, and the imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines.
The C# programming language was designed by Anders Hejlsberg from Microsoft in 2000 and was later approved as an international standard by Ecma (ECMA-334) in 2002 and ISO/IEC (ISO/IEC 23270) in 2003. Microsoft introduced C# along with .NET Framework and Visual Studio, both of which were closed-source. At the time, Microsoft had no open-source products. Four years later, in 2004, a free and open-source project called Mono began, providing a cross-platform compiler and runtime environment for the C# programming language. A decade later, Microsoft released Visual Studio Code (code editor), Roslyn (compiler), and the unified .NET platform (software framework), all of which support C# and are free, open-source, and cross-platform. Mono also joined Microsoft but was not merged into .NET.
Here are the key points about simulating routing using a hypercube topology in OMNeT++:
- A hypercube network topology connects processors in a cube-like structure, where each processor is connected to other processors that differ in exactly one bit position.
- In OMNeT++, the hypercube network can be modeled as a compound module with submodules representing each processor node.
- The number of nodes is a power of 2 (2^n). Each node is connected to n other nodes, where n is the dimension of the hypercube.
- Connections between nodes are represented by gates and connections in the NED file. Each node has n output gates to connect to other nodes.
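The one-bit-difference rule described above gives each node's neighbors directly: flipping each of the n address bits with XOR enumerates them. A small sketch (function name assumed, independent of the OMNeT++ NED model):

```python
def hypercube_neighbors(node: int, n: int) -> list[int]:
    """Neighbors of `node` in an n-dimensional hypercube of 2**n nodes:
    flipping exactly one bit of the node's address yields each neighbor."""
    return [node ^ (1 << bit) for bit in range(n)]

# In a 3-cube (8 nodes), node 0b101 (5) connects to 0b100, 0b111, 0b001.
print(hypercube_neighbors(5, 3))   # -> [4, 7, 1]
```

The same loop over bit positions is what a NED file's connection section effectively unrolls: node i's gate for dimension `bit` connects to node `i ^ (1 << bit)`.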
The document provides an introduction to parallel programming concepts including shared memory and distributed memory architectures. It discusses multi-threading for shared memory systems where threads within a process share the same memory address space. The document also outlines Foster's methodology for designing parallel programs which includes partitioning work, determining communication needs, aggregating tasks, and mapping tasks to processes. OpenMP pragmas and directives are introduced for writing multi-threaded parallel programs in C using shared memory. Key concepts around scope and data access in OpenMP programs are also summarized.
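OpenMP itself is a C/C++/Fortran API; as a language-neutral sketch of the shared-address-space threading model and partitioning step described above, here is the same idea with Python threads (all names assumed):

```python
import threading

N_THREADS = 4
data = list(range(100))            # shared data: visible to every thread
partial = [0] * N_THREADS          # one private slot per thread avoids a write race

def worker(tid: int) -> None:
    # Each thread reads the shared `data` list (same address space) but
    # works on its own partition -- the "partitioning" step of Foster's method.
    chunk = data[tid::N_THREADS]
    partial[tid] = sum(chunk)

threads = [threading.Thread(target=worker, args=(t,)) for t in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(partial))                # -> 4950
```

The per-thread `partial` slots mirror what an OpenMP `reduction` clause does automatically: each thread accumulates privately, and the results are combined only after the parallel region ends.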
2018-11-06: Unfortunately, LinkedIn/Slideshare disabled the update functionality and, thus, I had to upload an updated version of this introduction to OMNeT++ as new presentation. It is available here: https://www.slideshare.net/christian.timmerer/an-introduction-to-omnet-54
This document discusses processes and threads in distributed systems. It begins by defining key terms like process, thread, and context. It then explains that threads allow blocking calls without blocking the entire process, making them attractive for distributed systems. The document provides examples of how multithreading can improve client and server performance by hiding network latency and enabling simple scaling to multiprocessors. Overall, multithreading is popular for distributed systems because it facilitates organization and parallelism while allowing the use of blocking calls.
The document discusses the .NET framework and Common Language Runtime (CLR). It explains that CLR provides a common execution environment for all .NET languages. When code is compiled, it is converted to an intermediate language (IL) rather than native machine code, allowing it to run on multiple platforms. The runtime just-in-time (JIT) compiles IL to native code during execution. This allows portability and language interoperability.
This document discusses how to write shared libraries. It begins with a brief history of shared libraries, noting that they allow code to be reused across processes by loading it into memory once. It then discusses some of the challenges with early binary formats not being designed for shared libraries, and how Linux initially used a.out but later switched to ELF to address limitations. The document will cover rules for properly using shared libraries to optimize resource usage and structure programs.
The document provides information about the .NET framework including:
- The use of the Common Language Specification (CLS) which defines standards for languages to work under the .NET umbrella.
- The .NET Framework Class Library (FCL) which provides common functions like string manipulation, data structures, IO streams, security, threading and more.
- Basic building blocks of the .NET framework including namespaces, assemblies, and their uses.
- Hardware and software requirements for the .NET framework.
- Popular .NET compatible languages like C#, VB.NET, Jscript.NET and more.
The document describes a virtual lab simulation for analyzing routing protocols in mobile ad hoc networks (MANETs). It includes:
1) Objectives to design and simulate MANETs using NS-2 and analyze protocols like DSR and AODV.
2) A description of how NS-2 allows visually simulating and designing MANETs to see packet movement and how packets are transferred between nodes.
3) Instructions for performing a MANET simulation experiment in the virtual lab, including setting up a network with four nodes and observing how routes are disrupted and rebuilt when nodes move.
The document discusses parallel computing techniques using threads. It describes domain decomposition, where a problem is divided into independent tasks that can be executed concurrently by threads. Matrix multiplication is provided as an example, where each element of the resulting matrix is computed independently by a thread. Functional decomposition, where a problem is broken into distinct computational functions, is also introduced. Programming models for threads in Java, .NET and POSIX are overviewed.
1-Information sharing
2-Computation speedup
3-Modularity
4-Convenience
5-Exchange of data and information
Two IPC Models
1. Shared memory - an OS-provided abstraction which allows a memory region to be simultaneously accessed by multiple programs with the intent of providing communication among them. One process creates an area in RAM which other processes can access.
2. Message passing - a form of communication used in interprocess communication. Communication is made by sending messages to recipients; each process must be able to name the other processes. The producer typically uses the send() system call to send messages, and the consumer uses the receive() system call to receive messages.
Shared memory
Faster than message passing
After establishing shared memory, treated as routine memory accesses
Message passing
Useful for exchanging smaller amounts of data
Easier to implement, but more time-consuming because each exchange requires kernel intervention
Bounded-Buffer Problem Producer Process
do {
...
produce an item in nextp
...
wait(empty);
wait(mutex);
...
add nextp to buffer
...
signal(mutex);
signal(full);
} while (true);
Bounded-Buffer Problem Consumer Process
do {
wait(full);
wait(mutex);
...
remove an item from buffer to nextc
...
signal(mutex);
signal(empty);
...
consume the item in nextc
...
} while (true);
In the client-server model, the client sends requests to the server; the server does some processing with the request(s) received and returns a reply (or replies) to the client.
Since sockets can be described as endpoints for communication, we can imagine the client and server hosts being connected by a pipe through which data flows.
1-Sockets use a client-server model: the server waits for incoming client requests by listening on a specified port.
2-After receiving a request, the server accepts a connection from the client socket to complete the connection.
3-Remote procedure call (RPC) abstracts the procedure-call mechanism for use between systems with network connections.
4-Pipes act as a conduit allowing two processes to communicate.
A process is different than a program
- Program is static code and static data
- Process is Dynamic instance of code and data
-Program becomes process when executable file loaded into memory
No one-to-one mapping between programs and processes
-can have multiple processes of the same program
-one program can invoke multiple processes
Execution of a program is started via GUI mouse clicks or command-line entry of its name
The process state transitions
As a process executes, it moves through the following states:
- New: the process is being created
- Ready: the process is waiting to be assigned to a processor
- Running: instructions are being executed
- Waiting: the process is waiting for some event to occur
- Terminated: the process has finished execution
The document discusses the .NET platform and framework. It provides an overview of the key components of .NET including the Common Language Runtime (CLR) environment that executes programs, the Framework Class Library (FCL) base classes and libraries, and support for multiple programming languages. It also describes concepts like application domains, marshaling objects across boundaries, and how programs are compiled to Microsoft Intermediate Language (MSIL) and executed.
Wissbi is an open source toolset for building distributed event processing pipelines easily. It provides basic commands like wissbi-sub and wissbi-pub that allow receiving and sending messages. Filters can be written in any language and run in parallel as daemon processes configured through files. This allows constructing complex multi-stage data workflows. The ecosystem also includes tools like a log collector and metric collector that use Wissbi for transport. It aims to minimize operating effort through a simple design that relies mainly on filesystem operations and standard Unix tools and commands.
The document discusses Microsoft's .NET framework. It defines .NET as a new platform for developing and running software applications that features ease of development of web services and interoperability between programming languages. It then goes on to describe key concepts in .NET including the Common Language Runtime (CLR), assemblies, application domains, garbage collection, and serialization.
This presentation compares threads in Win32 and POSIX systems. It discusses that threads are lighter weight than processes, sharing resources within a process. Win32 threads interface is at a higher level than POSIX threads. While synchronization methods like mutexes are similar, events are specific to Win32, and POSIX uses semaphores and condition variables. Critical sections in Win32 are faster than POSIX mutexes but only for intra-process use. The document provides references for further reading on threads and porting Windows IPC applications to Linux.
Evaluation of resource discovery protocol in ad-hoc-sharing networks
Submitted to
Dr. Kavitha Ranganathan
Associate Professor at IIM Ahmedabad
By
Ines Khandelwal
ines.khandelwal@gmail.com
Swayam Tibrewal
swayam2607@gmail.com
Objective:
Implementation of Scalable Broadcast Algorithm for file sharing in a Mobile Ad Hoc Network.
Software Used
Omnet++ (https://omnetpp.org/)
Version: OMNeT++ 5.0b3, released Wednesday, 09 December 2015 15:09
Omnet++ is a discrete event simulator: an extensible, modular, component-based C++
simulation library and framework for building network simulators.
We chose Omnet++ because it is one of the most popular open-source network
simulators used for MANET- and VANET-related simulations. As a trend, we found that most
papers before 2008 used ns-2 (no longer under active development) as the simulator; after
that, most papers preferred Omnet++ or simulators built on top of it, such
as MiXiM and MANET-INET, which were later deprecated and became part of the INET Framework.
Omnet++'s biggest advantage is the presence of libraries such as INET, which is built on top of
Omnet++ especially for wireless and mobile networks.
INET (https://inet.omnetpp.org/)
Version: INET 3.2 Released Thursday, 17 December 2015 15:03
Operating System:
Ubuntu 14.04 LTS
Windows 10
However, for debugging, only Ubuntu was used due to trouble with debugging on Windows. The
final implementation was nevertheless tested on both machines.
Installation directions and guidelines can be found on the respective sites. However, if the
supplied zip is used, installation will not be needed.
Scalable Broadcasting Algorithm
Source: On the Reduction of Broadcast Redundancy in Mobile Ad Hoc Networks (Wei Peng,
Xi-Cheng Lu)
1) For source s, it just broadcasts messages to all its neighbors and ignores duplicate messages
received later.
2) For any other node, say u, when it receives a broadcast message m from node r, it performs
the following operations:
a) If N(u) is a subset of N(r) U {r}, then no rebroadcast need be performed, and duplicates
received later will be dropped.
b) Otherwise, if the message is being received for the first time, let C(u,m) = N(r) U {r}, and schedule a
rebroadcast by delaying the rebroadcast operation for a random period. During this period,
any successive duplicates will be discarded, and in the meantime the nodes covered
by their transmissions will be recorded in the broadcast cover set. That is, if m is a
duplicate, let C(u,m) = C(u,m) U N(r) U {r}, and discard m.
c) After the delay period has expired, if N(u) is a subset of C(u,m), cancel the rebroadcast; otherwise,
rebroadcast the message m. Duplicate messages received later will be ignored.
File Sharing Network
The aim is to simulate a Mobile Ad Hoc Network where a node needs a particular file, not
available to it.
In this process, it first searches for the file in the network and must tabulate the routes to
all possible replicas. The routes to these replicas must then be stored in a data structure
which is a property of only that node; then, depending on the calculation of a stability
metric, the node must decide on the route from which it should receive the file.
Note: The implemented module is henceforth referred to as IIM Model.
Implementation
The IIM Model has been built on top of AODV.
Although our primary work was the broadcasting protocol, which was successfully
implemented using SBA, the routing protocols had to be considered because routes back to a node
must be stored.
These ideas were borrowed from the AODV routing protocol, which is already implemented in
the INET Framework.
The entire code has been written on top of the AODV framework. The original AODV files were
copied, renamed to IIM, and then modified. Also, by making changes to the linkages in the
INET namespace, we were able to ensure that IIM was registered as a proper protocol in the
INET codebase.
INET Usage
INET maintains a separate structure for all layers as per the OSI model. For easy
reference, all commands to be typed in the Terminal are in bold letters.
Building Source Files
Since IIM is implemented as part of the main INET Framework, the files must be rebuilt
whenever a change is made to them.
To do this, navigate to the inet root directory and in a Terminal type
make makefiles
This command generates the makefile corresponding to the source files.
Then type
make
This command builds the source; when the files are built for the first time, it takes 20-25
minutes. It generates libINET.so ("library INET"), a shared library which the simulations
will use. This library is created in the "inet/src" directory.
Also, do not try to modify the makefile after it has been created.
src Directory
The src Directory (“inet/src”) has the library and all the major sources.
Navigate to (“inet/src/inet/routing/iim”) to look at the source files for IIM Protocol
1. IIM.ned which defines the IIM simple module with its gates and
parameters, and contains its documentation as a comment (so that it can later
be extracted into HTML by the opp_neddoc tool)
2. IIMControlPacket.msg which defines the architecture of the messages that will be sent
to nodes in IIM
3. IIMRouting.h which contains the simple module class (class IIMRouting : public
cSimpleModule) with its corresponding changes.
4. IIMRouting.cc which contains the implementation (member functions) of the
IIMRouting class
5. IIMRouteData.h and IIMRouteData.cc which define a few parameters for the
Route taken by a packet.
Navigate to (“inet/src/inet/node/iim”) to look at the Router used in the IIM Simulation.
examples Directory
Navigate to (“inet/examples/iim”) to look at the files used in the simulation
There are two major files in this directory:
1. IIMNetwork.ned: this file defines the network and its modules
2. omnetpp.ini: this file defines all the simulation parameters
Debugging
There is no proper documentation available for using debuggers with INET. However, the following trick
works: instead of generating the library, which happens by default, an executable corresponding to
the INET sources can be generated by tweaking the process of making the makefiles. However, to use
it, the simulations need to be run differently.
Tweak for makefiles:
For this navigate to inet root Directory and in Terminal type
make makefiles-exe
This command generates the makefile corresponding to the source files.
Then type
make
This command builds the source; when the files are built for the first time, it takes 20-25
minutes. In this scenario, it generates INET (an executable file).
For some errors, this executable can then be run under gdb and valgrind to analyze what is
happening. However, this works only for segmentation faults and not for other kinds of faults
which may be causing the simulation to crash. In the case of certain other faults, the
simulation log files must be analyzed.
To run the executable, all files in the examples directory must either be given as parameters or
moved to the src directory.
File Matching Function
requests. This feature is removed in the IIM Model by forcing the node to send periodic HELLO
messages to all its neighbours; the node now keeps track of all its neighbours by
generating new HELLO messages after a certain fixed interval.
These two aspects ensure that the Routing table of RFC 3561 Specification is tweaked to act as
Neighbour Table too.
The stability metric was to be calculated on the basis of the time at which routes are created, and
the routing table lets us record the time at which a route is created or updated. This is built into
AODV, as routes become invalid if Current SimTime - Time of Last Update rises above a
certain limit. These times, which are a property of each "route" stored in the routing table, can be
used to determine the stability of the route via the same Current SimTime - Time of Last Update
quantity.
The IIM Model is extensible in this direction but this feature has not been implemented.
SBA Algorithm
Implementation Strategy:
1. We have a global variable of the nodes the message has passed through. This is implemented
as a set of visited nodes which is available to all the nodes and is updated when the
broadcast message looking for files reaches them.
2. Most algorithms try to prevent duplicates by maintaining a
copy of which message was received or broadcast. For instance, if a neighbour of the
source broadcasts, the source node will naturally receive the message as well and
must handle it.
3. Once a node has broadcast, all its neighbours and the node itself have already
been accounted for in the global set of visited nodes. Hence, for the same simulation
run, that node cannot broadcast again. However, there is a time cost
associated with comparing the list of visited nodes with the neighbour set,
which has not been taken into account and may matter if the algorithm is scaled up.
4. Alternatively, a strategy of marking nodes as visited and having them stop or ignore all
messages received after they have broadcast, or decided not to broadcast,
also poses problems if the simulation needs to be extended beyond resource discovery:
there can be messages other than those for resource discovery which cannot be ignored.
5. Now, if the node is the originator, its neighbours are added and it always broadcasts.
6. Now we reach a node A, with B and C as its neighbours.
7. Two cases arise:
1. B and C are already covered. Then this node simply deletes the packet
in question and won't rebroadcast.
2. At least one of B and C is not covered. There has to be a random delay.
After the delay there are only two possibilities: A broadcasts or it does not. During the
delay, newly covered nodes are accounted for.
i. To implement the delay, the node schedules a self-message called
RADTimer, which fires after a certain random delay.
ii. This self-message instructs the node, after that delay, to proceed as it
did before: compare against the global covered set and see if it should rebroadcast
or not.
Expanding Ring SBA
Here we first restrict the search to, say, n hops, and then to n+m, n+2m hops, and so on. Say each hop
takes one unit of time. We use the parameter TTL, or Time To Live, as a measure of the number of
hops for which a message should survive.
The TTL of a message can then be changed accordingly:
Initially TTL = n.
On encountering a new node, the TTL is decreased by one; Omnet does this automatically.
When TTL = 0 and no reply has been received, the TTL is increased by m for the next attempt.
Terminating condition: either all nodes are covered or a reply was received.
So the originator sends with a TTL of n, which is decremented along the way; if no reply comes
back, the TTL is increased by m in subsequent runs.
Simulation Working
Hello messages occur throughout the simulation as per certain parameters defined in the .NED
file.
Broadcast Messages are generated by the node which wants to find the file.
Even though these are broadcast messages, when they go back to their originator they must
carry back two things: the stability metric of the route and the route to that node in the routing
table.
Eventually multiple routes will arrive, and the protocol must choose the one with the fewest hops as the
most optimal route.