This document surveys three network simulation tools: NS, OPNET, and OMNeT++. It provides an overview of NS, including its input topologies, output validation methods, supported protocols, limitations, and future development towards parallel distributed simulation. The document also briefly introduces OPNET as a commercial tool and its key features such as graphical modeling, discrete event simulation, and data analysis tools.
As the complexity of the scan algorithm is dependent on the number of design registers, large SoC scan
designs can no longer be verified in RTL simulation unless partitioned into smaller sub-blocks. This paper
proposes a methodology to decrease scan-chain verification time utilizing SCE-MI, a widely used
communication protocol for emulation, and an FPGA-based emulation platform. A high-level (SystemC)
testbench and FPGA synthesizable hardware transactor models are developed for the scan-chain ISCAS89
S400 benchmark circuit for high-speed communication between the host CPU workstation and the FPGA
emulator. The emulation results are compared to other verification methodologies (RTL Simulation,
Simulation Acceleration, and Transaction-based emulation), and found to be 82% faster than regular RTL
simulation. In addition, the emulation runs in the MHz speed range, allowing the incorporation of software applications, drivers, and operating systems, as opposed to the Hz range in RTL simulation or the sub-megahertz range accomplished in transaction-based emulation. Furthermore, the integration of scan testing and acceleration/emulation platforms allows more complex DFT methods to be developed and tested on a large-scale system, decreasing the time to market for products.
FAULT MODELING OF COMBINATIONAL AND SEQUENTIAL CIRCUITS AT REGISTER TRANSFER ... (VLSICS Design)
This document summarizes research on modeling faults at the register transfer level (RTL) for digital circuit testing. It proposes a new RTL fault model that models stuck-at faults by inserting buffers for each bit in the variables of the RTL code. Fault simulation is performed on faulty circuits generated from the RTL code to determine fault coverage. Results on combinational and sequential circuits show the RTL fault coverage obtained matches closely with gate-level fault coverage obtained through logic synthesis and gate-level fault simulation. The proposed RTL fault model provides a way to estimate fault coverage earlier in the design cycle compared to traditional gate-level fault simulation.
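To make the fault-coverage computation concrete, here is a minimal Python sketch of single-stuck-at fault simulation. The full-adder netlist, fault list, and exhaustive test vectors are illustrative stand-ins, not the paper's RTL buffer-insertion flow.

```python
# Hedged sketch: single-stuck-at fault simulation on a toy 1-bit full
# adder, illustrating how fault coverage is computed. The circuit and
# fault list are illustrative, not taken from the paper.

def full_adder(a, b, cin, fault=None):
    """Evaluate a full adder; `fault=(net, value)` forces one net to 0 or 1."""
    nets = {"a": a, "b": b, "cin": cin}
    def v(name):  # read a net, applying the stuck-at fault if present
        if fault and fault[0] == name:
            return fault[1]
        return nets[name]
    nets["x"] = v("a") ^ v("b")
    nets["sum"] = v("x") ^ v("cin")
    nets["carry"] = (v("a") & v("b")) | (v("x") & v("cin"))
    return v("sum"), v("carry")

# Fault list: every net stuck-at-0 and stuck-at-1 (12 faults total).
faults = [(n, s) for n in ("a", "b", "cin", "x", "sum", "carry") for s in (0, 1)]
vectors = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

detected = set()
for vec in vectors:
    good = full_adder(*vec)                 # fault-free reference response
    for f in faults:
        if full_adder(*vec, fault=f) != good:
            detected.add(f)                 # output differs: fault detected

coverage = 100.0 * len(detected) / len(faults)
print(f"fault coverage: {coverage:.1f}%")   # exhaustive vectors detect all 12
```

Comparing this number against gate-level coverage from synthesized netlists is the correlation the paper reports.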
COVERAGE DRIVEN FUNCTIONAL TESTING ARCHITECTURE FOR PROTOTYPING SYSTEM USING ... (VLSICS Design)
Time and effort spent on functional testing of digital logic is a large portion of the overall project cycle in the VLSI industry. Progress in functional testing is measured by functional coverage, where the test plan defines what needs to be covered and the test results indicate the quality of the stimulus. Claiming closure of functional testing requires that functional coverage hit 100% of the original test plan. Depending on the complexity of the design and the availability of resources and budget, various methods are used for functional testing. Software simulation using logic simulators, available from Electronic Design Automation (EDA) companies, is the primary method. The next level is pre-silicon verification using Field Programmable Gate Array (FPGA) prototypes and/or emulation platforms for stress testing the Design Under Test (DUT). The purpose of all these efforts is to gain confidence in the maturity of the DUT and ensure first-time silicon success that meets the time-to-market needs of the industry. For any test environment, the bottleneck in achieving verification closure is controllability and observability, that is, the quality of the stimulus in unearthing issues at an early stage, together with coverage calculation. Software simulation, FPGA prototyping, and emulation each have their own limitations, be it test time, ease of use, or the cost of software, tools, and hardware platforms. Compared to software simulation, FPGA prototyping and emulation pose greater challenges in quality stimulus generation and coverage calculation. Many researchers have addressed bug detection and localization, but very few have touched the concept of quality stimulus generation that leads to better functional coverage and thereby uncovers hidden bugs in an FPGA prototype verification setup. This paper presents a novel approach to these issues by embedding a synthesizable active agent and coverage collector into the FPGA prototype. The proposed architecture has been evaluated through functional and stress testing of the Universal Serial Bus (USB) Link Training and Status State Machine (LTSSM) logic module as the DUT in an FPGA prototype. The proposed solution is fully synthesizable and hence can be used both in software simulation and in the prototype system. The biggest advantage is the plug-and-play nature of the active-agent component, which allows its reuse in any USB 3.0 LTSSM digital core.
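The coverage-collector idea can be modeled in software as a set of bins that count which planned states the stimulus actually reached. This is a hedged illustration only; the LTSSM state names below are an assumed subset, and the paper's collector is synthesizable hardware, not Python.

```python
# Hedged sketch of a functional coverage collector, modeled in software.
# The USB LTSSM state names are illustrative assumptions.

class CoverageCollector:
    def __init__(self, bins):
        self.hits = {b: 0 for b in bins}   # one counter per coverage bin

    def sample(self, value):
        if value in self.hits:
            self.hits[value] += 1          # record that the bin was reached

    def coverage(self):
        covered = sum(1 for c in self.hits.values() if c > 0)
        return 100.0 * covered / len(self.hits)

# Coverage bins: states the test plan says must be visited.
ltssm = CoverageCollector(["Rx.Detect", "Polling", "U0", "U1", "U2", "U3", "Recovery"])

# Stimulus drives the DUT; each observed state transition is sampled.
for state in ["Rx.Detect", "Polling", "U0", "Recovery", "U0", "U1"]:
    ltssm.sample(state)

print(f"{ltssm.coverage():.1f}% of planned states covered")
```

A test plan is "closed" only when every bin has a non-zero hit count, which is the 100% criterion described above.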
The document introduces two MATLAB-based LTE simulators for link and system level simulations. The source codes are available under an academic license, allowing researchers to reproduce wireless communications research. The link level simulator models physical layer aspects like channel estimation and MIMO detection. The system level simulator focuses on network issues like scheduling and interference. Together the simulators enable the investigation and comparison of algorithms in a standardized LTE environment.
The document proposes a design automation tool that represents a system from the behavioral level down to the transaction level for virtual bus-based platforms. It aims to allow for rapid system exploration and speed up the design process. Previous works are discussed that did not consider communication issues or allow automatic generation. The proposed tool includes representations of control and data flow, a design flow from block to system level, and methodologies for reducing computation and communication with translations to transaction level models.
The development of embedded applications (such as Wireless Sensor Network protocols) often requires a shift to formal specifications. To ensure the reliability and performance of WSNs, such protocols must be designed following methods that reduce the error rate. Formal methods (such as automata, Petri nets, algebras, and logics) have been widely used in the specification, analysis, and verification of these protocols. Their implementation is then an important phase in deploying, testing, and using those protocols in real environments. The main objective of the current paper is to formalize the transformation from a high-level specification (in Timed Automata) to a low-level implementation (in the NesC language on the TinyOS system) and to automate this transformation. The proposed approach defines a set of rules that allow the passage between these two levels. We implemented our solution and illustrated the proposed approach on a protocol case study for "humidity" and "temperature" sensing in WSN applications.
Accelerating SystemVerilog UVM based VIP to improve methodology for verifica... (VLSICS Design)
In this paper we present the development of Acceleratable UVCs from standard UVCs in SystemVerilog and their usage in a UVM-based verification environment for Image Signal Processing designs to increase run-time performance. The paper covers the development of Acceleratable UVCs from standard UVCs for the internal control and data buses of the ST imaging group, partitioning transaction-level components and cycle-accurate signal-level components between the software simulator and the hardware accelerator, respectively. A Standard Co-Emulation API: Modeling Interface (SCE-MI) compliant, transaction-level communications link is established between testbenches running on a host system and the emulation machine. Accelerated Verification IPs are used in the UVM-based verification environment of Image Signal Processing designs with both simulator and emulator, since UVM acceleration is an extension of standard simulation-only UVM and is fully backward compatible with it. Acceleratable UVCs significantly reduce development-schedule risk while leveraging the transaction models used during simulation.
We also discuss our experiences adopting a UVM-based methodology on TestBench-Xpress (TBX) technology, step by step, and compare the run-time performance of the earlier simulator-only environment with the new hardware-accelerated environment. Although this paper focuses on the development of Acceleratable UVCs and their usage for image signal processing designs, the same concept can be extended to non-image signal processing designs.
MODIFIED MICROPIPELINE ARCHITECTURE FOR SYNTHESIZABLE ASYNCHRONOUS FIR FILTER ... (VLSICS Design)
This paper proposes a novel asynchronous architecture for a finite impulse response (FIR) filter that is synthesizable and can be implemented using standard synchronous design tools and flows. The architecture is based on a modified micropipeline approach using a four-phase bundled data protocol. An extra control element is added to prevent tokens from propagating uncontrolled, ensuring samples move through pipeline stages properly. Edge-triggered flip-flops replace level-sensitive latches to avoid data corruption. The design is modeled in SystemVerilog and implemented on an FPGA. Testing shows it functions correctly compared to a synchronous FIR implementation, with reduced latency but slightly more area. This approach allows for easier adoption of asynchronous circuits in digital signal processing.
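The four-phase bundled-data protocol mentioned above can be illustrated with a small Python model of the request/acknowledge sequence that moves one token between stages. This models only the protocol's event ordering, not the paper's actual RTL.

```python
# Hedged sketch of the four-phase bundled-data handshake that moves a
# token (data sample) between two micropipeline stages. Stage and signal
# names are illustrative.

def four_phase_transfer(data, log):
    """One transfer = four phases: req+, ack+, req-, ack-."""
    log.append("req=1 (sender asserts request, data lines valid)")
    latched = data                      # receiver captures on the rising request
    log.append("ack=1 (receiver acknowledges capture)")
    log.append("req=0 (sender withdraws request)")
    log.append("ack=0 (receiver ready for the next token)")
    return latched

log = []
samples = [3, 7, 1]                     # FIR input samples (illustrative)
received = [four_phase_transfer(s, log) for s in samples]

assert received == samples              # every token arrives exactly once
assert len(log) == 4 * len(samples)     # four handshake phases per token
```

The paper's extra control element enforces exactly this "one token per complete handshake" discipline so samples cannot propagate uncontrolled through the pipeline.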
Network Simulator 2: a simulation tool for Linux (Pratik Joshi)
The document describes using the Network Simulator 2 (NS2) tool to simulate network scenarios. NS2 is an open-source discrete event network simulator for Linux. The document outlines installing and configuring NS2, including applying a patch to add support for the Stream Control Transmission Protocol (SCTP). It then describes two simulation scenarios using NS2: one monitors SCTP traffic between two nodes transferring FTP data, the other looks at web traffic over six nodes using TCP. Graphs of the SCTP simulation show transmitted packets and bandwidth utilization.
This paper presents an implementation of an IPv6 stack within the network simulator NS-3. The implementation adds support for key IPv6 features like neighbor discovery and multihoming. It describes the architecture of NS-3 and how it currently only supports IPv4. Then it discusses the key components and mechanisms of IPv6, followed by details of the authors' implementation of IPv6 support in NS-3, including neighbor discovery. It presents simulation scenarios demonstrating IPv6 features like multihoming and dual stack operation.
This document compares the performance of four network simulators (ns-2, ns-3, OMNeT++, and GloMoSim) in simulating a MANET routing protocol to identify the optimal simulator. It discusses each simulator and selects AODV as the routing protocol for comparing the CPU utilization, memory usage, computation time, and scalability of the simulators.
Network Analyzer and Report Generation Tool for NS-2 using TCL Script (IRJET Journal)
This document describes a tool called the ARGT (Analyzer and Report Generation Tool) for NS-2 that allows users to generate TCL script files to model network scenarios in a flexible way. The tool provides a graphical user interface where users can create wired or wireless network topologies by adding nodes and links, and allows configuration of network protocols and applications. It then generates a TCL script that can be run directly in NS-2 to simulate the network and produce output files. The document evaluates the tool's ability to analyze simulation results for metrics like throughput, delay, and jitter. It finds that the ARGT is an improvement over previous tools as it integrates TCL script generation, simulation, and performance analysis into a single tool.
A3: application-aware acceleration for wireless data networks (Zhenyun Zhuang)
This document discusses application-aware acceleration (A3) for improving application performance over wireless networks. It presents results showing that while enhanced transport protocols improve performance for FTP, they provide little benefit for other popular applications like CIFS, SMTP, and HTTP. This is because the behavior of these applications, designed for reliable LANs, negatively impacts their performance over lossy wireless links. The document proposes A3 as a middleware solution that offsets these behavioral problems through application-specific design principles, while remaining transparent to applications.
The document discusses various network simulation tools. It begins by defining network simulation as using software to model the performance of a computer network by analyzing relationships between network entities. It then discusses several specific tools: NS2, NS3, Netkit, Marionnet, OPNET, and QualNet. For each tool it provides a brief description of its features, uses, and capabilities for simulating different network types and protocols without requiring physical hardware, giving an overview of how these tools can be used to model and test computer networks in a virtual environment.
The goal of the project "An optic's life" is to predict when an optical transceiver will reach its actual end of life, based on the actual setup in the datacenter/colocation facility.
This document provides an overview of using the OPNET network simulation software. It notes that OPNET is required for the TCM-250 course and can only be accessed in the school's lab. The document then covers basic probability concepts and terminology needed to understand network simulations, such as probability distribution functions and how they are used to model things like message sizes and interarrival times. It also summarizes some of OPNET's capabilities for modeling different network types and technologies.
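How a simulator turns a probability distribution into concrete interarrival times can be shown in a few lines of Python. This is a generic illustration of inverse-transform sampling for exponential (Poisson-process) arrivals; the rate parameter is an assumed example, not something from the OPNET material.

```python
# Hedged sketch: sampling message interarrival times from an exponential
# distribution, a common modeling choice for arrival processes. The rate
# value is illustrative.

import math
import random

random.seed(1)                      # reproducible run
rate = 2.0                          # mean arrival rate: 2 messages/second

# Inverse-transform sampling: if U ~ Uniform(0,1], then -ln(U)/rate
# follows an Exponential(rate) distribution.
interarrivals = [-math.log(1.0 - random.random()) / rate
                 for _ in range(10_000)]

mean = sum(interarrivals) / len(interarrivals)
print(f"empirical mean interarrival: {mean:.3f} s (theory: 1/rate = 0.5 s)")
```

Distributions for message sizes work the same way: draw a sample per message, and the simulator's statistics converge to the theoretical moments as the run length grows.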
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Study of Various Network Simulators (IRJET Journal)
This document provides an overview of various network simulators. It discusses the concepts and uses of network simulation. Several popular network simulators are described, including NS2, NS3, OPNET, OMNeT++, NETSIM and QualNet. For each simulator, the key features, programming languages, advantages and limitations are summarized. The document concludes that network simulators allow testing of networks and protocols in a cost-effective manner compared to physical test beds.
The document provides an overview of NS, an event-driven network simulator developed at UC Berkeley. NS simulates networks and protocols using a combination of C++ and OTcl. It implements common network components like routers, queues, and protocols. The simulator is controlled via an OTcl interface. A user writes an OTcl script to set up a network topology by connecting components, schedule events to transmit packets, and analyze results. NS separates data and control paths for efficiency, exposing C++ components through OTcl for configuration. This allows flexible and extensible simulation of local and wide area networks.
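The event-driven core that NS is built around can be sketched in Python as a time-ordered event queue drained by a scheduler. The class and method names below are illustrative; NS's real implementation is split between C++ components and an OTcl control interface.

```python
# Hedged sketch of a discrete-event simulation kernel: events are kept in
# a priority queue ordered by timestamp and executed in order.

import heapq

class Scheduler:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0                    # tie-breaker for equal timestamps

    def at(self, time, action):
        heapq.heappush(self._queue, (time, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()                     # advance the clock, fire the event

sched = Scheduler()
trace = []

# A "node" transmits a packet; the "link" delivers it 1.5 time units later.
def send(pkt):
    trace.append((sched.now, f"send {pkt}"))
    sched.at(sched.now + 1.5, lambda: trace.append((sched.now, f"recv {pkt}")))

sched.at(0.0, lambda: send("p0"))
sched.at(1.0, lambda: send("p1"))
sched.run()

print(trace)  # events fire in timestamp order: send p0, send p1, recv p0, recv p1
```

An NS OTcl script plays the role of the setup code here: it builds the topology and posts the initial events, then hands control to the scheduler.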
Network simulation involves using software to model the performance of a computer network by analyzing relationships between network components like links, switches, routers and nodes. A network simulator specifically predicts network performance by creating a virtual model of the network that can be manipulated to evaluate how the network would perform under different conditions. A network emulator takes the additional step of allowing real applications to run over the virtual network to assess performance and optimize decision making. Common open-source network simulators include NS-2, NS-3, Netkit and Marionnet, while commercial options include OPNET and QualNet. NS-3 is currently one of the best network simulators as it provides an open-source platform for modeling wired, wireless and mobile networks along with extensive
This project aims to analyze and emulate anomaly detection techniques for low-rate TCP denial of service attacks using the DETERLab testbed. The researchers plan to design an extensive anomaly checkpoint detection methodology. They propose a modified likelihood ratio algorithm to detect changes in network traffic statistics. The algorithm will be tested on legitimate and attack traffic in DETERLab while analyzing detection statistics and congestion windows. Results will help evaluate the ability to rapidly detect attacks while limiting false alarms.
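As a rough illustration of change detection on traffic statistics, here is a standard CUSUM detector, a common likelihood-ratio-style technique; the authors' modified algorithm, its parameters, and the DETERLab traffic differ, and the numbers below are synthetic.

```python
# Hedged sketch: CUSUM change detection on a per-interval packet count.
# Traffic values, target mean, slack, and threshold are all illustrative.

def cusum(samples, target_mean, slack, threshold):
    """Return the index where cumulative positive drift exceeds threshold."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - slack))   # accumulate upward drift
        if s > threshold:
            return i                 # change (possible attack burst) detected
    return None                      # no alarm raised

# Packets per interval: normal rate around 10, then a burst pushes it up.
traffic = [10, 11, 9, 10, 12, 10, 18, 19, 20, 21]
alarm = cusum(traffic, target_mean=10, slack=1, threshold=15)
print("alarm at sample", alarm)
```

The tension the project studies is visible even here: a lower threshold detects the burst sooner but raises more false alarms on legitimate traffic fluctuations.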
Application-Aware Acceleration for Wireless Data Networks: Design Elements an... (Zhenyun Zhuang)
This document discusses an approach called Application-Aware Acceleration (A3) to improve application performance over wireless networks. It finds that while transport layer protocols improve performance for FTP, they provide little benefit for other applications like CIFS, SMTP, and HTTP due to the applications' behaviors. A3 addresses this by using principles like transaction prediction, prioritized fetching, and redundant transmissions to offset applications' typical problems when used over wireless networks. The document presents the motivation and design of A3, and evaluates its effectiveness through emulations and a proof-of-concept prototype using NetFilter.
This document is a final report submitted by Ambreen Zafar for a course on advanced computer networks. It summarizes her simulation of routing misbehavior in mobile ad hoc networks (MANETs) using the NS-2 network simulator. The simulation categorized misbehaving nodes and used watchdog and path rater techniques to identify them and help routing protocols avoid these nodes. The simulation found that these techniques increased throughput by 17-27% in the presence of 40% misbehaving nodes, while increasing overhead transmissions from 9-12% up to 17-24%.
This document contains instructions for conducting network simulation experiments using the NCTUns simulator. It discusses setting up NCTUns, drawing network topologies, editing node properties, running simulations, and performing post-analysis. Experiment 1 involves simulating a 3-node point-to-point network with duplex links, varying the bandwidth, and measuring the number of dropped packets. The steps provided outline how to draw the topology in NCTUns and configure the nodes before running the simulation.
Network simulators are software programs that model and predict the behavior of networks without requiring an actual physical network, allowing new network designs and protocols to be tested. NS2 is a popular open-source network simulator that is discrete event-driven and object-oriented, using C++ for the backend simulation and OTcl for the frontend configuration. It lets users easily simulate network topologies, specify nodes and links, and test different network protocols. The output is visualized through a NAM animation window, and performance data can be analyzed using graphs generated by the XGraph utility. NS2 has made it simple for researchers to simulate and study networks and to overcome practical challenges in the field.
Performance Improvement of Wireless Network using Modern Simulation Tools (IRJET Journal)
This document summarizes a research study that used the ns-3 network simulator to analyze the performance of two routing protocols - Optimized Link State Routing (OLSR) and Adhoc On-demand Distance Vector (AODV) - in a wireless ad hoc network under different conditions. The study varied parameters like packet size, number of nodes, and hello interval (the frequency at which routing information is broadcast) and measured metrics like throughput, delay, jitter, packet delivery ratio, packet loss, and congestion window. The results showed how the performance of the two protocols was impacted by changes to these parameters. The goal was to better understand congestion control and avoidance in wireless ad hoc networks through simulation.
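The post-processing step behind metrics like these (packet delivery ratio, delay, throughput) can be sketched from a send/receive trace. The trace values and packet size below are synthetic, not the study's ns-3 data.

```python
# Hedged sketch: computing packet delivery ratio, mean delay, and
# throughput from a simulation trace. The trace is illustrative.

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}    # packet id -> send time (s)
recv = {1: 0.05, 2: 0.17, 4: 0.41}             # packet 3 was dropped
size_bytes = 512                               # fixed packet size (assumed)

pdr = 100.0 * len(recv) / len(sent)            # delivered / sent

delays = [recv[i] - sent[i] for i in recv]     # per-packet one-way delay
mean_delay = sum(delays) / len(delays)

duration = max(recv.values()) - min(sent.values())
throughput_kbps = len(recv) * size_bytes * 8 / duration / 1000

print(f"PDR {pdr:.0f}%, mean delay {mean_delay:.3f} s, "
      f"throughput {throughput_kbps:.1f} kb/s")
```

Jitter would be derived the same way, as the variation between consecutive per-packet delays.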
VEGAS: Better Performance than Other TCP Congestion Control Algorithms on MANETs (CSCJournals)
The document analyzes the performance of six TCP congestion control algorithms (BIC, Cubic, Compound, Vegas, Reno, and Westwood) on mobile ad hoc networks (MANETs) using network simulator 2 (NS2). Simulation results show that the Vegas algorithm provided better and more stable throughput than the other algorithms over the entire simulation time, both with and without node mobility. While BIC achieved the highest throughput after 75 seconds, Vegas was the only algorithm that maintained almost constant throughput from the start to end of the 200 second simulations. Therefore, the document concludes that Vegas is the most suitable algorithm for MANET scenarios.
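The stable throughput observed for Vegas follows from its congestion-window rule: it compares expected and actual throughput each RTT and adjusts the window by at most one segment. The sketch below uses the commonly cited alpha/beta defaults; NS2's exact implementation and the paper's settings may differ.

```python
# Hedged sketch of the TCP Vegas congestion-window update. Parameter
# values (alpha=2, beta=4 segments) are common defaults, assumed here.

def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    expected = cwnd / base_rtt             # rate if no queuing (segments/s)
    actual = cwnd / rtt                    # measured rate this RTT
    diff = (expected - actual) * base_rtt  # extra segments queued in network
    if diff < alpha:
        return cwnd + 1                    # under-using the path: grow
    if diff > beta:
        return cwnd - 1                    # queues building: back off
    return cwnd                            # in the sweet spot: hold steady

cwnd = 10
for rtt in [0.100, 0.102, 0.105, 0.130, 0.100]:   # base RTT is 100 ms
    cwnd = vegas_update(cwnd, base_rtt=0.100, rtt=rtt)
print("final cwnd:", cwnd)
```

Because the window changes only linearly and backs off before loss occurs, Vegas avoids the sawtooth of loss-based algorithms like Reno, which matches the near-constant throughput the paper reports.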
The goal of the project “An optic’s life” is, to predict the time when an optical transceiver will reach its real end-of-life-time based on the actual setup in the datacenter / colocation.
This document provides an overview of using the OPNET network simulation software. It discusses that OPNET is required reading for the TCM-250 course and can only be accessed in the school's lab. The document then covers basic probability concepts and terminology needed to understand network simulations, such as probability distribution functions and how they are used to model things like message sizes and interarrival times. It also summarizes some of OPNET's capabilities for modeling different network types and technologies.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IRJET- Study of Various Network SimulatorsIRJET Journal
This document provides an overview of various network simulators. It discusses the concepts and uses of network simulation. Several popular network simulators are described, including NS2, NS3, OPNET, OMNeT++, NETSIM and QualNet. For each simulator, the key features, programming languages, advantages and limitations are summarized. The document concludes that network simulators allow testing of networks and protocols in a cost-effective manner compared to physical test beds.
The document provides an overview of NS, an event-driven network simulator developed at UC Berkeley. NS simulates networks and protocols using a combination of C++ and OTcl. It implements common network components like routers, queues, and protocols. The simulator is controlled via an OTcl interface. A user writes an OTcl script to set up a network topology by connecting components, schedule events to transmit packets, and analyze results. NS separates data and control paths for efficiency, exposing C++ components through OTcl for configuration. This allows flexible and extensible simulation of local and wide area networks.
Network simulation involves using software to model the performance of a computer network by analyzing relationships between network components like links, switches, routers and nodes. A network simulator specifically predicts network performance by creating a virtual model of the network that can be manipulated to evaluate how the network would perform under different conditions. A network emulator takes the additional step of allowing real applications to run over the virtual network to assess performance and optimize decision making. Common open-source network simulators include NS-2, NS-3, Netkit and Marionnet, while commercial options include OPNET and QualNet. NS-3 is currently one of the best network simulators as it provides an open-source platform for modeling wired, wireless and mobile networks along with extensive
This project aims to analyze and emulate anomaly detection techniques for low-rate TCP denial of service attacks using the DETERLab testbed. The researchers plan to design an extensive anomaly checkpoint detection methodology. They propose a modified likelihood ratio algorithm to detect changes in network traffic statistics. The algorithm will be tested on legitimate and attack traffic in DETERLab while analyzing detection statistics and congestion windows. Results will help evaluate the ability to rapidly detect attacks while limiting false alarms.
Application-Aware Acceleration for Wireless Data Networks: Design Elements an...Zhenyun Zhuang
This document discusses an approach called Application-Aware Acceleration (A3) to improve application performance over wireless networks. It finds that while transport layer protocols improve performance for FTP, they provide little benefit for other applications like CIFS, SMTP, and HTTP due to the applications' behaviors. A3 addresses this by using principles like transaction prediction, prioritized fetching, and redundant transmissions to offset applications' typical problems when used over wireless networks. The document presents the motivation and design of A3, and evaluates its effectiveness through emulations and a proof-of-concept prototype using NetFilter.
This document is a final report submitted by Ambreen Zafar for a course on advanced computer networks. It summarizes her simulation of routing misbehavior in mobile ad hoc networks (MANETs) using the NS-2 network simulator. The simulation categorized misbehaving nodes and used watchdog and path rater techniques to identify them and help routing protocols avoid these nodes. The simulation found that these techniques increased throughput by 17-27% in the presence of 40% misbehaving nodes, while increasing overhead transmissions from 9-12% up to 17-24%.
This document contains instructions for conducting network simulation experiments using the NCTUns simulator. It discusses setting up NCTUns, drawing network topologies, editing node properties, running simulations, and performing post-analysis. Experiment 1 involves simulating a 3-node point-to-point network with duplex links, varying the bandwidth, and measuring the number of dropped packets. The steps provided outline how to draw the topology in NCTUns and configure the nodes before running the simulation.
Network simulators are software programs that model and predict the behavior of networks without requiring an actual physical network. They allow testing of new network designs and protocols. NS2 is a popular open source network simulator that is discrete event-driven, object-oriented, and uses C++ for the backend simulation and OTCL for the frontend configuration. It allows users to easily simulate network topologies, specify nodes and links, and test different network protocols. The output is visualized through a nam animation window and performance data can be analyzed using graphs generated by the XGraph utility. NS2 has helped researchers simulate and study networks simply and overcome challenges in the field.
IRJET- Performance Improvement of Wireless Network using Modern Simulation ToolsIRJET Journal
This document summarizes a research study that used the ns-3 network simulator to analyze the performance of two routing protocols - Optimized Link State Routing (OLSR) and Adhoc On-demand Distance Vector (AODV) - in a wireless ad hoc network under different conditions. The study varied parameters like packet size, number of nodes, and hello interval (the frequency at which routing information is broadcast) and measured metrics like throughput, delay, jitter, packet delivery ratio, packet loss, and congestion window. The results showed how the performance of the two protocols was impacted by changes to these parameters. The goal was to better understand congestion control and avoidance in wireless ad hoc networks through simulation.
VEGAS: Better Performance than other TCP Congestion Control Algorithms on MANETsCSCJournals
The document analyzes the performance of six TCP congestion control algorithms (BIC, Cubic, Compound, Vegas, Reno, and Westwood) on mobile ad hoc networks (MANETs) using network simulator 2 (NS2). Simulation results show that the Vegas algorithm provided better and more stable throughput than the other algorithms over the entire simulation time, both with and without node mobility. While BIC achieved the highest throughput after 75 seconds, Vegas was the only algorithm that maintained almost constant throughput from the start to end of the 200 second simulations. Therefore, the document concludes that Vegas is the most suitable algorithm for MANET scenarios.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
1. Page 1 of 15
This document aims to survey three of the currently available and widely used
tools in the field of simulation and modelling of communication systems. The surveys
have been written by different team members, hence the difference in style between each
tool. Unfortunately, the amount of information available is not the same for all three
tools, which results in an uneven flow through the document: the questions answered for
one tool will most likely not be the same as those answered for the others.
The survey is divided into three parts as mentioned before: first we examine
the Network Simulator (NS) in its current version two, then we look at OPNET, a
very successful commercial tool unlike the other two simulators, and finally OMNeT++.
Network Simulator Version 2:
NS is a publicly available tool for network simulation, built by researchers at
LBL, Xerox PARC, UCB, and USC/ISI, among many other contributors, as a variant of the
REAL network simulator, which is “a network simulator
originally intended for studying the dynamic behavior of flow and congestion control
schemes in packet-switched data networks” [1:Page 1].
NS is intended for simulating and researching computer networks using a discrete event
simulation technique. NS is currently being developed under D.A.R.P.A. (Defense
Advanced Research Projects Agency) funding through the SAMAN project, which aims at
analyzing how network protocols and operations behave under failures and extreme
conditions, so that they can be made more resilient.
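The discrete event technique NS uses boils down to a time-ordered event queue and a loop that pops the earliest pending event and executes it, advancing the simulation clock as it goes. The following is a minimal sketch of that idea in Python; the names are our own illustration, not NS internals:

```python
import heapq

class Scheduler:
    """Minimal discrete-event scheduler: events fire in timestamp order."""
    def __init__(self):
        self._queue = []
        self._seq = 0          # tie-breaker for events scheduled at the same time
        self.now = 0.0         # current simulation time

    def at(self, time, action):
        """Schedule a zero-argument callable to run at the given time."""
        heapq.heappush(self._queue, (time, self._seq, action))
        self._seq += 1

    def run(self):
        """Pop and execute events in time order until the queue is empty."""
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

log = []
sim = Scheduler()
sim.at(1.0, lambda: log.append(("send", sim.now)))
sim.at(0.5, lambda: log.append(("start", sim.now)))
sim.at(2.0, lambda: log.append(("recv", sim.now)))
sim.run()
# Events fire in time order regardless of the order they were scheduled in.
```

A real simulator adds packet objects, per-link delays, and handlers that schedule further events, but the queue-and-clock core is the same.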
The following survey will attempt to cover the basics of NS, without going into
great detail, by answering the following questions in order: What is the input topology
of the simulator (to be analyzed)? How is the output of the simulator validated? What
protocols does the simulator support? Does the public contribute to this research, and how
is this contribution performed? Does the simulator have any limitations? Does the simulator
support a parallel distributed environment? And finally, what is the future of the simulator?
Topology [1]
Topology defines an object’s location with respect to some reference point. In this
report we are interested in the geography of cyberspace, whether of individual ISPs or
of complete images of the current cyberspace.
NS accepts converted topologies from the following generators:
Inet Topology generator
o an Autonomous System (AS) level Internet topology generator
GT-ITM (Georgia Tech Internetwork Topology Models) topology generator
o A tool that generates graphs modeling the topological structure of
internetworks
Tiers topology generator
o A random topology generator by Matthew Doar
BRITE (Boston University Representative Internet Topology gEnerator)
o A tool by Boston University; the project aims at creating a universal
topology generator [4].
By hand
o Used for simple and small topologies, mostly aimed at learning the basics
of NS, rather than studying specific systems.
Conversion of the previously mentioned topologies is done through separate scripts that
are not incorporated in the simulator.
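A topology built by hand in an NS OTcl script is essentially a set of nodes joined by duplex links, each annotated with a bandwidth and a delay. As a language-neutral sketch of that structure (the node names and helper function below are our own illustration, not NS code), the same information can be captured as:

```python
# A hand-built topology mirrors what an NS script declares: nodes, plus
# bidirectional links annotated with bandwidth and propagation delay.
links = {}  # (node_a, node_b) -> {"bw_mbps": ..., "delay_ms": ...}

def duplex_link(a, b, bw_mbps, delay_ms):
    """Record a bidirectional link between two named nodes."""
    links[(a, b)] = links[(b, a)] = {"bw_mbps": bw_mbps, "delay_ms": delay_ms}

# Small example: two sources feeding a router, which feeds a sink over a
# slower bottleneck link (a typical teaching topology).
duplex_link("s1", "r1", bw_mbps=10, delay_ms=2)
duplex_link("s2", "r1", bw_mbps=10, delay_ms=2)
duplex_link("r1", "sink", bw_mbps=1.5, delay_ms=20)
```

In an actual NS script the same declarations are made through OTcl commands on the simulator object; the point here is only that a hand-written topology stays this small and explicit, which is why the approach suits learning exercises rather than large studies.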
Validation
“Ns is not a polished and finished product, but the result of an on-going effort of
research and development. In particular, bugs in the software are still being discovered
and corrected. Users of NS are responsible for verifying for themselves that their
simulations are not invalidated by bugs” [1].
The validation suite mainly tests TCP congestion control algorithms; it is also worth
mentioning that the current version of the simple validation tools is backward compatible
with ns-1.
Other validation methods include:
The SACK TCP tests
Comparisons of Tahoe, Reno, and SACK TCP
NS Simulator Tests for Random Early Detection (RED) Gateways
Since NS is a public project, it relies on an extensive bug-report forum for
verification and modification purposes. It also supports an extensive database of
demos to illustrate the correct usage of each supported protocol.
Supported protocols
NS supports almost all variants of TCP, several forms of multicast, wired
networking, several ad hoc routing protocols and propagation models (but not cellular
phones), data diffusion, satellite links, and more. Following is a list presented as it is
found on the NS web site. The list features the layer, the protocol, and, where one
exists, the test library supporting the protocol for verification purposes.
o Application-level:
o HTTP, web caching and invalidation, TcpApp (test-suite-webcache.tcl)
o telnet and ftp sources (test-suite-simple.tcl)
o Constant-Bit-Rate (CBR) sources (test-suite-cbq.tcl)
o On/Off sources (test-suite-intserv.tcl)
o Transport protocols (UDP, TCP, RTP, SRM):
o basic TCP behavior (test-suite-simple.tcl, test-suite-v1{,a}.tcl)
o Tahoe, Reno, New-Reno, and SACK TCP under different losses (test-suite-tcpVariants.tcl)
o FACK TCP (limited validation in test-suite-tcpVariants.tcl)
o TCP vegas (test-suite-vegas-v1.tcl)
o New-Reno TCP (test-suite-newreno.tcl)
o SACK TCP (test-suite-sack{,-v1,v1a})
o full TCP (test-suite-full.tcl), partial validation only.
o TCP initial window behavior (test-suite-tcp.tcl)
o rate-based pacing TCP (test-suite-rbp.tcl)
o RFC-2001 (Reno) TCP behavior (test-suite-rfc2001.tcl)
o RTP (in test-suite-friendly.tcl, not yet added to "validate")
o SRM (in test-suite-srm.tcl)
o Routing:
o algorithmic routing (test-suite-algo-routing)
o hierarchical routing (test-suite-hier-routing.tcl)
o lan routing and broadcast (test-suite-lan.tcl)
o manual routing (test-suite-manual-routing.tcl)
o centralized multicast and DM multicast, but not detailed DM or multicast over
LAN (test-suite-mcast.tcl)
o routing dynamics (test-suite-routed.tcl)
o detailed simulation using virtual classifier (test-suite-vc.tcl)
o mixed-mode session-levels simulation (test-suite-mixmode.tcl)
o session-level simulation (test-suite-session.tcl)
o Router Mechanisms (scheduling, queue management, admissions control,
etc.):
o FQ (Fair Queueing), SFQ (Stochastic Fair Queueing), DRR (Deficit Round
Robin), and FIFO (with drop-tail and RED queue management) (test-suite-schedule.tcl)
o CBQ (both in v1 and v2 mode) (test-suite-cbq{,-v1,-v1a})
o RED queue management (test-suite-red{,-v1,-v1a})
o ECN behavior (and TCP interactions) (test-suite-ecn.tcl)
o admission control algorithms: MS, HB, ACTP, ACTO, parameter-based
(in test-suite-intserv.tcl)
o Link-layer mechanisms:
o LANs, with CSMA/CD MAC protocols (in test-suite-lan.tcl)
o snoop
o Other:
o Error Modules (e.g., in test-suite-ecn.tcl, test-suite-tcp-init-win.tcl,
test-suite-session.tcl, and test-suite-srm.tcl)
o Invalidated protocols:
o Fack and Asym TCP
o RTCP
o RTP
o LANs with CSMA/CA MAC protocols (tcl/ex/mac-test.tcl), with
MultihopMac (mac-multihop.cc), and with 802.11 (mac-802_11.cc)
o RLM (Receiver Layered Multicast) (tcl/ex/test-rlm.tcl)
o token bucket filters (tcl/ex/test-tbf.tcl)
o trace-generated sources (tcl/ex/tg.tcl)
o delay-adaptive receivers (tcl/ex/test-rcvr.tcl)
o delay modules (delaymodel.cc)
o IVS (ivs.cc)
o emulation mode
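Many of the test suites listed above exercise TCP variants. The core Reno behavior they validate, slow start followed by additive increase and a multiplicative decrease on loss, can be sketched as follows. This is a deliberately simplified model (the window is counted in packets, as NS counts it, and fast recovery is elided); it is our own illustration, not NS's implementation:

```python
def reno_cwnd_trace(events, ssthresh=8):
    """Trace the congestion window (counted in packets) for Reno-like
    behavior. Each event is one RTT round: "ack" grows the window
    (doubling in slow start, +1 in congestion avoidance); "loss" halves
    ssthresh and resumes from it (fast recovery detail is omitted)."""
    cwnd = 1
    trace = [cwnd]
    for ev in events:
        if ev == "loss":
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2      # slow start: exponential growth per round
        else:
            cwnd += 1      # congestion avoidance: additive increase
        trace.append(cwnd)
    return trace

# Four loss-free rounds then one loss: the window grows 1, 2, 4, 8, 9,
# then the loss cuts it back to 4.
trace = reno_cwnd_trace(["ack", "ack", "ack", "ack", "loss"])
```

The NS test suites compare traces like this one, produced from packet-level simulation, against known-good reference behavior for each TCP variant.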
Contributions
The network simulator has been around for quite a while and has established
itself as an industry standard on the same level as other software (OPNET,
OMNeT++, etc.).
Contributions (in the form of code and add-ons to the simulator) are made to the project
by various researchers to serve their own research needs, and in effect that of the
development of the NS project. Most contributions, however, are not tested or validated
by the VINT group, and are left to their respective developers to maintain and support.
All contributions made to the project can be found at:
http://www.isi.edu/nsnam/ns/ns-contributed.html. However, since we are not actually using
NS, but rather surveying its abilities and functionality, a study of the contributed code
is not included in this survey, as it would be out of scope.
Limitations
Since a model is a simplified representation of a real system created for
experimentation, boundaries, rules, and even deficiencies arise. The most
troublesome deficiency faced by the developers is the enormous processing power
and memory capacity required as the network size increases; this limitation is
present in almost all network simulators available today. Beyond the simulation-size
limitation, the documented technical limitations are:
o TCP one-way
o the lack of a dynamic window advertisement
o segments and ACK calculations are in packets
o No SYN/FIN connection establishment/teardown.
TCP one-way with ECN (Explicit Congestion Notification)
o the sender does not check whether the receiver is ECN compliant
o TCP two-way (full TCP)
o no dynamic window advertisement,
o no 2MSL-wait or persist states
o no urgent data or RESET segments
Since this survey was done without actually using the tool, relying instead on
its documentation and supporting materials, it is likely that actual use of
NS would shed light on further limitations related to the design of the software itself.
Another limitation is the lack of a GUI built into the package; several open-source scripts
attempt to graft a GUI onto the tool, but this only results in decreased
functionality, as the tool was not designed with such an option in mind. It is, however,
worth mentioning the presence of external tools for viewing networks and traces (NAM:
the Network Animator).
Parallel distributed NS
As mentioned earlier when discussing the limitations faced by NS, the simulation of large
networks using NS has proved very hard, almost impossible. The reasons, as
stated by the simulator’s developers, are extensive CPU usage and large memory
requirements.
These limitations led to the development of a parallel distributed version of the
network simulator. The research is led by Dr. George Riley and Alfred Park of
the Georgia Institute of Technology, College of Computing. The objective of
PDNS (parallel distributed NS) is to provide a means of simulating a network topology
on a distributed system of about 8–16 workstations.
The workstations are connected through a Myrinet network (a packet-communication
and switching technology used to interconnect clusters of computers) or
a standard Ethernet network using the TCP/IP protocol stack.
Approach
Keeping and reusing the current NS tools was important in order to benefit from future
developments in the NS project; therefore a federated simulation approach was used,
in which separate installations of the NS software run on different
workstations, each simulating a portion or subnetwork of the system under
consideration, synchronized by a conservative (blocking-based) approach. The NS
federated approach is assumed not to require a state-saving mechanism added to the
currently available NS code. The software is developed with the PADS (Parallel and
Distributed Simulation) research group at the Georgia Institute of Technology to provide
support for the following parallel and distributed modes of operation:
o global virtual time management
o group data communications
o message buffer management.
The communication interconnects being used are:
o shared memory
o Myrinet networks
o TCP/IP networks
Although modifying the NS tool was highly undesirable, it was necessary to make
changes and later standardize them for future development of the NS tool.
The changes are grouped in the following categories:
o Modifications to the NS event-processing infrastructure
o Extensions to the NS TCL script syntax for describing simulations
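The conservative (blocking-based) synchronization used between federates means that each NS instance may only process events whose timestamps are provably safe: no earlier than anything a neighboring federate could still send, given that neighbor's clock plus the cross-federate link latency (the lookahead). The following is a heavily simplified sketch of that idea; the class and field names are our own illustration, not PDNS code:

```python
import heapq

class Federate:
    """One simulator instance in a federated run (simplified). It may only
    pop events with timestamp <= its safe time: the minimum over neighbors
    of (neighbor's clock + that neighbor's link lookahead)."""
    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead   # minimum cross-federate link delay
        self.clock = 0.0
        self.queue = []              # local future event list
        self.neighbors = []

    def schedule(self, t, label):
        heapq.heappush(self.queue, (t, label))

    def safe_time(self):
        if not self.neighbors:
            return float("inf")
        return min(n.clock + n.lookahead for n in self.neighbors)

    def step(self, processed):
        """Process at most one provably safe event; otherwise block."""
        if self.queue and self.queue[0][0] <= self.safe_time():
            self.clock, label = heapq.heappop(self.queue)
            processed.append((self.name, self.clock, label))
            return True
        return False

a, b = Federate("A", lookahead=5.0), Federate("B", lookahead=5.0)
a.neighbors, b.neighbors = [b], [a]
a.schedule(1.0, "pkt-out"); a.schedule(9.0, "timer")
b.schedule(4.0, "pkt-in")
processed = []
# Alternate between federates until neither can safely advance.
while a.step(processed) or b.step(processed):
    pass
```

A production implementation must also break the deadlocks that arise when every federate is blocked waiting on the others (e.g., via null messages or a global time-window computation), which is part of what the global virtual time management listed above provides.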
Status and future development of NS
PDNS has been successfully tested on a total of 128 processors simulating a
network topology of 500,000 nodes. A set of different projects is currently under
development; one is a scenario generator. The function of such a tool would be to
translate the output of a topology generator into NS format, and it will come with a
scenario library containing different scenarios to thoroughly test the network at hand.
Last but not least, a facility to inject live traffic from NS and to emulate a network
(i.e., translate live traffic into NS) is available; it is usually considered a separate
project and therefore was not covered in this survey of NS.
In conclusion, NS has been in development for many years, most recently including
the parallel distributed work at the Georgia Institute of Technology. It has been used
as a reliable simulation and validation tool by many agencies and organizations, proving
its high standards. Despite the lack of a built-in GUI, the fact that the project is open
to the public has made it possible for other researchers to contribute to it, adding more
functionality to the simulator. The future of NS clearly lies on the path of parallel and
distributed simulation, as the parallel formalism aids in solving large-scale networks,
which are more and more becoming the normal type of network in an ever-developing and
changing industry.
OPNET:
A commercial tool by MIL3, Inc., OPNET (Optimized Network Engineering Tools)
is an engineering system capable of simulating large communication networks with
detailed protocol modeling and performance analysis. Its features include graphical
specification of models, a dynamic, event-scheduled Simulation Kernel, integrated data
analysis tools, and hierarchical, object-based modeling. “It is a network simulation tool
that allows the definition of a network topology, the nodes, and the links that go towards
making up a network. The processes that may happen in a particular node can be user
defined, as can the properties of the transmission links. A simulation can then be
executed, and the results analyzed for any network element in the simulated network” [4].
Key features
OPNET provides powerful tools that assist the user in all phases of a modeling
and simulation project, i.e., the building of models, the execution of a simulation, and the
analysis of the output data. OPNET employs a
hierarchical structure to modeling, that is, each level of the hierarchy describes different
aspects of the complete model being simulated. It has a detailed library of models that
provide support for existing protocols and allow researchers and developers to either
modify these existing models or develop new models of their own. Furthermore, OPNET
models can be compiled into executable code. An executable discrete-event simulation
can be debugged or simply executed, resulting in output data. OPNET has three main
types of tools - the Model Development tool, the Simulation Execution tool and the
Results Analysis tool. These three types of tools are used together to model, simulate and
analyze a network.
The Model Development Tool
The model development tools consist of the Network Editor, the Node Editor, the
Process Editor and the Parameter Editor. The Network Editor is used to design the
network models, with different nodes connected by point-to-point links, radio links, etc.,
and may consist of zero or more subnets. The Node Editor is used to place the models of
the nodes used into the network. A node in OPNET consists of modules, such as a packet
generator, connected to other modules such as processors and packet sinks, by packet
streams and statistic lines. The Process Editor is used to define the processes that run
inside these modules. The processes themselves are designed using State Transition
Diagrams along with some textual specifications using Proto-C, an OPNET variant on the
C language. The Parameter Editor allows the definition of parameters used in the input
for the node modules and process models, such as the packet format, probability density
functions, etc. [2]
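The state-transition style of process specification described above can be pictured with a short Python sketch (not actual Proto-C; the state and event names are invented for illustration). Each (state, event) pair selects a transition, as an edge in a state transition diagram would:

```python
# Conceptual sketch of a state-transition process model, loosely in the
# spirit of OPNET's Proto-C state diagrams (state/event names invented).

class ProcessModel:
    def __init__(self):
        self.state = "IDLE"
        self.processed = 0

    def handle_event(self, event):
        # Each (state, event) pair selects one transition.
        if self.state == "IDLE" and event == "PACKET_ARRIVAL":
            self.state = "PROCESSING"
        elif self.state == "PROCESSING" and event == "PROCESSING_DONE":
            self.processed += 1
            self.state = "IDLE"
        return self.state

fsm = ProcessModel()
fsm.handle_event("PACKET_ARRIVAL")           # IDLE -> PROCESSING
state = fsm.handle_event("PROCESSING_DONE")  # PROCESSING -> IDLE
```

In Proto-C the per-state logic would be arbitrary C code attached to the diagram, rather than the if/elif chain shown here.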
The Simulation Execution Tool
The simulation execution tools consist of the Probe Editor and the Simulation
Tool. The Probe Editor is used to place probes at various points of interest in the network
model. These probes can be used to monitor any of the statistics computed during
simulation. The Simulation Tool allows the user to specify a sequence of simulations,
along with any input and output options, and many different runtime options.
The Results Analysis Tool
The results analysis tools consist of the Analysis Tool and the Filter Editor. The
Analysis Tool will display the results from a simulation or series of simulations as graphs.
The Filter Editor is used to define filters to mathematically process, reduce, or combine
statistical data [2].
Model design Methodology
OPNET defines a model using a hierarchical structure - at the top there is the
network level, which is constructed from the node level, which in turn is made from the
process level. The network level, node level and process level designs are implemented
using the Network Editor, Node Editor and Process Editor respectively. The Network
level contains one Top Level Network. This Top Level Network may consist of zero or
more subnets, and these subnets may themselves contain any number of further subnets.
In this way OPNET can easily represent the hierarchical structure of networks such as
routing networks, which may consist of a Tier-1 network, inside which there is a Tier-2
network of nodes, inside which Tier-3 nodes are connected, and so on.
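The subnet nesting described above can be pictured as a small recursive structure (a hypothetical Python sketch, not OPNET code; the tier names and node names are illustrative):

```python
# Hypothetical sketch of OPNET-style hierarchical subnets: a subnet may
# contain nodes and further subnets, mirroring Tier-1/2/3 nesting.

class Subnet:
    def __init__(self, name):
        self.name = name
        self.nodes = []
        self.subnets = []

    def add_subnet(self, sub):
        self.subnets.append(sub)
        return sub

    def total_nodes(self):
        # Count nodes here plus, recursively, in all nested subnets.
        return len(self.nodes) + sum(s.total_nodes() for s in self.subnets)

top = Subnet("Top Level Network")
tier1 = top.add_subnet(Subnet("Tier-1"))
tier2 = tier1.add_subnet(Subnet("Tier-2"))
tier2.nodes.extend(["router_a", "router_b"])
tier1.nodes.append("core_router")
```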
At the Node level, the processes that happen inside a node are described; for
example, a packet is generated and received by a process, which performs error checking
on the packet and then forwards it to other processes, which may do their own processing
on it or discard it. The
Process level allows the designer to create the processes required for use in the process
models. The processes are defined using state transition diagrams along with some
additional textual specifications using Proto-C which is a version of C specialized for
protocols and distributed algorithms [3]. Both the state diagrams and the Proto-C code
together form what OPNET terms a 'Finite State Machine'. A process model can also
spawn other, child process models [3].
The Network Editor
The Network Editor is used to specify the physical topology of a communications
network, which defines the position and interconnection of communicating entities, i.e.,
nodes and links [2]. The specific capabilities of each node are defined in the underlying
node and process models. Each model consists of a set of parameters that can be set to
customize the node's behavior and the nodes can be fixed, mobile or satellite. Data can be
transferred between the nodes using Simplex or duplex links that connect them. There
can also be a bus link that provides a broadcast medium for an arbitrary number of
attached devices [3]. Links can also be customized to simulate the actual communication
channels. The network models can be very complex due to their size. This complexity is
reduced by the subnetwork abstraction described earlier.
The Node Editor
Communication devices created and interconnected at the network level need to
be specified in the node domain using the Node Editor. Node models are expressed as
interconnected modules. These modules can be grouped into two distinct categories [2].
The first set is modules that have predefined characteristics and built-in parameters, for
example, packet generators and links, etc. The second group consists of programmable
modules, which rely on process model specifications, for example, processors and queues.
All nodes are defined via block structured data flow diagrams. Each programmable block
in a Node Model has its functionality defined by a Process Model. Packets are transferred
between modules using packet streams. Statistic wires can be used to convey numeric
signals [3].
The Process Editor
Process models, created using the Process Editor, are used to describe the logic
flow and behavior of processor and queue modules. Communication between processes is
supported by interrupts, which are a part of the library kernels available for Proto-C. The
OPNET Process Editor uses a powerful state-transition diagram approach to support
specification of any type of protocol, resource, application, algorithm, or queuing policy.
States and transitions graphically define the progression of a process in response to
events. Within each state, general logic can be specified using a library of predefined
functions and even the full flexibility of the C language. Processes themselves may even
create child processes to perform sub-tasks [3].
Running a simulation
After defining all the models of the network system, we can run a simulation in
order to study system performance and behavior using the simulation execution tools
described earlier. OPNET simulations are obtained by executing a simulation program,
which is an executable file in the host computer's file system. In fact, OPNET provides a
number of options for running simulations, including internal and external execution, and
the ability to configure attributes that affect the simulation's behavior [3]. OPNET
simulations can be run independently from the OPNET graphical tool by using the
op_runsim utility program. However, you can also run simulations from the Simulation
Tool within OPNET, which offers the convenience of a graphical interface. The
Simulation Tool provides the following services [3]:
o Specification of simulation sequences consisting of an ordered list of
simulations and associated attribute values
o Execution of simulation sequences
o Storage of simulation sequences in files for later use.
The Probe Editor, which is part of the simulation execution tools mentioned
earlier, is used to specify which data to collect. Most OPNET models contain objects that
are capable of generating vast amounts of output data during simulations. The output can
be statistical or animated, using pre-defined or user-customized animations. The selection
among the various types of output formats can be done using the Probe Editor. A probe is
defined for each source of data that the user wishes to enable. Probes are grouped into a
probe list, allowing them to be collectively applied to a model when a simulation is
executed [3]. Several different probe types are provided by OPNET in order to capture
different types of output data [2]. The statistic probe can be applied to predefined,
standard statistics, and sometimes to application-specific monitoring characteristics
such as bit error rates or throughput. The Automatic Animation Probe is used to generate
animation sequences for a simulation. The Custom Animation Probe supports the creation
of custom animations; the actual specification of the animation's characteristics is
defined within the user's code. The Coupled Statistic Probe generates output data as the
statistic probe does but, in addition, requires a primary module and a coupled module to
be defined. Statistical data is generated at the primary module, but only when changes to
the statistic are due to interactions with the coupled module [2].
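Conceptually, a probe list works like a set of subscribers to named statistics. The following Python analogy (invented for illustration; this is not OPNET's actual API) sketches the idea of probes being collectively applied to a model run:

```python
# Invented analogy of OPNET statistic probes: each probe subscribes to a
# named statistic, and a probe list applies all probes to one model run.

class StatisticProbe:
    def __init__(self, statistic):
        self.statistic = statistic
        self.samples = []

    def record(self, name, value):
        # A probe keeps only the statistic it was defined for.
        if name == self.statistic:
            self.samples.append(value)

probe_list = [StatisticProbe("throughput"), StatisticProbe("bit_error_rate")]

# During a simulated run, every statistic update is offered to all probes.
for name, value in [("throughput", 120.0), ("bit_error_rate", 1e-6),
                    ("throughput", 95.5)]:
    for probe in probe_list:
        probe.record(name, value)
```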
Data observation and collection
After the simulations have been executed, the Results Analysis tools, which consist
of the Analysis Tool and the Filter Editor, are used to observe and collect the data. Simulations
can be used to generate a number of different forms of output, as described above. These
forms include several types of numerical data, animation, and detailed traces provided by
the OPNET debugger [3]. OPNET simulations support open interfaces to the C language,
and the host computer's operating system, therefore simulation developers may generate
proprietary forms of output ranging from messages printed in the console window, to
generation of ASCII or binary files, and even live interactions with other programs [3].
However, the most commonly used forms of output data are those that are directly
supported by Simulation Kernel interfaces for collection, and by existing tools for
viewing and analysis. These include animation data and numerical statistics [3].
Animation data is generated either by using automatic animation probes or by developing
custom animations with the kernel procedures of the Simulation Kernel's Anim package,
which can also be used to view the animations [2]. Similarly, statistic data is generated by setting
statistic probes, and/or by the kernel procedures of the Kernel's Stat package. OPNET's
Analysis Tool can then be used to view and manipulate the statistical data [2].
The service provided by the Analysis Tool is to display information in the form of
graphs. Graphs are presented within rectangular areas called analysis panels. A number of
different operations can be used to create analysis panels, all of which serve either to
display a new set of data or to transform an existing one [2]. An analysis
panel consists of a plotting area, with two numbered axes, generally referred to as the
abscissa axis (horizontal), and the ordinate axis (vertical) [2]. The user can also extract
data from simulation output files and display this data in various forms. The Analysis
Tool also supports several mechanisms for numerically processing the data and
generating new data sets that can also be plotted such as computing probability density
functions and cumulative distribution functions, as well as generating histograms. The
data presented in the Analysis Tool can also be processed by filters, which are constructed
from a pre-defined set of filter elements in the Filter Editor [3].
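The kind of numerical processing described here, such as building a histogram or an empirical cumulative distribution from probed samples, can be sketched in plain Python (illustrative only; OPNET performs this internally on its own output files):

```python
# Minimal sketch of Analysis Tool style post-processing: a histogram and an
# empirical CDF computed from a list of collected statistic values.

def histogram(samples, bin_edges):
    counts = [0] * (len(bin_edges) - 1)
    for x in samples:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= x < bin_edges[i + 1]:
                counts[i] += 1
                break
    return counts

def empirical_cdf(samples, x):
    # Fraction of samples less than or equal to x.
    return sum(1 for s in samples if s <= x) / len(samples)

delays = [0.1, 0.4, 0.4, 0.7, 0.9, 1.2]      # e.g. probed packet delays
counts = histogram(delays, [0.0, 0.5, 1.0, 1.5])
```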
Filter models are represented as block diagrams consisting of interconnected filter
elements. Filter elements may be either built-in numeric processing elements, or
references to other filter models. Thus, filter models are hierarchical, in that they may be
composed of other filter models [2].
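This hierarchical composition of filter models can be pictured as function composition (a Python analogy with invented element names, not the Filter Editor's actual block set): a filter model built from other filters is itself usable as a filter element.

```python
# Sketch of hierarchical filter models: a filter is either a built-in numeric
# element or a composition of other filters (element names invented).

def scale(factor):
    return lambda data: [x * factor for x in data]

def offset(constant):
    return lambda data: [x + constant for x in data]

def chain(*filters):
    # A filter model built from other filter elements or filter models.
    def composed(data):
        for f in filters:
            data = f(data)
        return data
    return composed

normalize = chain(offset(-1.0), scale(0.5))  # a compound filter model
pipeline = chain(normalize, scale(10.0))     # filter models nest hierarchically
result = pipeline([1.0, 3.0, 5.0])
```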
The Graphical User Interface
The GUI for OPNET first presents several options to the user, such as a new
project model, node model, process model etc. For example, the user can create a new
project model or edit an existing one. The GUI then brings up a world atlas, where the
user can view the existing network model, add new node models to the existing network, or
create a new network. There are several options to run existing project models, and
options for the different kinds of output as described above. By double-clicking on a node
model inside the network model, the user can access the process model inside that node.
Inside the node model, there are options for creating packet streams, processors, servers,
queues, sinks, etc. These are basically process models, which can be user-defined via
state machines. The properties of each of these models can be set, or the user can
pick models from an existing library. Inside the process models are state machines
consisting of Proto-C code, which can be edited in order to change their functionality.
Conclusion
OPNET is a powerful discrete-event simulation tool that is widely used in
industry because of its features and the large set of options embedded in it that assist in
the simulation and modeling of large networks. It includes libraries of simulated models
of real-life equipment used in communication networking, such as routers, switches, and
equipment used in wireless networks. These libraries are used to implement different
protocols with varying input, output, and behavior. OPNET has a broad portfolio for
modeling, design, simulation, and real-time assurance, with detailed insight into
infrastructure requirements, and is an ideal tool for modeling and simulation.
OMNeT++:
1. INTRODUCTION
OMNeT++ stands for Objective Modular Network Testbed in C++. It is a
discrete event simulation tool designed to simulate computer networks, multiprocessors
and other distributed systems. Its applications can be extended for modelling
other systems as well. It has become a popular network simulation tool in the scientific
community as well as in industry over the years. The principal author is András Varga,
with occasional contributions from a number of people. [5]
2. COMPONENTS OF OMNET++[5]:
simulation kernel library
compiler for the NED topology description language (nedc)
graphical network editor for NED files (GNED)
GUI for simulation execution, links into simulation executable (Tkenv)
command-line user interface for simulation execution (Cmdenv)
graphical output vector plotting tool (Plove)
utilities (random number seed generation tool, makefile creation tool, etc.)
documentation, sample simulations, contributed material, etc.
3. PLATFORMS OF OMNET++[5]:
OMNeT++ works well on multiple platforms. It was first developed on Linux.
OMNeT++ runs on most Unix systems and Windows platforms (works best on NT4.0,
W2K or XP).
The best platforms used are:
Solaris, Linux (or other Unix-like systems) with GNU tools
Win32 and Cygwin32 (Win32 port of gcc)
Win32 and Microsoft Visual C++
4. LICENSING FOR OMNET++:
OMNeT++ is free for any non-profit use. The author must be contacted if it is
used in a commercial project. The GNU General Public License can also be chosen for
OMNeT++. [5]
5. SIMULATION MODELING IN OMNET++[5]
The following are types of modeling that can be used:
communication protocols
computer networks and traffic modeling
multi-processor and distributed systems
administrative systems
... and any other system where the discrete event approach is suitable.
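At the heart of any discrete event simulator is a loop that removes events from a time-ordered queue and processes them. The following minimal Python sketch illustrates the idea (conceptual only; OMNeT++'s actual kernel is a C++ class library):

```python
# Minimal discrete-event simulation loop of the kind a simulation kernel
# implements: events are processed strictly in timestamp order.

import heapq

def run(events, until):
    """events: list of (time, description) tuples; returns processing order."""
    queue = list(events)
    heapq.heapify(queue)            # future event set, ordered by time
    log = []
    while queue and queue[0][0] <= until:
        time, desc = heapq.heappop(queue)
        log.append((time, desc))    # here a module's event handler would run
    return log

# Events may be scheduled out of order; the kernel processes them in order.
log = run([(2.0, "timer"), (0.5, "pkt_arrival"), (1.0, "pkt_sent")], until=1.5)
```

Note that the event at t = 2.0 is never processed because the run stops at the simulation time limit.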
5.1 Library Modules[5]
Object libraries can be made using simple modules. The best simple modules to be
used for library modules are the ones that implement:
Physical/Data-link protocols: Ethernet, Token Ring, FDDI, LAPB etc.
Higher layer protocols: IP, TCP, X.25 L2/L3, etc.
Network application types: E-mail, NFS, X, audio etc.
Basic elements: message generator, sink, concentrator/simple hub, queue etc.
Modules that implement routing algorithms in a multiprocessor or network
...
5.2 Network Modeling
A model network consists of “nodes” connected by “links”. The nodes represent
blocks, entities, modules, etc., while the links represent channels, connections, etc. The
structure of how fixed elements (i.e., nodes) in a network are interconnected is
called the topology. [5]
OMNeT++ uses the NED language, thus allowing for a more user-friendly and
accessible environment for creation and editing. NED files can be created with any text-
processing tool (perl, awk, etc.). The topology description is human-readable text, in the
same format used by the graphical editor. It also supports submodule testing. OMNeT++
also allows for the creation of a driver entity to build a network at run time by program. [5]
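Building a network at run time by program, as the driver-entity facility allows, can be pictured with a short Python sketch (hypothetical; in OMNeT++ this would be done through the NED language or the C++ API, and the ring topology here is just an example):

```python
# Hypothetical sketch of a "driver entity" assembling a topology at run time.

class Network:
    def __init__(self):
        self.nodes = set()
        self.links = []

    def add_link(self, a, b):
        # Registering a link implicitly registers its endpoint nodes.
        self.nodes.update([a, b])
        self.links.append((a, b))

def build_ring(n):
    # Programmatically connect n nodes into a ring topology.
    net = Network()
    for i in range(n):
        net.add_link(f"node{i}", f"node{(i + 1) % n}")
    return net

ring = build_ring(4)
```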
Organization of Network Simulation[5]:
OMNeT++ follows a hierarchical module structure, allowing for different levels of
organization.
Physical Layer:
1. Top-level network
2. Subnetwork (site)
3. LAN
4. node
Topology within a node:
1. OSI layers. The Data-Link, Network, Transport, Application layers are of greater
importance.
2. Applications/protocols within a layer.
5.3 Network Description (NED)
Modular description of networks is given in NED language. The network
description consists of a number of component descriptions such as channels, simple and
compound module types. These component descriptions can be used in various network
descriptions. Thus, it is possible for the user to customize his or her personal library of
network descriptions.
The files containing the network descriptions should end with a .ned suffix. The
NEDC compiler translates the network descriptions into C++ code, which is then
compiled by the C++ compiler and linked into the simulation executable. [5]
Components of a NED description[4]
A NED description can contain the following components, in arbitrary number or
order:
import statements
channel definitions
simple and compound module declarations
system module declarations
6. USER INTERFACES
The OMNeT++ user interfaces are used for simulation execution. OMNeT++'s
design allows the internals of the model to be seen by the user. It also allows the user to
initiate and terminate simulations, as well as to change variables inside simulation
models. These features are handy during the development and debugging phase of
modules in a project. The graphical interface is a user-friendly option in OMNeT++ that
allows access to the internal workings of the model. [5]
The user interface interacts with the simulation kernel through well-defined
interfaces. Without changing the simulation kernel, it is possible to implement several
types of user interfaces; likewise, without changing the model files, the simulation model
can run under different interfaces. The user can test and debug the simulation with a
powerful graphical user interface, and finally run it with a simple and fast user interface
that supports batch execution. [5]
The user interfaces take the form of interchangeable libraries. When linking the
simulation executable, the user can choose which interface libraries to use. [5]
Currently, two user interfaces are supported [5]:
Tkenv: Tk-based graphical, windowing user interface (X-Window, Win95,
WinNT etc..)
Cmdenv: command-line user interface for batch execution
Simulation is tested and debugged under Tkenv, while Cmdenv is used for actual
simulation experiments, since it supports batch execution.
6.1 Tkenv
Tkenv is a portable graphical windowing user interface. Tracing, debugging, and
simulation execution are supported by Tkenv. It can provide a detailed picture
of the state of the simulation at any point during the execution. This feature makes Tkenv
a good candidate in the development stage of a simulation or for presentations. A
snapshot of a Tkenv interface is shown in figure 1. [5]
Important features in Tkenv [5]:
separate window for each module's text output
scheduled messages can be watched in a window as simulation progresses
event-by-event execution
execution animation
labelled breakpoints
inspector windows to examine and alter objects and variables in the model
graphical display of simulation results during execution. Results can be displayed
as histograms or time-series diagrams.
simulation can be restarted
snapshots (detailed report about the model: objects, variables etc.)
Tkenv is recommended for testing and debugging, especially when used with gdb or
xxgdb. It provides a good environment for experimenting with the model during
execution and for verifying the correct operation of the simulation program, since
simulation results can be displayed while the simulation runs.
6.2 Cmdenv
Cmdenv is designed primarily for batch execution. It is a small, fast, and portable
command-line interface that compiles and runs on all platforms. Cmdenv simply
executes all simulation runs that are described in the configuration file.
Figure 1. Example of a Tkenv User Interface in Omnet++.
7. Expected Performance of Omnet++
One of the most important factors in any simulation tool is the programming
language. The common languages used are C/C++ based. OMNeT++'s performance is of
particular interest since it reduces the overhead costs associated with GUI, simulation-
library debugging, and tracing. The drawback found in OMNeT++ was that its
simulations were about 1.3 times slower than their C counterparts.
References:
[1] The Network Simulator – ns-2. Available: http://www.isi.edu/nsnam/ns/, last
updated: 19/06/2003.
[2] OPNET online manual
[3] Network Simulations with OPNET, Chang
[4] http://www.ee.ucl.ac.uk/dcs/commercial/opnet/opnet.html
[5] OMNET++ User Manual: http://whale.hit.bme.hu/omnetpp/