2018-11-06: Unfortunately, LinkedIn/SlideShare disabled the update functionality and, thus, I had to upload an updated version of this introduction to OMNeT++ as a new presentation. It is available here: https://www.slideshare.net/christian.timmerer/an-introduction-to-omnet-54
Scalable Service-Oriented Middleware over IP
Dai Yang
ABSTRACT
Due to the increased amount of communication in cars, reliable and easy-to-use middleware for automotive applications has become a popular research field. In this paper, we review a recent approach: the Scalable Service-Oriented Middleware over IP (SOME/IP). We present current technologies and how SOME/IP differs from them. We point out how SOME/IP fits into the ISO/OSI layer model and discuss its service orientation. We also present the advantages and disadvantages of SOME/IP. Finally, we analyze its timing behavior and whether it is suitable for automotive software.
High Performance Storage Devices in the Linux Kernel
Kernel TLV
Agenda:
In this talk we will present the Linux kernel storage layers and dive into blk-mq, a scalable, parallel block layer for high performance block devices, and how it is used to unleash the performance of NVMe, flash and beyond.
Speaker:
Evgeny Budilovsky, Kernel Developer at E8 Storage
https://www.linkedin.com/company/e8-storage
In this talk we discuss the mechanisms of utilizing the eBPF language to perform hardware accelerated network packet manipulation and filtering. P4 programs can be compiled into eBPF scripts for offload in the Linux kernel using the Traffic Classifier (TC) subsystem. We demonstrate how, using eBPF as an intermediate language, it has been possible to extend the TC to either Just In Time (JIT) compile eBPF code to x86 assembler for software offload or to IXP byte code for execution in a trusted hardware environment within the Netronome Agilio intelligent server adapter. We finish by encouraging the audience to experiment with their own eBPF applications within the TC hardware accelerated system. The TC kernel patches are available on the Linux Kernel Networking mailing list as a Request For Comment (RFC) contribution.
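The workflow described above can be sketched with standard iproute2 commands. This is an illustrative session only: `classifier.c` is a hypothetical eBPF classifier source file, `eth0` is a placeholder interface name, and hardware offload additionally requires an offload-capable driver such as Netronome's.

```shell
# Compile a (hypothetical) classifier.c to an eBPF object; requires clang
# built with the BPF target.
clang -O2 -target bpf -c classifier.c -o classifier.o

# Create the clsact qdisc, which exposes ingress/egress hooks on eth0.
tc qdisc add dev eth0 clsact

# Attach the eBPF object as a direct-action classifier; the kernel JIT
# compiles it to native code, or an offload-capable driver can translate
# it for execution on the NIC.
tc filter add dev eth0 ingress bpf da obj classifier.o sec classifier

# Inspect what is attached.
tc filter show dev eth0 ingress
```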
Dinan Gunawardena, Director, Software Engineering, Netronome
Dinan Gunawardena is a Software Director running the driver team at Netronome. Previously, Dinan founded a software startup and was a Senior Research Engineer in the Operating Systems and Networking Group at Microsoft Research for 12 years, shipping technology in several versions of Microsoft Windows and the Bing search engine. Dinan has received over 20 patents and is a Chartered Software Engineer. Dinan has a Master's in Computer Science from the University of Cambridge and an M.B.A. from WBS.
Jakub Kicinski, Software Engineering, Netronome
Jakub Kicinski is a Software Engineer specializing in Linux kernel drivers for Netronome SmartNICs. Jakub previously worked as an intern at Intel Corporation. He is also a researcher with expertise in the Linux kernel and experience in application development on complex multi-CPU and FPGA platforms. He is interested in high-performance software that exploits hardware capabilities and is passionate about networking. Jakub has a Master's in Computer Science from Gdansk University of Technology.
When we want two applications, possibly running on different machines, to communicate, we need sockets. This presentation aims to provide undergraduate students with a working knowledge of basic socket programming. It covers the importance of sockets in networking and Unix programming. The presentation on this topic (Sockets) has been designed according to the Network Programming subject of the B.Tech 6th-semester syllabus of Punjab Technical University, Kapurthala, Punjab.
Overview of UDP protocol.
UDP (User Datagram Protocol) is a simple extension of the Internet Protocol's services. It provides a simple packet transport service without any quality-of-service functions.
Unlike TCP, UDP is connectionless and packet-based. Application PDUs (application packets) sent over a UDP socket are delivered to the receiving host's application as whole datagrams, with message boundaries preserved.
UDP is mostly used by applications with simple request-response communication patterns like DNS, DHCP, RADIUS, RIP or RPC.
Since UDP does not provide any error recovery, such as retransmission of lost packets, application protocols have to handle these situations themselves.
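The points above can be shown in a minimal sketch using Python's standard `socket` module. One process plays both roles here for brevity; the client-side timeout stands in for the application-level loss handling UDP itself does not provide.

```python
# Minimal UDP request-response example: the "server" echoes a datagram back.
# Because UDP performs no retransmission, the client sets a timeout as a
# stand-in for application-level error recovery.
import socket

# Server socket bound to an ephemeral port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()

# Client socket; the timeout is the application's loss-handling hook.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)

client.sendto(b"ping", server_addr)        # one datagram == one application PDU
data, client_addr = server.recvfrom(2048)  # arrives whole, or not at all
server.sendto(data.upper(), client_addr)   # echo a reply
reply, _ = client.recvfrom(2048)
print(reply)  # b'PING'

client.close()
server.close()
```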
This tutorial consists of three main parts. In the first part, we provide a detailed overview of the HTML5 standard and show how it can be used for adaptive streaming deployments. In particular, we focus on the HTML5 video, media extensions, and multi-bitrate encoding, encapsulation and encryption workflows, and survey well-established streaming solutions. Furthermore, we present experiences from the existing deployments and the relevant de jure and de facto standards (DASH, HLS, CMAF) in this space. In the second part, we focus on omnidirectional (360) media from creation to consumption. We survey means for the acquisition, projection, coding and packaging of omnidirectional media as well as delivery, decoding and rendering methods. Emerging standards and industry practices are covered as well. The last part presents some of the current research trends, open issues that need further exploration and investigation, and various efforts that are underway in the streaming industry.
Model Simulation, Graphical Animation, and Omniscient Debugging with EcoreToo...
Benoit Combemale
You have your shiny new modeling language up and running thanks to the Eclipse Modeling Technologies and you built a powerful graphical editor with Sirius to support it. But how can you see what is going on when a model is executed? Don't you need to debug your design in some way? Wouldn't you want to see your editors being animated directly within your modeling environment based on execution traces or simulator results?
In this talk, we will present Sirius Animator, an add-on to Sirius that provides you a tool- supported approach to complement a modeling language with an execution semantics and a graphical description of an animation layer. The execution semantics is defined thanks to ALE, an Action Language for EMF integrated into Ecore Tools to modularly implement the bodies of your EOperations, and the graphical description of the animation layer is defined thanks to Sirius. From both inputs, Sirius Animator automatically provides an advanced and extensible environment for model simulation, animation and debugging, on top of the graphical editor of Sirius and the debug UI of Eclipse. To illustrate the overall approach, we will demonstrate the ability to seamlessly extend Arduino Designer, in order to provide an advanced debugging environment that includes graphical animation, forward/backward step-by-step, breakpoint definition, etc.
LinuxKit, a toolkit for building custom minimal, immutable Linux distributions.
Secure defaults without compromising usability
Everything is replaceable and customisable
Immutable infrastructure applied to building Linux distributions
Completely stateless, but persistent storage can be attached
Easy tooling, with easy iteration
Built with containers, for running containers
Designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes
Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit
Designed to be managed by external tooling, such as Infrakit or similar tools
Includes a set of longer-term collaborative projects in various stages of development to innovate on kernel and userspace changes, particularly around security
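The properties listed above come together in a single YAML build file. The fragment below is an illustrative sketch of the `linuxkit.yml` format, not a buildable configuration: the image tags are placeholders (`<tag>`, `<hash>`), and real files pin exact, content-addressed image versions.

```yaml
# Illustrative linuxkit.yml sketch -- image tags are placeholders.
kernel:
  image: linuxkit/kernel:<tag>
  cmdline: "console=ttyS0"
init:                         # the immutable base: init, runc, containerd
  - linuxkit/init:<hash>
  - linuxkit/runc:<hash>
onboot:                       # one-shot containers, run in order at boot
  - name: dhcpcd
    image: linuxkit/dhcpcd:<hash>
services:                     # long-running system containers
  - name: getty
    image: linuxkit/getty:<hash>
    env:
      - INSECURE=true
```

A file like this is turned into bootable artifacts with `linuxkit build linuxkit.yml`, which is where the "easy tooling, easy iteration" point above comes from: changing the OS means editing YAML and rebuilding, not mutating a running system.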
HPC DAY 2017 | Prometheus - energy efficient supercomputing
HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Prometheus - energy efficient supercomputing
Marek Magrys | Manager of Mass Storage Departament, ACC Cyfronet AGH-UST
HPC DAY 2017 | Altair's PBS Pro: Your Gateway to HPC Computing
HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Altair's PBS Pro: Your Gateway to HPC Computing
Dr. Jochen Krebs | Director Enterprise Sales Central & Eastern Europe at Altair
«When systems are not just dozens of subsystems, but dozens of engineering teams, even our best and most experienced engineers routinely guess wrong about the root cause of poor end-to-end performance» is how they put it at Google.
Latency tracing approach helps Google and many other companies to control stability and performance as well as helps to find root causes of performance degradation even in huge and complex distributed systems.
I’ll explain what latency tracing is, how it helps you, and how you can implement it in your project. Finally, I will show a live demo using tools such as Dynatrace and Zipkin.
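The core idea behind latency tracing can be sketched in a few lines. This is our own toy tracer, not the Dynatrace or Zipkin API: each traced operation records a span with a name, a parent, and a duration, and the slowest child span points at the likely root cause of a slow request.

```python
# Toy latency tracer: nested context managers record (name, parent, duration)
# spans, mimicking how distributed tracers attribute end-to-end latency.
import time
from contextlib import contextmanager

spans = []   # collected spans: (name, parent, duration_seconds)
_stack = []  # currently open spans, to track parent/child nesting

@contextmanager
def span(name):
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _stack.pop()
        spans.append((name, parent, time.perf_counter() - start))

# A request that fans out to two "backend" calls.
with span("handle_request"):
    with span("auth_service"):
        time.sleep(0.01)
    with span("db_query"):
        time.sleep(0.02)

# The slowest child span of the request is the likely root cause.
slowest = max((s for s in spans if s[1] == "handle_request"),
              key=lambda s: s[2])
print(slowest[0])  # db_query
```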
examples: https://github.com/kslisenko/java-performance
http://javaday.org.ua/kanstantsin-slisenka-profiling-distributed-java-applications/
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit...
HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
ASP.NET using C# notes, Sem 5 (WE-IT Tutorials).
Review of .NET frameworks, introduction to C#, variables and expressions, flow control, functions, debugging and error handling, OOP with C#, defining classes and class members.
Assembly, Components of Assembly, Private and Shared Assembly, Garbage Collector, JIT compiler. Namespaces Collections, Delegates and Events. Introduction to ASP.NET 4: Microsoft.NET framework, ASP.NET lifecycle. CSS: Need of CSS, Introduction to CSS, Working with CSS with visual developer.
ASP.NET server controls: Introduction, How to work with button controls, Textboxes, Labels, checkboxes and radio buttons, list controls and other web server controls, web.config and global.asax files. Programming ASP.NET web pages: Introduction, data types and variables, statements, organizing code, object oriented basics.
Validation Control: Introduction, basic validation controls, validation techniques, using advanced validation controls. State Management: Using view state, using session state, using application state, using cookies and URL encoding. Master Pages: Creating master pages, content pages, nesting master pages, accessing master page controls from a content page. Navigation: Introduction to use the site navigation, using site navigation controls.
Databases: Introduction, using SQL data sources, GridView Control, DetailsView and FormView Controls, ListView and DataPager controls, Using object datasources. ASP.NET Security: Authentication, Authorization, Impersonation, ASP.NET provider model
LINQ: Operators, implementations, LINQ to Objects, LINQ to XML, LINQ to ADO.NET, query syntax. ASP.NET AJAX: Introducing AJAX, working of AJAX, using ASP.NET AJAX server controls. jQuery: Introduction to jQuery, the jQuery UI library, working of jQuery.
INET is an Internet framework for the OMNeT++ simulator.
This is a guide to getting started with INET.
INET for Dummies.
INET Tutorial
If this guide is hard to follow, please comment or offer suggestions.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
VEED: Video Encoding Energy and CO2 Emissions Dataset for AWS EC2 instances
Alpen-Adria-Universität
Video streaming constitutes 65% of global internet traffic, prompting an investigation into its energy consumption and CO2 emissions. Video encoding, a computationally intensive part of streaming, has moved to cloud computing for its scalability and flexibility. However, cloud data centers’ energy consumption, especially for video encoding, poses environmental challenges. This paper presents VEED, a FAIR Video Encoding Energy and CO2 Emissions Dataset for Amazon Web Services (AWS) EC2 instances. The dataset also contains the duration, CPU utilization, and cost of the encoding. To prepare this dataset, we introduce a model and conduct a benchmark to estimate the energy and CO2 emissions of different Amazon EC2 instances during the encoding of 500 video segments with various complexities and resolutions using Advanced Video Coding (AVC) and High-Efficiency Video Coding (HEVC). VEED and its analysis can provide valuable insights for video researchers and engineers to model energy consumption, manage energy resources, and distribute workloads, contributing to the sustainability and cost-effectiveness of cloud-based video encoding. VEED is available on GitHub.
Addressing climate change requires a global decrease in greenhouse gas (GHG) emissions. In today’s digital landscape, video streaming significantly influences internet traffic, driven by the widespread use of mobile devices and the rising popularity of streaming platforms. This trend emphasizes the importance of evaluating energy consumption and developing sustainable, eco-friendly video streaming solutions with a low Carbon Dioxide (CO2) footprint. We developed a specialized tool, released as an open-source library called GREEM, addressing this pressing concern. This tool measures video encoding and decoding energy consumption and facilitates benchmark tests. It monitors the computational impact on hardware resources and offers various analysis cases. GREEM is helpful for developers, researchers, service providers, and policy makers interested in minimizing the energy consumption of video encoding and streaming.
Optimal Quality and Efficiency in Adaptive Live Streaming with JND-Aware Low ...
Alpen-Adria-Universität
In HTTP adaptive live streaming applications, video segments are encoded at a fixed set of bitrate-resolution pairs known as the bitrate ladder. Live encoders use the fastest available encoding configuration, referred to as a preset, to ensure the minimum possible latency in video encoding. However, an optimized preset and an optimized number of CPU threads for each encoding instance may result in (i) increased quality and (ii) more efficient CPU utilization while encoding. For low-latency live encoders, the encoding speed is expected to be greater than or equal to the video framerate. In this light, this paper introduces a Just Noticeable Difference (JND)-Aware Low-latency Encoding Scheme (JALE), which uses random forest-based models to jointly determine the optimized encoder preset and thread count for each representation, based on video complexity features, the target encoding speed, the total number of available CPU threads, and the target encoder. Experimental results show that, on average, JALE yields a quality improvement of 1.32 dB PSNR and 5.38 VMAF points at the same bitrate, compared to fastest-preset encoding of the HTTP Live Streaming (HLS) bitrate ladder using the open-source x265 HEVC encoder with eight CPU threads per representation. These enhancements are achieved while maintaining the desired encoding speed. Furthermore, on average, JALE results in an overall storage reduction of 72.70%, a 63.83% reduction in the total number of CPU threads used, and a 37.87% reduction in the overall encoding time, considering a JND of six VMAF points.
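JALE's selection step can be illustrated with a deliberately simplified sketch. This is our own toy stand-in, not the paper's random-forest models: the invented `predicted_speed` function plays the role of the learned speed predictor, and the search prefers slower presets (better quality) and fewer threads (better CPU utilization) subject to the target framerate.

```python
# Toy joint preset/thread selection: pick the slowest preset and the fewest
# threads whose predicted encoding speed still meets the target framerate.
PRESETS = ["veryslow", "slow", "medium", "fast", "ultrafast"]  # slow -> fast

def predicted_speed(preset, threads, complexity):
    """Invented stand-in for a learned model: frames/second estimate."""
    base = {"veryslow": 5, "slow": 15, "medium": 30,
            "fast": 60, "ultrafast": 120}
    return base[preset] * min(threads, 8) ** 0.5 / complexity

def select(target_fps, max_threads, complexity):
    # Slower presets first (quality), fewer threads first (CPU efficiency).
    for preset in PRESETS:
        for threads in range(1, max_threads + 1):
            if predicted_speed(preset, threads, complexity) >= target_fps:
                return preset, threads
    return "ultrafast", max_threads  # fall back to the fastest configuration

print(select(30, 8, complexity=1.0))  # ('slow', 4)
```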
In the context of rising environmental concerns, this paper introduces VEEP, an architecture designed to predict energy consumption and CO2 emissions in cloud-based video encoding. VEEP combines video analysis with machine learning (ML)-based energy prediction and real-time carbon intensity, enabling precise estimations of CPU energy usage and CO2 emissions during the encoding process. It is trained on the Video Complexity Dataset (VCD) and encoding results from various AWS EC2 instances. VEEP achieves high accuracy, indicated by an R²-score of 0.96, a mean absolute error (MAE) of 2.41 × 10⁻⁵, and a mean squared error (MSE) of 1.67 × 10⁻⁹. An important finding is the potential to reduce emissions by up to 375 times when comparing cloud instances and their locations. These results highlight the importance of considering environmental factors in cloud computing.
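For readers unfamiliar with the accuracy metrics quoted above, here are their standard definitions, evaluated on invented numbers (not the paper's data):

```python
# Standard regression metrics: R^2, MAE, and MSE, implemented from their
# textbook definitions.
def r2(y, p):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mae(y, p):
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def mse(y, p):
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

# Invented energy readings (kWh) vs. model predictions.
actual    = [0.010, 0.020, 0.030, 0.040]
predicted = [0.011, 0.019, 0.031, 0.038]

print(round(r2(actual, predicted), 3))  # 0.986
```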
In today’s dynamic streaming landscape, where viewers access content on various devices and encounter fluctuating network conditions, optimizing video delivery for each unique scenario is imperative. Video content complexity analysis, content-adaptive video coding, and multi-encoding methods are fundamental for the success of adaptive video streaming, as they serve crucial roles in delivering high-quality video experiences to a diverse audience. Video content complexity analysis allows us to comprehend the video content’s intricacies, such as motion, texture, and detail, providing valuable insights to enhance encoding decisions. By understanding the content’s characteristics, we can efficiently allocate bandwidth and encoding resources, thereby improving compression efficiency without compromising quality. Content-adaptive video coding techniques built upon this analysis involve dynamically adjusting encoding parameters based on the content complexity. This adaptability ensures that the video stream remains visually appealing and artifacts are minimized, even under challenging network conditions. Multi-encoding methods further bolster adaptive streaming by offering faster encoding of multiple representations of the same video at different bitrates. This versatility reduces computational overhead and enables efficient resource allocation on the server side. Collectively, these technologies empower adaptive video streaming to deliver optimal visual quality and uninterrupted viewing experiences, catering to viewers’ diverse needs and preferences across a wide range of devices and network conditions. Embracing video content complexity analysis, content-adaptive video coding, and multi-encoding methods is essential to meet modern video streaming platforms’ evolving demands and create immersive experiences that captivate and engage audiences. In this light, this dissertation proposes contributions categorized into four classes:
Empowerment of Atypical Viewers via Low-Effort Personalized Modeling of Video...
Alpen-Adria-Universität
Quality of Experience (QoE) and QoE models are of an increasing importance to networked systems. The traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers the average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
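The sampler/modeler split described above is the classic active-learning loop. The sketch below is our own toy illustration, not the iQoE algorithm: the invented `viewer_score` function plays the hidden viewer, a nearest-neighbour lookup plays the modeler, and the sampler always queries the candidate farthest from anything already rated.

```python
# Toy active-learning loop: query the candidate the model is most
# uncertain about (here: farthest from any labeled point).

# Hidden "true" viewer: likes bitrate, strongly dislikes stalls (invented).
def viewer_score(bitrate_mbps, stalls):
    return max(0.0, min(5.0, bitrate_mbps - 1.5 * stalls))

candidates = [(b, s) for b in (1, 2, 3, 4, 5) for s in (0, 1, 2)]
labeled = {}  # (bitrate, stalls) -> score the viewer gave

def predict(x):
    """Modeler: 1-nearest-neighbour over the labeled points."""
    nearest = min(labeled,
                  key=lambda p: (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2)
    return labeled[nearest]

def uncertainty(x):
    """Sampler: distance to the closest labeled point (far == uncertain)."""
    return min((p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2 for p in labeled)

labeled[candidates[0]] = viewer_score(*candidates[0])  # seed with one rating
for _ in range(5):                                     # five more queries
    query = max((c for c in candidates if c not in labeled), key=uncertainty)
    labeled[query] = viewer_score(*query)

print(len(labeled))  # 6 ratings collected
```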
Optimizing Video Streaming for Sustainability and Quality: The Role of Prese...
Alpen-Adria-Universität
HTTP Adaptive Streaming (HAS) methods divide a video into smaller segments, encoded at multiple pre-defined bitrates to construct a bitrate ladder. Bitrate ladders are usually optimized per title over several dimensions, such as bitrate, resolution, and framerate. This paper adds a new dimension to the bitrate ladder by considering the energy consumption of the encoding process. Video encoders often have multiple pre-defined presets to balance the trade-off between encoding time, energy consumption, and compression efficiency. Faster presets disable certain coding tools defined by the codec to reduce the encoding time at the cost of reduced compression efficiency. Firstly, this paper evaluates the energy consumption and compression efficiency of different x265 presets for 500 video sequences. Secondly, optimized presets are selected for various representations in a bitrate ladder based on the results to guarantee a minimal drop in video quality while saving energy. Finally, a new per-title model, which optimizes the trade-off between compression efficiency and energy consumption, is proposed. The experimental results show that decreasing the VMAF score by 0.15 and 0.39 while choosing an optimized preset results in encoding energy savings of 70% and 83%, respectively.
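The per-title trade-off described above can be sketched as a simple constrained selection. This is our own simplification with invented measurements, not the paper's model: among measured (preset, VMAF, energy) points, pick the preset that saves the most energy while keeping the VMAF drop within a tolerance relative to the best-quality preset.

```python
# Toy per-title preset selection: least energy subject to a quality floor.
measurements = [
    # (preset, VMAF, encoding energy in joules) -- invented numbers
    ("veryslow",  95.00, 1000.0),
    ("medium",    94.80,  400.0),
    ("veryfast",  94.65,  300.0),
    ("ultrafast", 92.00,  200.0),
]

def pick_preset(points, max_vmaf_drop):
    best_quality = max(v for _, v, _ in points)
    eligible = [p for p in points if best_quality - p[1] <= max_vmaf_drop]
    return min(eligible, key=lambda p: p[2])  # least energy among eligible

print(pick_preset(measurements, max_vmaf_drop=0.39)[0])  # veryfast
```

With the numbers above, tolerating a 0.39 VMAF drop lets the selection move from the 1000 J slowest preset down to a 300 J one, which mirrors the shape of the paper's reported savings.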
Energy-Efficient Multi-Codec Bitrate-Ladder Estimation for Adaptive Video Str...
Alpen-Adria-Universität
With the emergence of multiple modern video codecs, streaming service providers are forced to encode, store, and transmit bitrate ladders of multiple codecs separately, consequently suffering from additional energy costs for encoding, storage, and transmission.
To tackle this issue, we introduce an online energy-efficient Multi-Codec Bitrate ladder Estimation scheme (MCBE) for adaptive video streaming applications. In MCBE, quality representations within the bitrate ladder of new-generation codecs (e.g., HEVC, AV1) that lie below the predicted rate-distortion curve of the AVC codec are removed. Moreover, perceptual redundancy between representations of the bitrate ladders of the considered codecs is also minimized based on a Just Noticeable Difference (JND) threshold. To this end, random forest-based models predict the VMAF of bitrate ladder representations of each codec. In a live streaming session where all clients support the decoding of AVC, HEVC, and AV1, MCBE achieves impressive results, reducing cumulative encoding energy by 56.45%, storage energy usage by 94.99%, and transmission energy usage by 77.61% (considering a JND of six VMAF points). These energy reductions are in comparison to a baseline bitrate ladder encoding based on current industry practice.
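The two pruning rules above can be sketched in a few lines. This is our own toy simplification with invented numbers, not MCBE itself: a stand-in function plays the predicted AVC rate-quality curve, and rungs of a hypothetical HEVC ladder are dropped if AVC already matches them or if they sit within one JND of the previously kept rung.

```python
# Toy MCBE-style pruning: remove HEVC rungs below the AVC curve, then
# remove rungs within a JND (6 VMAF points) of the previous kept rung.

def avc_vmaf(bitrate_kbps):
    """Invented stand-in for the predicted AVC rate-quality curve."""
    return min(100.0, 40.0 + 12.0 * (bitrate_kbps / 1000.0))

# (bitrate_kbps, predicted VMAF) rungs of a hypothetical HEVC ladder.
hevc_ladder = [(500, 52.0), (1000, 45.0), (2000, 70.0),
               (4000, 88.0), (6000, 92.0)]

JND = 6.0
kept, last_vmaf = [], None
for bitrate, vmaf in hevc_ladder:
    if vmaf <= avc_vmaf(bitrate):
        continue                      # AVC already does as well here
    if last_vmaf is not None and vmaf - last_vmaf < JND:
        continue                      # within one JND of the previous rung
    kept.append((bitrate, vmaf))
    last_vmaf = vmaf

print(kept)  # [(500, 52.0), (2000, 70.0)]
```

Only two of the five hypothetical rungs survive, which is the mechanism behind the storage and transmission savings reported above.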
Machine Learning Based Resource Utilization Prediction in the Computing Conti...
Alpen-Adria-Universität
This paper presents UtilML, a novel approach for tackling resource utilization prediction challenges in the computing continuum. UtilML leverages Long-Short-Term Memory (LSTM) neural networks, a machine learning technique, to forecast resource utilization accurately. The effectiveness of UtilML is demonstrated through its evaluation of data extracted from a real GPU cluster in a computing continuum infrastructure comprising more than 1800 computing devices. To assess the performance of UtilML, we compared it with two related approaches that utilize a Baseline-LSTM model. Furthermore, we analyzed the LSTM results against User-Predicted values provided by GPU cluster owners for task deployment with estimated allocation values. The results indicate that UtilML outperformed user predictions by 2% to 27% for CPU utilization prediction. For memory prediction, UtilML variants excelled, showing improvements of 17% to 20% compared to user predictions.
The exponential growth of computer game streaming has led to the development of Quality of Experience (QoE) metrics to evaluate user satisfaction and enjoyment during online gameplay and live streaming. Adaptive Bitrate (ABR) streaming is a recent technology that has been suggested to improve QoE. This method enhances the streaming experience, upholds visual quality, minimizes stall events, and boosts player retention. It achieves this by estimating network bottlenecks and selecting appropriate versions of the content that best match the available bandwidth rather than adjusting encoding parameters. To investigate the correlation between quality switching and stall events, a subjective test was conducted separately and comparatively with 71 participants. For more detailed and in-depth research, video games were analyzed with the Video Complexity Analyzer (VCA) tool and divided into three categories of different genres, camera view, and temporal complexity heatmap from the two sets of normal and action scenes. This study seeks to shed light on three unresolved issues pertinent to QoE in game streaming: (i) the user preferences towards quality switching and stall events across varied scenes and games, (ii) the user inclinations towards either a single, prolonged stall event or multiple, shorter stall events, and (iii) the impact of conspicuous quality switching on the user’s QoE. Results from the study provided valuable insights, both qualitatively and quantitatively. The study found a marked preference among users for quality switching over stall events across all types of game streaming, irrespective of the scene’s intensity. Furthermore, it was observed that multiple short stall events were generally favored over a single long stall event in streaming first-person shooter games.
Interestingly, approximately half of the participants remained oblivious to quality switching during their game viewing sessions, and among those who noticed a change in quality, the alteration did not significantly impact their perceived QoE.
Network-Assisted Delivery of Adaptive Video Streaming Services through CDN, S...Alpen-Adria-Universität
Multimedia applications, mainly video streaming services, are currently the dominant source of network load worldwide. In recent Video-on-Demand (VoD) and live video streaming services, traditional streaming delivery techniques have been replaced by adaptive solutions based on the HTTP protocol. Current trends toward high-resolution (e.g., 8K) and/or low- latency VoD and live video streaming pose new challenges to end-to-end (E2E) bandwidth demand and have stringent delay requirements. To do this, video providers typically rely on Content Delivery Networks (CDNs) to ensure that they provide scalable video streaming services. To support future streaming scenarios involving millions of users, it is necessary to increase the CDNs’ efficiency. It is widely agreed that these requirements may be satisfied by adopting emerging networking techniques to present Network-Assisted Video Streaming (NAVS) methods. Motivated by this, this thesis goes one step beyond traditional pure client- based HAS algorithms by incorporating (an) in-network component(s) with a broader view of the network to present completely transparent NAVS solutions for HAS clients.
Over the last recent years, video streaming traffic has become the dominating service over mobile networks. The two main reasons for the growth of video streaming traffic are the improved capabilities of mobile devices and the emergence of HTTP Adaptive Streaming (HAS). Hence, there is a demand for new technologies to cope with the increasing traffic load while improving clients’ Quality of Experience (QoE). The network plays a crucial role in the video streaming process. One of the key technologies on the network side is Multi-access Edge Computing (MEC), which has several key characteristics: computing power, storage, proximity to the clients and access to network and player metrics. Thus, it is possible to deploy mechanisms at the MEC node that assist video streaming.
This thesis investigates how MEC capabilities can be leveraged to support video streaming delivery, specifically to improve the QoE, reduce latency or increase storage and bandwidth savings.
In the last decades, video streaming has been developing significantly. Among cur- rent technologies, HTTP Adaptive Streaming (HAS) is considered the de-facto approach in multimedia transmission over the internet. In HAS, the video is split into temporal segments with the same duration (e.g., 4s), each of which is then encoded into different quality versions and stored at servers. The end user sends requests to the server to retrieve segments with specific quality versions determined by an Adaptive Bitrate (ABR) algorithm for the purpose of adapting the throughput fluctuation. Though the majority of HAS-based media services function well even under throughput restrictions and variations, there are still significant challenges for multimedia systems, especially the tradeoff among the increasing content complexity, various time-related requirements, and Quality of Experience (QoE). Content complexity encompasses the increased demands for data, such as high-resolution videos and high frame rates, as well as novel content formats, such as virtual reality (VR) and augmented reality (AR). Time-related requirements include – but are not limited to – start-up delay and end-to-end latency. QoE can be defined as the level of satisfaction or frustration experienced by the user of an application or service. Optimizing for one aspect usually negatively impacts at least one of the other two aspects. This thesis tackles critical open research questions in the context of HAS that significantly impact the QoE at the client side.
VE-Match: Video Encoding Matching-based Model for Cloud and Edge Computing In...Alpen-Adria-Universität
The considerable surge in energy consumption within data centers can be attributed to the exponential rise in demand for complex computing workflows and storage resources. Video streaming applications are both compute and storage-intensive and account for the majority of today’s internet services. In this work, we designed a video encoding application consisting of codec, bitrate, and resolution set for encoding a video segment. Then, we propose VE-Match, a matching-based method to schedule video encoding applications on both Cloud and Edge resources to optimize costs and energy consumption. Evaluation results on a real computing testbed federated between Amazon Web Services (AWS) EC2 Cloud instances and the Alpen-Adria University (AAU) Edge server reveal that VE-Match achieves lower costs by 17%-78% in the cost-optimized scenarios compared to the energy-optimized and tradeoff between cost and energy. Moreover, VE-Match improves the video encoding energy consumption by 38%-45% and gCO2 emission by up to 80 % in the energy-optimized scenarios compared to the cost-optimized and tradeoff between cost and energy.
Energy Consumption in Video Streaming: Components, Measurements, and StrategiesAlpen-Adria-Universität
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. To address this challenge, it is essential to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user’s screen. Then, it is critical to measure energy consumption for each component accurately and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming [1, 2, 3]. These components are classified into three categories [4]: (i) data centers, which include encoding, packaging, and storage on cloud data centers; (ii) networks, which include core network and access networks; and (iii) end-user devices which involve decoding, players, hardware, etc.
In addition to identifying the primary components of video streaming that affect energy consumption, it is important to conduct a comprehensive analysis of the entire video streaming. It is also essential to balance energy optimization and service quality to ensure that energyefficient strategies are implemented without sacrificing the quality of video streaming services.
This talk aims to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Exploring the Energy Consumption of Video Streaming: Components, Challenges, ...Alpen-Adria-Universität
The rapid growth of video streaming usage is a significant source of energy consumption, driven by improved internet connections and service offerings, the quick development of video entertainment, the deployment of Ultra High-Definition, Virtual and Augmented Reality, as well as an increasing number of video surveillance and IoT applications. However, it is essential to note that these advancements come at the cost of energy consumption. To address this challenge, it is essential to understand the various components involved in energy consumption during video streaming, ranging from video encoding to decoding and displaying the video on the end user’s screen. Then, it is critical to accurately measure energy consumption for each component and conduct an in-depth analysis to develop energy-efficient strategies that optimize video streaming. I categorize these components into three categories: (i) data centers, (ii) networks, and (iii) end-user devices.
In this talk, my objective is to provide insights into the components of video streaming that contribute to energy consumption and highlight the challenges associated with measuring their energy usage. I will also introduce the tools that can be used for energy measurements for those components and the possible and associated strategies that lie within energy efficiency. By accurately measuring energy consumption, digital media companies can effectively monitor and control their energy usage, ultimately leading to cost savings and improved sustainability.
Video Coding Enhancements for HTTP Adaptive Streaming Using Machine LearningAlpen-Adria-Universität
Video is evolving into a crucial tool as daily lives are increasingly centered around visual communication. The demand for better video content is constantly rising, from entertainment to business meetings. The delivery of video content to users is of utmost significance. HTTP adaptive streaming, in which the video content adjusts to the changing network circumstances, has become the de-facto method for delivering internet video.
As video technology continues to advance, it presents a number of challenges, one of which is the large amount of data required to describe a video accurately. To address this issue, it is necessary to have a powerful video encoding tool. Historically, these efforts have relied on hand-crafted tools and heuristics. However, with the recent advances in machine learning, there has been increasing exploration into using these techniques to enhance video coding performance.
This thesis proposes eight contributions that enhance video coding performance for HTTP adaptive streaming using machine learning.
Optimizing QoE and Latency of Live Video Streaming Using Edge Computing a...Alpen-Adria-Universität
Nowadays, HTTP Adaptive Streaming (HAS) has become the de-facto standard for delivering video over the Internet. More users have started generating and delivering high-quality live streams (usually 4K resolution) through popular online streaming platforms, resulting in a rise in live streaming traffic. Typically, the video contents are generated by streamers and watched by many audiences, geographically distributed in various locations far away from the streamers. The resource limitation in the network (e.g., bandwidth) is a challenging issue for network and video providers to meet the users’ requested quality. This dissertation leverages edge computing capabilities and in-network intelligence to design, implement, and evaluate approaches to optimize Quality of Experience (QoE) and end-to-end (E2E) latency of live HAS. In addition, improving transcoding performance and optimizing the cost of running live HAS services and the network’s backhaul utilization are considered. Motivated by the mentioned issue, the dissertation proposes five contributions in two classes: optimizing resource utilization and light-weight transcoding.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
National Security Agency - NSA mobile device best practices
An Introduction to OMNeT++ 5.1
1. An Introduction to OMNeT++ 5.1
based on documentation from http://www.omnetpp.org/
Christian Timmerer
October 2017
Change Log:
• Oct’17: Update to 5.1
• Oct’15: Update to 5.0
• Oct’14: Update to 4.5
• Oct’13: Update to 4.3
• Oct’12: Update to 4.2
• Oct’11: Minor corrections and updates
• Oct’10: Update to 4.1
• Oct’09: Update to 4.0 incl. IDE
• Oct’08: Initial version based on OMNeT++ 3.3
Alpen-Adria-Universität Klagenfurt (AAU) Faculty of Technical Sciences (TEWI)
Institute of Information Technology (ITEC) Multimedia Communication (MMC)
http://blog.timmerer.com @timse7 mailto:christian.timmerer@itec.aau.at
Chief Innovation Officer (CIO) at bitmovin GmbH
http://www.bitmovin.com christian.timmerer@bitmovin.com
2. Outline
• Introduction
• OMNeT++ in a Nutshell
– What does OMNeT++ provide then?
– OK. What does an OMNeT++ simulation model look like?
– How do I run this thing?
– OK! Now, how do I program a model in C++?
• Working with OMNeT++: Flow Chart
• TicToc Tutorial
– Getting started
– Enhancing the 2-node TicToc
– Turning into a real network
– Adding statistics collection
– Visualizing the results with the OMNeT++ IDE
• Conclusions
October 2015 C. Timmerer - AAU/TEWI/ITEC/MMC
3. Introduction
• Discrete event simulator
– Hierarchically nested modules
– Modules communicate using messages through channels
• Basic machinery and tools to write simulations
– Does not provide any components specifically for computer network
simulations, queuing network simulations, system architecture simulations or
any other area
• Written in C++
– Source code publicly available
– Simulation model for Internet, IPv6, Mobility, etc. available
• Free for academic use
– Commercial version: OMNESTTM
• Pros
– Well structured, highly modular, not limited to network protocol simulations
(e.g., like ns2)
• Cons
– Relatively young and only a few simulation models – no longer
true: it is now mature, comes with an IDE, good documentation, and
simulation models are available
4. OMNeT++ in a Nutshell:
What does OMNeT++ provide then?
• C++ class library
– Simulation kernel
– Utility classes (for random number generation, statistics collection, topology
discovery etc.)
used to create simulation components (simple modules and channels)
• Infrastructure to assemble simulations
– Network Description Language (NED)
– .ini files
• Runtime user interfaces
– Tkenv
– Cmdenv
• Eclipse-based simulation IDE for designing, running and evaluating
simulations
• Extension interfaces for real-time simulation, emulation, MRIP, parallel
distributed simulation, database connectivity and so on
5. OMNeT++ in a Nutshell:
OK. What does an OMNeT++ simulation model look like?
• OMNeT++ provides a component-based
architecture
• Models assembled from reusable
components = modules (can be combined in
various ways like LEGO blocks)
• Modules can be connected with each other
through gates, and combined to form
compound modules
// Ethernet CSMA/CD MAC
simple EtherMAC {
parameters:
string address; // others omitted for brevity
gates:
input phyIn; // to physical layer or the network
output phyOut; // to physical layer or the network
input llcIn; // to EtherLLC or higher layer
output llcOut; // to EtherLLC or higher layer
}
// Host with an Ethernet interface
module EtherStation {
parameters: ...
gates: ...
input in; // for conn. to switch/hub, etc
output out;
submodules:
app: EtherTrafficGen;
llc: EtherLLC;
mac: EtherMAC;
connections:
app.out --> llc.hlIn;
app.in <-- llc.hlOut;
llc.macIn <-- mac.llcOut;
llc.macOut --> mac.llcIn;
mac.phyIn <-- in;
mac.phyOut --> out;
}
network EtherLAN {
... (submodules of type EtherStation, etc) ...
}
6. OMNeT++ in a Nutshell:
How do I run this thing?
• Building the simulation program
– opp_makemake -deep; make
– -deep covers the whole source tree under the
make directory
– -X exclude directories
– -I required in case subdirectories are used
• Run executable
– Edit omnetpp.ini and
start ./<exec>
– ./<exec> <name1>.ini <name2>.ini
• By default all, but a graphical user
interface, Tkenv (or Cmdenv), can also be selected
– -u all
– -u Tkenv or -u Cmdenv
[General]
network = EtherLAN
*.numStations = 20
**.frameLength = normal(200,1400)
**.station[0].numFramesToSend = 5000
**.station[1-5].numFramesToSend = 1000
**.station[*].numFramesToSend = 0
7. OMNeT++ in a Nutshell:
OK! Now, how do I program a model in C++?
• Simple modules are C++ classes
– Subclass from cSimpleModule
– Redefine a few virtual member functions
– Register the new class with OMNeT++ via the Define_Module()
macro
• Modules communicate via messages (timers/timeouts
are self-messages)
– cMessage or subclass thereof
– Messages are delivered to handleMessage(cMessage *msg)
– Send messages to other modules: send(cMessage *msg, const
char *outGateName)
– Self-messages (i.e., timers) can be scheduled via
scheduleAt(simtime_t time, cMessage *msg) and canceled via
cancelEvent(cMessage *msg)
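The message-handling and timer APIs above can be sketched in a minimal module (hypothetical module name PingNode; this assumes the OMNeT++ 5.x headers, so it is an illustrative fragment rather than a standalone program):

```cpp
#include <omnetpp.h>
using namespace omnetpp;

// Hypothetical module illustrating handleMessage(), send() and scheduleAt()
class PingNode : public cSimpleModule {
  protected:
    cMessage *timeout = nullptr;

    virtual void initialize() override {
        timeout = new cMessage("timeout");        // a self-message acts as a timer
        scheduleAt(simTime() + 1.0, timeout);     // fire 1s of simulation time from now
    }
    virtual void handleMessage(cMessage *msg) override {
        if (msg == timeout) {                     // our timer expired
            send(new cMessage("ping"), "out");    // send on the gate named "out"
            scheduleAt(simTime() + 1.0, timeout); // re-arm the timer
        } else {
            delete msg;                           // a message arrived from another module
        }
    }
    virtual void finish() override { cancelAndDelete(timeout); }
};

Define_Module(PingNode);  // register the class with OMNeT++
```

Note that the same handleMessage() entry point receives both self-messages (timers) and messages from other modules; comparing pointers, as above, is the usual way to tell them apart.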
8. OMNeT++ in a Nutshell:
OK! Now, how do I program a model in C++? (cont’d)
• cMessage data members
– name, length, kind, etc.
– encapsulate(cMessage *msg), decapsulate() to facilitate
protocol simulations
• .msg file for user-defined message types
– OMNeT++ opp_msgc to generate C++ class: _m.h and
_m.cc
– Support inheritance, composition, array members, etc.
message NetworkPacket {
fields:
int srcAddr;
int destAddr;
}
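Assuming the NetworkPacket message type above has been run through opp_msgc (which generates getter/setter methods per field, e.g. setSrcAddr()/getSrcAddr()), using the generated class might look like this sketch (the surrounding module code is omitted, so this is a fragment, not a complete program):

```cpp
#include "NetworkPacket_m.h"   // header generated by opp_msgc from NetworkPacket.msg

// Inside a cSimpleModule member function (sketch):
NetworkPacket *pkt = new NetworkPacket("pkt");
pkt->setSrcAddr(1);            // setters generated from the 'fields' section
pkt->setDestAddr(2);
send(pkt, "out");              // transmit like any other cMessage

// On the receiving side, in handleMessage(cMessage *msg):
NetworkPacket *rcvd = check_and_cast<NetworkPacket *>(msg);
int dest = rcvd->getDestAddr();
```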
9. OMNeT++ in a Nutshell:
OK! Now, how do I program a model in C++? (cont’d)
• To simulate network protocol stacks, control_info may carry
auxiliary information to facilitate communication between
protocol layers
– cMessage's setControlInfo(cPolymorphic *ctrl) and
removeControlInfo()
• Other cSimpleModule virtual member functions
– initialize(), finish(), handleParameterChange(const char
*paramName)
• Reading NED parameters
– par(const char *paramName)
– Typically done in initialize() and values are stored in data
members of the module class
– Returns a cPar object which can be cast to the appropriate type
(long, double, int, etc.) or queried via the object’s doubleValue()
etc. methods
– Support for XML: DOM-like object tree
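As a sketch of the NED side of parameter reading (hypothetical module and parameter names), a module might declare parameters like this, which the C++ code then fetches via par() in initialize():

```ned
simple TrafficGen {
    parameters:
        double sendInterval @unit(s) = default(1s); // read in C++ as par("sendInterval")
        int numPackets = default(100);              // typically cached in a data member
    gates:
        output out;
}
```

Values can then be overridden per run in omnetpp.ini, e.g. **.gen.sendInterval = 0.5s.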
10. Working with OMNeT++: Flow Chart
1. An OMNeT++ model is built from components (modules) which communicate by exchanging messages.
Modules can be nested, that is, several modules can be grouped together to form a compound module.
When creating the model, you need to map your system into a hierarchy of communicating modules.
2. Define the model structure in the NED language. You can edit NED in a text editor or in the graphical
editor of the Eclipse-based OMNeT++ Simulation IDE.
3. The active components of the model (simple modules) have to be programmed in C++, using the
simulation kernel and class library.
4. Provide a suitable omnetpp.ini to hold OMNeT++ configuration and parameters for your model. A config
file can describe several simulation runs with different parameters.
5. Build the simulation program and run it. You'll link the code with the OMNeT++ simulation kernel and one
of the user interfaces OMNeT++ provides. There are command-line (batch) and interactive, graphical user
interfaces.
6. Simulation results are written into output vector and output scalar files. You can use the Analysis Tool in
the Simulation IDE to visualize them. Result files are text-based, so you can also process them with R,
Matlab or other tools.
11. TicToc Tutorial
• Short tutorial to OMNeT++ guides you through an
example of modeling and simulation
• Showing you along the way some of the
commonly used OMNeT++ features
• Based on the Tictoc example simulation:
samples/tictoc – much more useful if you actually
carry out at least the first steps described here
• http://www.omnetpp.org/doc/omnetpp/tictoc-tutorial/
12. Getting Started
• Starting point
– "network" that consists of two nodes (cf. simulation of telecommunications
network)
– One node will create a message and the two nodes will keep passing the same
packet back and forth
– Nodes are called “tic” and “toc”
• Here are the steps you take to implement your first simulation from
scratch:
1. Create a working directory called tictoc, and cd to this directory
2. Describe your example network by creating a topology file
3. Implement the functionality
4. Create the Makefile
5. Compile and link the simulation
6. Create the omnetpp.ini file
7. Start the executable using graphical user interface
8. Press the Run button on the toolbar to start the simulation
9. You can play with slowing down the animation or making it faster
10. You can exit the simulation program …
13. Describe your Example Network by creating a Topology File
// This file is part of an OMNeT++/OMNEST simulation example.
// Copyright (C) 2003 Ahmet Sekercioglu
// Copyright (C) 2003-2008 Andras Varga
// This file is distributed WITHOUT ANY WARRANTY. See the file
// `license' for details on this and other legal matters.
simple Txc1 {
gates:
input in;
output out;
}
// Two instances (tic and toc) of Txc1 connected both ways.
// Tic and toc will pass messages to one another.
network Tictoc1 {
submodules:
tic: Txc1;
toc: Txc1;
connections:
tic.out --> { delay = 100ms; } --> toc.in;
tic.in <-- { delay = 100ms; } <-- toc.out;
}
Tictoc1 is a network assembled from
two submodules tic and toc (instances
from a module type called Txc1).
Connection: tic’s output gate (out) to
toc’s input gate and vice versa with
propagation delay 100ms.
Txc1 is a simple module type, i.e., atomic on
NED level and implemented in C++.
Txc1 has one input gate named in, and one
output gate named out .
14. Implement the Functionality
#include <string.h>
#include <omnetpp.h>
using namespace omnetpp;
class Txc1 : public cSimpleModule {
protected: // The following redefined virtual function holds the algorithm.
virtual void initialize() override;
virtual void handleMessage(cMessage *msg) override;
};
Define_Module(Txc1); // The module class needs to be registered with OMNeT++
void Txc1::initialize() {
// Initialize is called at the beginning of the simulation. To bootstrap the tic-toc-tic-toc process, one of the modules needs
// to send the first message. Let this be `tic'. Am I Tic or Toc?
if (strcmp("tic", getName()) == 0) {
// create and send first message on gate "out". "tictocMsg" is an arbitrary string which will be the name of the message object.
cMessage *msg = new cMessage("tictocMsg");
send(msg, "out");
}
}
void Txc1::handleMessage(cMessage *msg) {
// The handleMessage() method is called whenever a message arrives at the module.
// Here, we just send it to the other module, through gate `out'.
// Because both `tic' and `toc' do the same, the message will bounce between the two.
send(msg, "out");
}
Txc1 simple module represented
as C++ class Txc1; subclass from
cSimpleModule.
Need to redefine two methods
from cSimpleModule: initialize()
and handleMessage().
Create a cMessage and send
it out on gate out.
Send it back.
15. Create the Makefile, Compile, and Link the Simulation
• Create Makefile: opp_makemake
– This command should have now created a
Makefile in the working directory tictoc.
• Compile and link the simulation: make
If you start the executable now, it will complain
that it cannot find the file omnetpp.ini
16. Create the omnetpp.ini File
• … tells the simulation program which network you want to simulate
• … you can pass parameters to the model
• … explicitly specify seeds for the random number generators etc.
# This file is shared by all tictoc simulations.
# Lines beginning with `#' are comments
[General]
# nothing here
[Config Tictoc1]
network = Tictoc1 # this line is for Cmdenv, Tkenv will still let you choose from a dialog
…
[Config Tictoc4]
network = Tictoc4
Tictoc4.toc.limit = 5
…
[Config Tictoc7]
network = Tictoc7
# argument to exponential() is the mean; truncnormal() returns values from
# the normal distribution truncated to nonnegative values
Tictoc7.tic.delayTime = exponential(3s)
Tictoc7.toc.delayTime = truncnormal(3s,1s)
17. Start the Executable, Run, Play, Exit
• ./tictoc will start the simulation environment
• Hit the Run button to start the actual simulation
• Play around, e.g., fast forward, stop, etc.
• Hit File->Exit to close the simulation
18. Enhancing the 2-node TicToc
• Step 2: Refining the graphics, and adding
debugging output
• Step 3: Adding state variables
• Step 4: Adding parameters
• Step 5: Using inheritance
• Step 6: Modeling processing delay
• Step 7: Random numbers and parameters
• Step 8: Timeout, cancelling timers
• Step 9: Retransmitting the same message
19. Step 2: Refining the Graphics, and Adding Debugging Output
20. Step 2: Refining the
Graphics, and Adding
Debugging Output (cont’d)
Debug messages
EV << "Sending initial message\n";
EV << "Received message `" <<
msg->getName() << "', sending it out
again\n";
Open separate output windows for tic and
toc by right-clicking on their icons and
choosing Module output from the menu –
useful for large models.
21. Step 3: Adding State Variables
#include <stdio.h>
#include <string.h>
#include <omnetpp.h>
using namespace omnetpp;
class Txc3 : public cSimpleModule {
private:
int counter; // Note the counter here
protected:
virtual void initialize() override;
virtual void handleMessage(cMessage *msg) override;
};
Define_Module(Txc3);
void Txc3::initialize() {
counter = 10;
/* The WATCH() statement below will let you examine the variable under Tkenv. After doing a few steps in the simulation,
double-click either `tic' or `toc', select the Contents tab in the dialog that pops up, and you'll find "counter" in the list. */
WATCH(counter);
if (strcmp("tic", getName()) == 0) {
EV << "Sending initial message\n";
cMessage *msg = new cMessage("tictocMsg");
send(msg, "out");
}
}
void Txc3::handleMessage(cMessage *msg) {
counter--; // Decrement counter and check its value.
if (counter==0) { /* If counter is zero, delete message. If you run the model, you'll find that the simulation will stop at this point
with the message "no more events". */
EV << getName() << "'s counter reached zero, deleting message\n";
delete msg;
} else { EV << getName() << "'s counter is " << counter << ", sending back message\n"; send(msg, "out"); }
}
22. Step 4: Adding Parameters
• Module parameters have to be declared in the NED file: numeric, string, bool, or xml
simple Txc4 {
parameters:
// whether the module should send out a message on initialization
bool sendMsgOnInit = default(false);
int limit = default(2); // another parameter with a default value
@display("i=block/routing");
gates:
• Modify the C++ code to read the parameter in initialize(), and assign it to the counter.
counter = par("limit");
• Now, we can assign the parameters in the NED file or from omnetpp.ini. Assignments in the NED
file take precedence.
network Tictoc4 {
submodules:
tic: Txc4 {
parameters:
sendMsgOnInit = true;
limit = 8; // tic's limit is 8
@display("i=,cyan"); }
toc: Txc4 { // note that we leave toc's limit unbound here
parameters:
sendMsgOnInit = false;
@display("i=,gold"); }
connections:
We'll turn the "magic number" 10 into a
parameter
23. Step 4: Adding Parameters (cont’d)
• … and the other in omnetpp.ini:
– Tictoc4.toc.limit = 5
• Note that because omnetpp.ini supports wildcards, and
parameters assigned from NED files take precedence
over the ones in omnetpp.ini, we could have used
– Tictoc4.t*c.limit = 5
• or
– Tictoc4.*.limit = 5
• or even
– **.limit = 5
• with the same effect. (The difference between * and **
is that * will not match a dot and ** will.)
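The * vs ** distinction can be illustrated with a small standalone matcher. This is only a sketch of the semantics described above, not OMNeT++'s actual pattern-matching implementation:

```cpp
#include <cassert>

// Minimal sketch of the wildcard semantics: '*' matches any run of
// characters except '.', while '**' matches any run including '.'.
// (NOT OMNeT++'s real matcher -- illustration only.)
static bool matches(const char *pat, const char *s) {
    if (pat[0] == '\0')
        return s[0] == '\0';
    if (pat[0] == '*' && pat[1] == '*') {
        // '**' may swallow any suffix of s, dots included
        for (const char *t = s; ; ++t) {
            if (matches(pat + 2, t)) return true;
            if (*t == '\0') return false;
        }
    }
    if (pat[0] == '*') {
        // '*' may swallow characters up to (but not across) the next dot
        for (const char *t = s; ; ++t) {
            if (matches(pat + 1, t)) return true;
            if (*t == '\0' || *t == '.') return false;
        }
    }
    return pat[0] == s[0] && matches(pat + 1, s + 1);
}
```

For example, `matches("Tictoc4.*.limit", "Tictoc4.toc.limit")` holds, while `matches("*.limit", "Tictoc4.toc.limit")` does not, because `*` cannot cross the dots.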
24. Step 5: Using Inheritance
• Take a closer look: tic and toc differ only in their
parameter values and their display string => use
inheritance
25. Step 6: Modeling Processing Delay
• Hold messages for some time before sending them back; timing is
achieved by the module sending a message to itself (i.e., self-
messages)
• We "send" the self-messages with the scheduleAt() function
scheduleAt(simTime()+1.0, event);
• handleMessage(): differentiate whether a new message has arrived
via the input gate or the self-message came back (timer expired)
if (msg==event) or if (msg->isSelfMessage())
26. Step 6: Modeling Processing Delay (cont’d)
void Txc6::handleMessage(cMessage *msg) {
// There are several ways of distinguishing messages, for example by message
// kind (an int attribute of cMessage) or by class using dynamic_cast
// (provided you subclass from cMessage). In this code we just check if we
// recognize the pointer, which (if feasible) is the easiest and fastest
// method.
if (msg==event) {
// The self-message arrived, so we can send out tictocMsg and NULL out
// its pointer so that it doesn't confuse us later.
EV << "Wait period is over, sending back message\n";
send(tictocMsg, "out");
tictocMsg = NULL;
} else {
// If the message we received is not our self-message, then it must
// be the tic-toc message arriving from our partner. We remember its
// pointer in the tictocMsg variable, then schedule our self-message
// to come back to us in 1s simulated time.
EV << "Message arrived, starting to wait 1 sec...\n";
tictocMsg = msg;
scheduleAt(simTime()+1.0, event);
}
}
27. Step 7: Random Numbers and Parameters
• Change the delay from 1s to a random value:
// The "delayTime" module parameter can be set to values like
// "exponential(5)" (tictoc7.ned, omnetpp.ini), and then here
// we'll get a different delay every time.
simtime_t delay = par("delayTime");
EV << "Message arrived, starting to wait " << delay << " secs...\n";
tictocMsg = msg;
scheduleAt(simTime()+delay, event);
• "Lose" (delete) the packet with a small (hardcoded) probability:
if (uniform(0,1) < 0.1) {
EV << "\"Losing\" message\n";
bubble("message lost");
delete msg;
}
• Assign seed and the parameters in omnetpp.ini:
[General]
seed-0-mt=532569 # or any other 32-bit value
Tictoc7.tic.delayTime = exponential(3s)
Tictoc7.toc.delayTime = truncnormal(3s,1s)
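The two distributions can be sketched in plain C++ <random>, independently of the OMNeT++ RNG API (function names here are assumptions for illustration). exponential() is parameterized by its mean; truncnormal() redraws until the value is nonnegative, which is one simple way to truncate a normal distribution:

```cpp
#include <cassert>
#include <random>

// Standalone sketch of the distributions used above (not the OMNeT++ API).
double exponentialMean(std::mt19937 &rng, double mean) {
    std::exponential_distribution<double> d(1.0 / mean);  // rate = 1/mean
    return d(rng);
}

double truncnormal(std::mt19937 &rng, double mean, double stddev) {
    std::normal_distribution<double> d(mean, stddev);
    double x;
    do { x = d(rng); } while (x < 0.0);  // reject negative draws
    return x;
}
```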
28. Step 8: Timeout, Cancelling Timers
• Transform our model into a stop-and-wait simulation
void Tic8::handleMessage(cMessage *msg) {
if (msg==timeoutEvent) {
// If we receive the timeout event, that means the packet hasn't
// arrived in time and we have to re-send it.
EV << "Timeout expired, resending message and restarting timer\n";
cMessage *msg = new cMessage("tictocMsg");
send(msg, "out");
scheduleAt(simTime()+timeout, timeoutEvent);
} else { // message arrived
// Acknowledgement received -- delete the stored message
// and cancel the timeout event.
EV << "Timer cancelled.\n";
cancelEvent(timeoutEvent);
// Ready to send another one.
cMessage *msg = new cMessage("tictocMsg");
send(msg, "out");
scheduleAt(simTime()+timeout, timeoutEvent);
}
}
29. Step 9: Retransmitting the Same Message
• Keep a copy of the original packet so that we can
re-send it without the need to build it again:
– Keep the original packet and send only copies of it
– Delete the original when toc's acknowledgement
arrives
– Use message sequence number to verify the model
• generateNewMessage() and sendCopyOf()
– avoid handleMessage() growing too large
30. Step 9: Retransmitting the Same Message (cont’d)
cMessage *Tic9::generateNewMessage() {
// Generate a message with a different name every time.
char msgname[20];
sprintf(msgname, "tic-%d", ++seq);
cMessage *msg = new cMessage(msgname);
return msg;
}
void Tic9::sendCopyOf(cMessage *msg) {
// Duplicate message and send the copy.
cMessage *copy = (cMessage *) msg->dup();
send(copy, "out");
}
32. Turning it into a real network
• Step 10: More than two nodes
• Step 11: Channels and inner type definitions
• Step 12: Using two-way connections
• Step 13: Defining our message class
33. Step 10: More Than Two Nodes
34. Step 10: More Than Two Nodes (cont’d)
• tic[0] will generate the message to be sent around
– done in initialize()
– getIndex() function which returns the index of the module
• forwardMessage(): invoke from handleMessage() whenever a message arrives
void Txc10::forwardMessage(cMessage *msg) {
// In this example, we just pick a random gate to send it on.
// We draw a random number between 0 and the size of gate `out[]'.
int n = gateSize("out");
int k = intuniform(0,n-1);
EV << "Forwarding message " << msg << " on port out[" << k << "]\n";
send(msg, "out", k);
}
• When the message arrives at tic[3], its handleMessage() will delete the message
35. Step 11: Channels and Inner Type Definitions
• Network definition is getting quite complex and long, e.g., connections
section
• Note: built-in DelayChannel (import ned.DelayChannel)
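Along the lines of the tictoc tutorial, the connections section can be shortened by declaring a channel as an inner type of the network (module and network names here are assumed from the tutorial):

```ned
network Tictoc11 {
    types:
        // inner type: visible only inside this network
        channel Channel extends ned.DelayChannel {
            delay = 100ms;
        }
    submodules:
        tic[6]: Txc11;
    connections:
        tic[0].out++ --> Channel --> tic[1].in++;
        tic[1].out++ --> Channel --> tic[0].in++;
        // ... remaining node pairs wired the same way
}
```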
36. Step 12: Using Two-Way Connections
• Each node pair is connected with two connections =>
OMNeT++ 4 supports 2-way connections
• The new connections section would look like this:
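A sketch of the two-way variant (gate and type names assumed from the tutorial): the in[]/out[] gate vectors are replaced by a single inout gate vector, and each node pair then needs only one <--> connection:

```ned
simple Txc12 {
    gates:
        inout gate[];  // one two-way gate vector instead of in[] and out[]
}

network Tictoc12 {
    types:
        channel Channel extends ned.DelayChannel { delay = 100ms; }
    submodules:
        tic[6]: Txc12;
    connections:
        tic[0].gate++ <--> Channel <--> tic[1].gate++;
        tic[1].gate++ <--> Channel <--> tic[2].gate++;
        // ... remaining node pairs
}
```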
37. Step 12: Using Two-Way Connections (cont’d)
• We have modified the gate names => some
modifications to the C++ code.
• Note: The special $i and $o suffix after the gate
name allows us to use the connection's two
directions separately.
38. Step 13: Defining our Message Class
• Destination address is no longer hardcoded as tic[3] – add a destination address to the
message
• Subclass cMessage: tictoc13.msg
message TicTocMsg13 {
fields:
int source;
int destination;
int hopCount = 0;
}
• opp_msgc is invoked and it generates tictoc13_m.h and tictoc13_m.cc
• Include tictoc13_m.h into our C++ code, and we can use TicTocMsg13 as any other
class
#include "tictoc13_m.h"
TicTocMsg13 *msg = new TicTocMsg13(msgname);
msg->setSource(src);
msg->setDestination(dest);
return msg;
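As a rough illustration, the generated class provides a getter/setter pair per field, along these lines (heavily simplified sketch; the real generated TicTocMsg13 derives from cMessage and also gets copying and packing support):

```cpp
#include <cassert>
#include <string>

// Simplified sketch of what opp_msgc generates from tictoc13.msg --
// only the per-field accessors, without the cMessage base class.
class TicTocMsg13Sketch {
    std::string name;
    int source = 0;
    int destination = 0;
    int hopCount = 0;   // fields from the message definition above
public:
    explicit TicTocMsg13Sketch(const char *msgname) : name(msgname) {}
    int getSource() const { return source; }
    void setSource(int s) { source = s; }
    int getDestination() const { return destination; }
    void setDestination(int d) { destination = d; }
    int getHopCount() const { return hopCount; }
    void setHopCount(int h) { hopCount = h; }
};
```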
39. Step 13: Defining our Message Class (cont’d)
• Use check_and_cast<> (a checked dynamic cast) instead of the plain C-style cast ((TicTocMsg13 *)msg), which is not safe
void Txc13::handleMessage(cMessage *msg) {
TicTocMsg13 *ttmsg = check_and_cast<TicTocMsg13 *>(msg);
if (ttmsg->getDestination()==getIndex()) {
// Message arrived.
EV << "Message " << ttmsg << " arrived after " << ttmsg->getHopCount() << " hops.\n";
bubble("ARRIVED, starting new one!");
delete ttmsg;
// Generate another one.
EV << "Generating another message: ";
TicTocMsg13 *newmsg = generateMessage();
EV << newmsg << endl;
forwardMessage(newmsg);
} else {
// We need to forward the message.
forwardMessage(ttmsg);
}
}
40. Step 13: Defining our Message Class (cont’d)
41. Adding statistics collection
• Step 14: Displaying the number of packets
sent/received
• Step 15: Adding statistics collection
42. Step 14: Displaying the Number of Packets Sent/Received
• Add two counters to the module class: numSent and numReceived
• Set to zero and WATCH'ed in the initialize() method
• Use the Find/inspect objects dialog (Inspect menu; it is also on the toolbar)
43. Step 14: Displaying the Number of Packets Sent/Received (cont’d)
• This info appears above the module icons using the
t= display string tag
if (hasGUI())
updateDisplay();
void Txc11::updateDisplay(){
char buf[40];
sprintf(buf, "rcvd: %ld sent: %ld", numReceived, numSent);
getDisplayString().setTagArg("t",0,buf);
}
44. Step 15: Adding statistics collection
• Statistics collection is supported by the OMNeT++ simulation kernel and configured via omnetpp.ini
• Example: average hopCount a message has to
travel before reaching its destination
– Record the hop count of every message upon arrival
in an output vector (a sequence of (time, value)
pairs, i.e. a sort of time series)
– Calculate mean, standard deviation, minimum,
maximum values per node, and write them into a file
at the end of the simulation
– Use off-line tools to analyse the output files
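The per-node summary statistics can be sketched in plain C++; this mimics what the histogram object computes but is not the OMNeT++ class itself:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

// Standalone sketch of the summary statistics described above:
// mean, sample standard deviation, minimum and maximum of collected values.
struct HopCountStats {
    long n = 0;
    double sum = 0, sqrsum = 0;
    double mn = std::numeric_limits<double>::max();
    double mx = std::numeric_limits<double>::lowest();

    void collect(double x) {
        ++n; sum += x; sqrsum += x * x;
        mn = std::min(mn, x);
        mx = std::max(mx, x);
    }
    double mean() const { return sum / n; }
    double stddev() const {          // sample standard deviation (n-1)
        return std::sqrt((sqrsum - sum * sum / n) / (n - 1));
    }
};
```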
45. Step 15: Adding statistics collection (cont’d)
• Output vector object (which will record the data into omnetpp.vec) and a
histogram object (which also calculates mean, etc)
class Txc15 : public cSimpleModule {
private:
long numSent;
long numReceived;
cLongHistogram hopCountStats;
cOutVector hopCountVector;
• Upon message arrival, update statistics within handleMessage()
hopCountVector.record(hopcount);
hopCountStats.collect(hopcount);
• hopCountVector.record() call writes the data into Tictoc15-0.vec (will be
deleted each time the simulation is restarted)
46. Step 15: Adding statistics collection (cont’d)
• Scalar data (histogram object) have to be recorded manually, in the finish()
function
void Txc15::finish() {
// This function is called by OMNeT++ at the end of the simulation.
EV << "Sent: " << numSent << endl;
EV << "Received: " << numReceived << endl;
EV << "Hop count, min: " << hopCountStats.getMin() << endl;
EV << "Hop count, max: " << hopCountStats.getMax() << endl;
EV << "Hop count, mean: " << hopCountStats.getMean() << endl;
EV << "Hop count, stddev: " << hopCountStats.getStddev() << endl;
recordScalar("#sent", numSent);
recordScalar("#received", numReceived);
hopCountStats.recordAs("hop count");
}
• The recordScalar() calls in the code above write into the Tictoc15-0.sca file
• Tictoc15-0.sca is not deleted between simulation runs; new data are just appended –
this allows you to jointly analyze data from several simulation runs
47. Step 15: Adding statistics collection (cont’d)
You can also view the data during simulation. In the module inspector's Contents page
you'll find the hopCountStats and hopCountVector objects, and you can open their
inspectors (double-click). They will be initially empty -- run the simulation in Fast (or even
Express) mode to get enough data to be displayed. After a while you'll get something like
this:
48. Step 16: Statistic collection without modifying your model
• OMNeT++ 4.1 provides an additional mechanism to record values
and events
– Any model can emit 'signals' that can carry a value or an object
– The model writer just has to decide what signals to emit, what data
to attach to them, and when to emit them
• The end user can attach 'listeners' to these signals that can process
or record these data items
– This way the model code does not have to contain any code that is
specific to the statistics collection
– The end user can freely add additional statistics without even looking
into the C++ code
• We can safely remove all statistic related variables from our module
– No need for the cOutVector and cLongHistogram classes either
– Need only a single signal that carries the hopCount of the message at
the time of message arrival at the destination
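The emit/listen pattern above can be sketched in plain C++. This is NOT the real OMNeT++ simsignal_t/listener API, only an illustration of the idea: the model merely emits named values, and recording is done by listeners attached from outside the model code:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Toy signal bus: models emit named values; user-attached listeners
// record them, so the model contains no statistics-specific code.
struct SignalBus {
    std::map<std::string, std::vector<std::function<void(long)>>> listeners;

    void subscribe(const std::string &signal, std::function<void(long)> l) {
        listeners[signal].push_back(std::move(l));
    }
    void emit(const std::string &signal, long value) {
        for (auto &l : listeners[signal])
            l(value);                     // deliver to every listener
    }
};
```

A statistics recorder would subscribe to "arrival" and store each emitted hopCount, while the model only calls emit().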
49. Step 16: Statistic collection without modifying your model (cont’d)
• Define our signal: arrivalSignal as identifier
• We must register all signals before using them
• Emit our signal when the message arrives at the
destination node. The finish() method can be deleted!
50. Step 16: Statistic collection without modifying your model (cont’d)
• Declare signals in the NED file
• Configuration via ini file
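Along the lines of the tictoc tutorial (exact property values are assumptions here), the NED declaration pairs a @signal with a @statistic, and the ini file can then adjust what gets recorded:

```ned
simple Txc16 {
    parameters:
        @signal[arrival](type="long");
        @statistic[hopCount](title="hop count"; source="arrival";
                             record=vector,stats; interpolationmode=none);
    // ...
}
```

```ini
# optionally record a histogram of hopCount as well
**.hopCount.result-recording-modes = default,+histogram
```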
51. Visualizing the results with the OMNeT++ IDE
• The OMNeT++ IDE can be used to analyze the results.
• Our last model records the hopCount of a message each
time the message reaches its destination.
• The following plot shows these vectors for nodes 0 and 1.
52. Visualizing the results with the OMNeT++ IDE (cont’d)
• If we apply a mean operation we can see how the
hopCount in the different nodes converge to an average:
53. Visualizing the results with the OMNeT++ IDE (cont’d)
• Mean and the maximum of the hopCount of the messages
for each destination node
– based on the scalar data recorded at the end of the simulation.
54. Visualizing the results with the OMNeT++ IDE (cont’d)
• Histogram of hopCount's distribution.
55. Visualizing the results with the OMNeT++ IDE (cont’d)
• OMNeT++ simulation kernel can record the message
exchanges during the simulation => event log file =>
analyzed with Sequence Chart tool
• E.g., message is routed between the different nodes in
the network
56. Conclusions
• OMNeT++: discrete event simulation system
• OMNeT++ is a
– public-source,
– component-based,
– modular and open-architecture simulation environment
– with strong GUI support and
– an embeddable simulation kernel
• OMNeT++ 5.1 IDE (Eclipse)
– http://www.omnetpp.org/doc/omnetpp/UserGuide.pdf
• http://www.omnetpp.org/documentation