This document discusses integrating multiple autonomous simulators in a simulation environment. It begins by introducing simulation and its uses. It then discusses the motivation for new simulation platforms that can integrate different domain-specific simulators. Existing integration approaches like DIS and HLA are discussed along with their limitations. The document proposes a Reflective Architecture for Integrated Simulation Environments (RAISE) to address challenges in managing complexity, ensuring correctness, and achieving scalability. Key components of RAISE include metamodels, a meta-synchronizer, and synchronization strategies. The document concludes by presenting results from a case study integrating three disparate simulators.
The focus of the presentation is to develop a framework and platform that supports the integration of multiple models, simulations, and data. My aim is to develop methods to integrate a set of simulated environments, making it possible to combine various independent simulators developed by different domain experts. Researchers could then build complex, multi-domain simulations by integrating existing, well-established simulators, allowing them to explore different alternatives and conduct low-cost experiments.
1. Interoperability of Multiple Autonomous Simulators in Integrated Simulation Environments
Leila Jalali
jalalil@uci.edu
http://www.ics.uci.edu/~ljalali/
Prof. Nalini Venkatasubramanian, Prof. Sharad Mehrotra
University of California, Irvine
University of California, Irvine 2011 Spring SIW Leila Jalali
2. Introduction
Simulation: the process of designing a model of a real-world system and conducting experiments with this model. For our purposes it is cheaper, safer, easier, and quicker than experimenting on the real system.
- Planning and decision support: defence simulations, emergency response simulations
- Domain-specific testing and analysis: traffic analysis, human behaviour study (crowd dynamics or evacuation simulators), network simulators
- Immersive synthetic platforms for training
3. Motivation for New Simulation Platforms
- Many available simulators operate on specific domains, e.g. fire simulators, transportation simulators
- Infeasible to build complex simulations entirely from scratch: economic and organizational constraints, increasingly complex requirements
- Need the ability to:
  - Bring together simulators from various modeling domains: metasimulations
  - Model and test larger and more complex scenarios
  - Study cause-effect relationships to integrate simulators
4. Simulation Integration: Historical View
Defense community:
- SIMulator NETworking (SIMNET) (1983–1990): combat simulators
- Distributed Interactive Simulation (DIS) (1990–today): Army projects
- Aggregate Level Simulation Protocol (ALSP) (1991–~1997): war-gaming models
- High Level Architecture (HLA) (1996–today): defence
Internet & gaming community:
- Dungeons and Dragons, adventure board games
- Multi-User Dungeon (MUD) (Xerox PARC), multi-user video games
5. Limitations of Current Approaches
- Existing integrated platforms define a standard model and require the individual simulators to conform to the standard
  - This might not always be possible: the standard may not have been designed to handle the new simulator's needs
  - Current model registration needs a lot of manual work
  - The approaches are costly, time-consuming, fail easily, and are difficult to maintain and to scale from the practitioner's perspective
- HLA:
  ─ Low-level knowledge needed
  ─ Cost issues
  ─ Complexity
  ─ No support for semantic interoperability
  ─ Transparency
  ─ HLA is too big and mainly applied in defense
- Most other work on simulation integration provides specific services for interoperability in a small range of cases
6. General Challenges
- Managing complexity of interoperating systems
  - Analysis of cause-effect relationships
  - Reusability, e.g. of components and models
  - We use metamodels to describe simulator-related metadata: they make the underlying simulator more understandable and abstract away lower-level details of integration and interoperability
- Correctness
  - Ensure the correctness of metasimulations
  - Time synchronization: timing issues and causality correctness
  - Data exchange: data transformations
- Scalability
  - e.g. multiple geographies
7. Reflective Architecture for Integrated Simulation Environments (RAISE)
Architecture overview (diagram): RAISE serves complex applications requiring data exchange and time synchronization across simulators.
- Meta level: Translator, Synchronizer, and Controller components; ontology, consistency constraints, dependencies, and meta-actions; a Pub/Sub channel, a lock-table, and a Lock Manager; an Analyzer & Adaptor connected to external data sources
- Meta models: structural specification (UML diagrams, metamodels) and interactions (dependency sets, interdependent data)
- Base level, connected to the meta level via Observe & Extract and Reflect: INLET (Transportation Model), Drillsim (Activity Model), Fire/Earthquake (Crisis Model), LTESim (Communication Model)
8. Using RAISE: Step by Step
- Reification: extract simulators' metadata from the base-level simulators (using the source code, interfaces, and databases), resulting in metamodels/specifications and data structures at the meta-level
- Analysis of metamodels: extract the model elements and features that need to be integrated; discover inter-dependencies
- Run federation: modified features of the meta data structures that implement the integration are reflected to the base-level simulators
- Ensuring correctness: time synchronization, data management
Workflow (diagram): a parser pre-processes the source code, interfaces, and databases into metamodels, metadata, and inter-dependencies. The federation then loops (execute actions, communicate with the meta-level, generate meta-actions and wrapper-actions, ensure correctness via time synchronization and data transformations) until the end of the simulation, followed by results analysis.
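As a rough illustration, the loop described above might be sketched as follows; all names here (StubSimulator, reify, run_federation) are hypothetical stand-ins, not the actual RAISE implementation.

```python
class StubSimulator:
    """Minimal stand-in for a wrapped base-level simulator."""
    def __init__(self, name, steps):
        self.name, self.steps, self.clock = name, steps, 0

    @property
    def finished(self):
        return self.clock >= self.steps

    def execute_step(self):
        if not self.finished:
            self.clock += 1  # advance one time step

def reify(sims):
    """Pre-processing: extract metadata and inter-dependencies (stubbed)."""
    metadata = {s.name: {"type": "time-stepped"} for s in sims}
    dependencies = [(a.name, b.name) for a in sims for b in sims if a is not b]
    return metadata, dependencies

def run_federation(sims, dependencies):
    """Main loop: execute actions, then synchronize, until all sims finish.
    The dependencies would drive meta-action generation; unused in this stub."""
    horizon = 0
    while not all(s.finished for s in sims):
        for s in sims:
            s.execute_step()
        # time synchronization: track the slowest simulator's clock
        horizon = min(s.clock for s in sims)
    return horizon

sims = [StubSimulator("fire", 3), StubSimulator("drill", 5)]
meta, deps = reify(sims)
final_horizon = run_federation(sims, deps)  # 3: fire stops at step 3
```

The sketch only shows control flow; real reification would parse source code, interfaces, and databases as the slide describes.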
9. Reification
- Major challenge: the complexity associated with reification
- Creole as an Eclipse plug-in: examines source code dependencies and extracts the simulator's features; handles Java simulators, but is not useful for complex and large simulators
- A parser built on a tool for large-scale code repository search: extracts the entities and attributes from a Java/Matlab simulator
- Diagram: reification lifts the simulator's source code, interfaces, and databases from the base level to the meta-level; reflection pushes changes back down
10. Metamodel
- Makes the underlying simulators more understandable
- Abstracts out lower-level details of integration and interoperability
- Needs to be comprehensive and extensible
- Built with UML and the Eclipse Modeling Framework
11. Prototype System Implementation
- Analyzer and Adaptor: provides data transfer between simulators using data translators
- Synchronizer: monitors and controls concurrent execution of multiple simulations
  • Uses concepts from serializability theory in transaction processing
  • Three techniques developed: conservative, optimistic, and hybrid
12. Synchronization in Metasimulation
Ensuring causal correctness while preserving simulators' autonomy:
- A transaction-based approach to modeling the synchronization problem, mapping it to a problem similar to multidatabase concurrency control
- A novel hybrid scheduling strategy for metasimulation synchronization, which adapts itself to the "right" level of pessimism/optimism based on the state of the execution and the underlying dependencies
- A relaxation model (motivated by divergence control mechanisms and weak consistency models) which guarantees bounded violation of consistency
- Application of the proposed techniques in a detailed case study using multiple real-world simulators
13. Modeling Metasimulation
A metasimulation consists of a set of autonomous pre-existing simulators S1, S2, S3, …, Sn that execute concurrently in an integrated environment
Using a transaction-based approach to modeling metasimulations
Consider each simulator's execution as a sequence of actions (time steps in time-stepped simulators or events in event-based simulators)
Schedule the actions of multiple simulators such that dependencies are preserved
Each simulator is a three-tuple Si = <Ti, Di, Ai> where:
Ti: the type of the simulator (time-stepped or event-based)
Di: the data items that the simulator reads or updates; for each data item d, dom(d) denotes the domain of d, the set of values that d can take
Ai: the sequence of actions the simulator executes
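The three-tuple model above can be sketched as a small data structure. The field and class names are assumptions made for illustration, not the paper's notation.

```python
# Minimal sketch of the simulator model S_i = <T_i, D_i, A_i>.
from dataclasses import dataclass, field
from enum import Enum

class SimType(Enum):
    TIME_STEPPED = "time-stepped"
    EVENT_BASED = "event-based"

@dataclass
class Simulator:
    name: str
    sim_type: SimType                               # T_i
    data_items: dict = field(default_factory=dict)  # D_i: item -> domain dom(d)
    actions: list = field(default_factory=list)     # A_i: sequence of actions

# Example instance (parameter names borrowed from the DrillSim slide):
drillsim = Simulator(
    name="DrillSim",
    sim_type=SimType.TIME_STEPPED,
    data_items={"health": range(0, 11), "walking_speed": range(0, 6)},
)
print(drillsim.sim_type.value)  # prints time-stepped
```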
14. Meta-synchronizer
[Figure: metasimulation architecture. At the meta level, the MetaSynchronizer exchanges dependencies and meta-actions with per-simulator wrappers; at the base level, Simulator i and Simulator j execute actions on data items d, …, d'.]
Meta-synchronizer:
Upon receiving an external action from a simulator Si:
  For all dependent simulators Sj, generate a meta-action
  Post it to the meta-action queue
Upon receiving a request from a simulator Sj at time t:
  Find all meta-actions in the queue destined for Sj with timestamp ≤ t
  Send the meta-actions to Sj
Simulator's wrapper:
At the beginning of each iteration:
  t = current-time
  Send a request to get meta-actions
  Receive meta-actions
  Generate wrapper-actions
At the end of each iteration:
  Send all external actions that have been executed to the meta-synchronizer
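The meta-synchronizer loop sketched in the slide can be written out as follows. The dependency map, the timestamped queue entries, and the method names are assumptions introduced to make the pseudocode concrete; they do not come from the RAISE implementation.

```python
# Sketch of a meta-synchronizer with per-simulator meta-action queues.
from collections import defaultdict

class MetaSynchronizer:
    def __init__(self, dependencies):
        # dependencies: simulator name -> list of dependent simulator names
        self.dependencies = dependencies
        self.queues = defaultdict(list)  # per-simulator meta-action queues

    def on_external_action(self, src_sim, action, timestamp):
        # Generate a meta-action for every dependent simulator and queue it.
        for dst_sim in self.dependencies.get(src_sim, []):
            self.queues[dst_sim].append((timestamp, src_sim, action))

    def on_request(self, sim, t):
        # Deliver all queued meta-actions with timestamp <= t; keep the rest.
        ready = [m for m in self.queues[sim] if m[0] <= t]
        self.queues[sim] = [m for m in self.queues[sim] if m[0] > t]
        return ready

ms = MetaSynchronizer({"CFAST": ["Drillsim"]})
ms.on_external_action("CFAST", "smoke_update", timestamp=3)
ms.on_external_action("CFAST", "temp_update", timestamp=7)
print(ms.on_request("Drillsim", t=5))  # prints [(3, 'CFAST', 'smoke_update')]
```

At the start of each iteration a wrapper would call `on_request` with its current time, and at the end of each iteration report its executed external actions via `on_external_action`.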
15. Metascheduling strategies
Address the synchronization problem by controlling the execution of the simulators' actions to ensure the legality of the resulting schedules
Conservative Scheduling: ensures the legality of schedules by delaying actions so that dependencies are preserved in the concurrent execution of actions from different simulators
Optimistic Scheduling: accepts that violations occur and resolves each violation when it does occur, by aborting the actions that caused it
Hybrid Scheduling: combines the benefits of both the optimistic and conservative strategies
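The three scheduling decisions can be illustrated with a toy sketch: conservative scheduling delays a conflicting action, optimistic scheduling runs it and aborts on a detected violation, and hybrid scheduling picks between the two. The conflict-rate heuristic and its threshold are invented parameters, not the paper's adaptation rule.

```python
# Illustrative (invented) decision functions for the three strategies.
def conservative(action, has_pending_dependency):
    # Delay whenever a dependency has not yet been satisfied.
    return "delay" if has_pending_dependency else "run"

def optimistic(action, violation_detected):
    # Run freely; abort only when a violation is actually detected.
    return "abort" if violation_detected else "run"

def hybrid(action, has_pending_dependency, violation_detected, conflict_rate,
           threshold=0.5):
    # High observed conflict rate -> be pessimistic; otherwise be optimistic.
    if conflict_rate >= threshold:
        return conservative(action, has_pending_dependency)
    return optimistic(action, violation_detected)

print(hybrid("a1", True, False, conflict_rate=0.8))  # prints delay
print(hybrid("a1", True, False, conflict_rate=0.1))  # prints run
```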
16. Relaxed Dependencies
Ideally, dependencies need to be reflected from one simulator into another as soon as an update in one simulator becomes valid in another
In most applications, this ideal behavior results in unnecessary synchronization overhead and loss of concurrency among the simulators
Relaxed dependencies capture the extent to which simulators can deviate from the ideal behavior:
Time (t-bound): a delay condition that states how long the consumer may keep using a value behind the supplier's latest update
Value (v-distance): bounds the difference between the consumer's value of a data item and the supplier's updated value, measured by a user-defined distance function
Number of changes (n-update): captures the maximum number of updates by the supplier that the consumer may miss
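The three relaxation bounds can be checked together: a consumer may keep using a stale value as long as none of the t-bound, v-distance, or n-update limits is exceeded. The function signature and argument names below are assumptions made for illustration.

```python
# Sketch of a combined relaxation-bound check.
def within_bounds(consumer_time, supplier_time, consumer_val, supplier_val,
                  updates_missed, t_bound, v_distance, n_update,
                  distance=lambda a, b: abs(a - b)):
    if consumer_time - supplier_time > t_bound:            # too far behind in time
        return False
    if distance(consumer_val, supplier_val) > v_distance:  # values diverged too much
        return False
    if updates_missed > n_update:                          # missed too many updates
        return False
    return True

# The consumer lags 2 time units and 1 update behind; values differ by 3:
print(within_bounds(10, 8, 40, 43, 1, t_bound=5, v_distance=5, n_update=2))
# prints True
```

When any bound is violated, the synchronizer would force the dependency to be reflected before the consumer proceeds.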
17. A Case Study for Simulation Integration
To validate the proposed reflective architecture
Using three disparate pre-existing simulators:
1. CFAST (Consolidated Model of Fire and Smoke Transport): a fire simulator
Simulates the effects of fire and smoke inside a building and calculates the evolving distribution of smoke, fire gases, and temperature
2. Drillsim: an activity simulator
A multi-agent system that simulates human behavior in a crisis
3. LTESim: a communication simulator
Abstracts the physical layer and performs network-level simulations of 3GPP Long Term Evolution
18. Case study - simulators' properties

Evacuation Simulator: DrillSim [9]
- Simulates a response activity (evacuation)
- Time stepped; agent based
- Open source (in Java)
- Parameters: health profile, visual distance, speed of walking, num. of ongoing calls, etc.
- Output: num. of evacuees, injuries, etc.

Communication Simulator: LTESim [31]
- Performs network-level simulations of 3GPP LTE
- Event based
- Open source (in Matlab)
- Parameters: num. of transmit and receive antennas, uplink delay, network layout, channel model, bandwidth, frequency, receiver noise, etc.
- Output: pathloss, throughput, etc.

Fire Simulator: CFAST [10]
- Simulates the effects of fire and smoke inside a building
- Time stepped
- Black-box (no access to source)
- Parameters: building geometry, materials of construction, fire properties, etc.
- Output: temperatures, pressure, gas concentrations (CO2, etc.)
19. An Example: CFAST - Drillsim Interaction
Interaction between the fire simulation and Drillsim: smoke from the fire can affect someone's health
[Figure: CFAST provides the harmful conditions in each space at any time; in Drillsim these affect the agents' profile (Health) and actions (Tell People).]
21. Inter-dependencies extracted from metamodels
1. A harmful condition in CFAST can affect an individual's health in Drillsim.
2. Agents in Drillsim can communicate information on the fire and its location, increasing the number of ongoing calls (people talk about the crisis) in Drillsim.
3. Harmful conditions in CFAST can affect the evacuation process in Drillsim, e.g. increase walking speed, which maps to user speed in LTEsim.
4. Smoke in CFAST can decrease an agent's visual distance in Drillsim.
5. The number of ongoing communications in Drillsim can affect network pathloss and throughput in LTEsim.
6. Pathloss in LTEsim can be used to determine connectivity/coverage in Drillsim.
7. Information on building layout from CFAST and Drillsim can determine the number of transmit and receive antennas required.
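Inter-dependencies like these can be represented as declarative rules mapping a source simulator's output to a target simulator's input. The variable names and transform functions below are illustrative inventions covering rules 1, 5, and 6; they are not the metamodel's actual encoding.

```python
# Hypothetical dependency rules: (src_sim, src_var, dst_sim, dst_var, transform)
RULES = [
    ("CFAST", "harmful_condition", "Drillsim", "agent_health",
     lambda level: max(0, 10 - level)),        # rule 1: harm reduces health
    ("Drillsim", "ongoing_calls", "LTEsim", "network_load",
     lambda calls: calls),                     # rule 5: calls load the network
    ("LTEsim", "pathloss", "Drillsim", "connectivity",
     lambda db: db < 120),                     # rule 6: pathloss -> coverage
]

def propagate(src_sim, src_var, value):
    """Yield (target_sim, target_var, translated_value) for matching rules."""
    for s, sv, t, tv, fn in RULES:
        if (s, sv) == (src_sim, src_var):
            yield (t, tv, fn(value))

print(list(propagate("CFAST", "harmful_condition", 3)))
# prints [('Drillsim', 'agent_health', 7)]
```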
22. Experiments
[Figure: three plots (a)-(c)]
(a) Average synchronization overhead in different simulation phases
(b) Total execution time in different simulation phases
(c) Synchronization overhead vs. the number of dependencies (in (a) and (b), no. of dependencies = 100)
23. Experiments - conclusion

Each cell gives synch. overhead / total time per strategy (CS, CSR, OS, OSR, HS, HSR):

          CS                 CSR                OS                 OSR                HS                 HSR
CFAST     425.374/2225.626   348.812/2149.945   340.273/2140.273   309.931/2111.844   498.283/2298.475   316.007/2118.918
DrillSim  431.265/2232.235   331.192/2133.457   312.182/2113.165   252.011/2055.888   453.592/2253.698   288.555/2089.155
LTEsim    156.035/1956.530    99.277/1901.371   4887.753/3378.743  749.009/2550.043   344.005/2144.187   221.079/2023.039
Total     1012.674/6414.391  779.281/6188.723   2230.208/7632.181  1310.951/6717.755  1295.581/6696.360  816.641/6231.112
Hybrid Scheduling exhibits superior overall performance to the other approaches
The choice of approach also depends on the simulator; e.g. for event-based simulators with a large number of external events, OS should be avoided
Relaxations always help to get better results in terms of both synchronization overhead and total execution time
24. Thanks
jalalil@uci.edu
http://www.ics.uci.edu/~ljalali/
Editor's Notes
This is a very interesting research proposal that will require knowledge in various domains: simulation, middleware technology, software engineering, and databases!
e.g. update an agent's health in Drillsim based on the harmful condition in CFAST. Geometry Transformer: handles different representations of coordinate systems and resolutions, using a set of guide points in multiple geographies to determine a coordinate transform matrix.