This document describes modeling and verifying a telecommunication application called Depannage using Live Sequence Charts (LSCs) and the Play-Engine tool. Depannage allows users to call for help from emergency services. The application is complex due to its distributed architecture, time constraints, and evolving components.
The authors specify Depannage using LSCs to describe component behaviors and interactions. Key components include Search, which locates emergency providers, and Users, which models user states. Universal LSCs define mandatory component behaviors, while existential LSCs define possible behaviors. The Play-Engine tool is used to simulate, animate, and formally verify the LSC specification.
The methodology captures requirements at a high level using
The document discusses linguistic structures for incorporating fault tolerance into application software. It begins by explaining that as software complexity increases, software faults have become more prevalent and impactful, necessitating fault tolerance at the application level. It then establishes a set of desirable attributes for application-level fault tolerance structures and surveys current solutions, assessing each according to these attributes. The goal is to identify shortcomings and opportunities to develop improved fault tolerance structures.
The document describes an algorithm for tolerating crash failures in distributed systems called the algorithm of mutual suspicion (AMS). AMS allows the backbone of a distributed application to tolerate failures of up to n-1 of its n components. Each node runs one agent of the backbone consisting of 3 tasks: D for the system database, I for monitoring "I'm alive" signals, and R for error recovery. The coordinator periodically sends "I'm alive" messages and assistants reply with acknowledgments. If a node does not receive the expected messages, it enters a suspicion period and may deduce a crashed component and initiate recovery actions.
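The heartbeat-and-suspicion cycle above can be sketched in Python (a minimal illustration; the class, method, and parameter names are assumptions, not the paper's code):

```python
import time

class Assistant:
    """One backbone agent's 'I' task: watches for the coordinator's
    periodic "I'm alive" messages (names and parameters are illustrative)."""

    def __init__(self, period, suspicion_window):
        self.period = period                  # expected heartbeat interval (s)
        self.suspicion_window = suspicion_window
        self.last_heard = time.monotonic()
        self.suspecting = False

    def on_alive(self):
        # An "I'm alive" message arrived: reset the clock, clear suspicion.
        self.last_heard = time.monotonic()
        self.suspecting = False

    def check(self, now=None):
        """Return 'ok', 'suspecting', or 'crashed'."""
        now = now if now is not None else time.monotonic()
        elapsed = now - self.last_heard
        if elapsed <= self.period:
            return "ok"
        if elapsed <= self.period + self.suspicion_window:
            self.suspecting = True            # suspicion period: wait, don't act yet
            return "suspecting"
        return "crashed"                      # deduce a crash; hand over to the R (recovery) task
```

A node only deduces a crash after the full suspicion period elapses, which is what lets AMS avoid false positives on transiently slow links.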
Improved learning through remote desktop mirroring control (Conference Papers)
The document describes a Wireless Stream Management System (WSMS) that allows a moderator (teacher) to remotely manage and control wireless screen mirroring from student devices to support collaborative learning. Key features of WSMS include allowing the teacher to select any student's laptop screen to project, enabling the teacher to remotely control the student's laptop, and distributing presentation content as images to student devices. The system architecture uses various components like a Wireless Screen Sender, Receiver, Administrator and Controller. Performance tests showed the system using under 2 Mbps of bandwidth and latency under 173ms with no major CPU utilization issues.
The document discusses an approach to automatically recover pointcut expressions (PCEs) in evolving aspect-oriented software. It presents an algorithm that derives structural patterns from program elements selected by an original PCE. These patterns capture commonalities between the elements. The patterns are then applied to later versions to suggest new elements that may need to be included in an updated PCE. An evaluation of the approach on three programs found it was able to accurately infer 90% of new elements selected by PCEs in subsequent versions. The approach aims to assist developers in maintaining PCEs as software evolves.
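The pattern-derivation idea can be illustrated with a much-simplified sketch (wildcard prefixes stand in for the paper's structural patterns; all names here are hypothetical):

```python
import fnmatch
import os

def infer_pattern(selected):
    """Derive a simple wildcard pattern from the element names an
    original pointcut expression selected (a toy stand-in for the
    paper's structural-pattern algorithm)."""
    prefix = os.path.commonprefix(selected)
    return prefix + "*" if prefix else "*"

def suggest(pattern, candidates):
    """Apply the inferred pattern to a later version's elements to
    suggest members of the updated pointcut expression."""
    return [c for c in candidates if fnmatch.fnmatch(c, pattern)]
```

Given the elements a PCE matched in version 1, the inferred pattern flags newly added elements in version 2 that share the same structure.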
Dynamic Framework Design for Offloading Mobile Applications to Cloud (iosrjce)
Mobile Cloud Computing (MCC) is an infrastructure in which data and the processing of data are outsourced. MCC integrates cloud computing into the mobile environment and executes applications on the mobile device effectively by partitioning and offloading computation-intensive tasks to external resources (e.g., public clouds). Effective offloading hinges on the decision maker, which determines "when to offload" at execution time. Prior decision-making techniques have their own pros and cons, but they do not support dynamically changing environments and consume considerable time and energy to train on input instances. Moreover, the information transmitted during offloading is not secured. The proposed dynamic framework therefore employs an efficient intelligent classifier that decides whether to offload a mobile application in a dynamically changing environment by estimating the application's on-device and on-server performance. To ensure security, a steganography technique is additionally used within the proposed framework to conceal the transmitted information.
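The "when to offload" decision can be sketched as a toy cost comparison (the formula and every parameter name are illustrative assumptions, not the paper's classifier):

```python
def should_offload(input_size_mb, local_mips, remote_mips,
                   bandwidth_mbps, instructions_m):
    """Toy offloading decision: offload only if the estimated remote
    time (data transfer plus cloud execution) beats the estimated
    on-device execution time. All figures are illustrative."""
    t_local = instructions_m / local_mips              # seconds on the device
    t_transfer = (input_size_mb * 8) / bandwidth_mbps  # seconds to ship the input
    t_remote = instructions_m / remote_mips            # seconds in the cloud
    return (t_transfer + t_remote) < t_local
```

The comparison makes the trade-off concrete: a fast cloud only helps when the input is small enough that the transfer cost does not dominate.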
Truly dependable software systems should be built with structuring techniques able to decompose the software complexity without hiding important hypotheses and assumptions, such as those regarding their target execution environment and the expected fault and system models. A judicious assessment of what can be made transparent and what should be translucent is necessary. This paper discusses a practical example of a structuring technique built with these principles in mind: reflective and refractive variables. We show that our technique offers an acceptable degree of separation of the design concerns, with limited code intrusion; at the same time, by construction, it separates but does not hide the complexity required for managing fault tolerance. In particular, our technique offers access to collected system-wide information and to the knowledge extracted from that information. This can be used to devise architectures that minimize the hazard of a mismatch between dependable software and the target execution environments.
This document summarizes the internship work conducted by Marta de la Cruz Martos at CITSEM within the GRyS group. The internship focused on developing algorithms to analyze energy consumption for smart grids as part of the I3RES project, which aims to integrate renewable energy sources into distributed networks using artificial intelligence. Specifically, the internship involved studying relevant technologies, participating in software component design, developing and implementing algorithms, and preparing reports. The document provides background on distributed systems and databases, describes the work conducted, and presents results and conclusions.
Improvement from proof of concept into the production environment cater for... (Conference Papers)
This document discusses improvements made to the Trust Engine component of an authentication platform to improve performance and scalability. The Proof of Concept system was found to not meet scalability requirements due to the database architecture requiring multiple connections to retrieve and update user data. The improvements included consolidating configuration data, combining user tables, updating the process to perform analysis in memory without database connections, limiting stored login records, and changing to a JSON data format. Performance testing showed the new system completed processes on average 99% faster.
A TLM-based platform to specify and verify component-based real-time systems (ijseajournal)
This paper surveys modeling and verification languages with their pros and cons. Modeling is a dynamic part of the system development process before realization: cost and risk oblige the designer to model a system before production, and modeling gives the designer a more flexible and dynamic image of the realized system. Formal languages and modeling methods are ways to model and verify systems, but each has its own difficulties in specifying systems: some, like TRIO, are very precise but hard to use for complex systems, while others do not support object-oriented design or hardware/software co-design in real-time systems. This paper introduces SystemC and the more abstract method TLM 2.0, which address the problems mentioned above.
The document outlines a proposed framework for multi-task scheduling in distributed systems using mobile agents. The framework aims to overcome drawbacks of existing strategies and ensure efficient execution of applications on distributed systems. It involves 5 phases: 1) Creating the MRAM framework, 2) Selecting the best platform, 3) Handling task dependencies, 4) Overcoming link failures, and 5) Handling machine failures. Results show the framework reduces application turnaround time and improves reliability compared to other approaches.
THE REMOVAL OF NUMERICAL DRIFT FROM SCIENTIFIC MODELS (IJSEA)
Computer programs often behave differently under different compilers or in different computing
environments. Relative debugging is a collection of techniques by which these differences are analysed.
Differences may arise because of different interpretations of errors in the code, because of bugs in the
compilers or because of numerical drift, and all of these were observed in the present study. Numerical
drift arises when small and acceptable differences in values computed by different systems are
integrated, so that the results drift apart. This is well understood and need not degrade the validity of the
program results. Coding errors and compiler bugs may degrade the results and should be removed. This
paper describes a technique for the comparison of two program runs which removes numerical drift and
therefore exposes coding and compiler errors. The procedure is highly automated and requires very little
intervention by the user. The technique is applied to the Weather Research and Forecasting model, the
most widely used weather and climate modelling code.
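The drift-removal idea can be sketched as follows: advance both program versions from the same shared state at every step, so small differences can never be integrated, and any one-step difference larger than a tolerance points at a coding or compiler error rather than drift (an illustrative sketch, not the actual tooling used on the WRF model):

```python
def compare_without_drift(step_a, step_b, state0, n_steps, tol):
    """Relative-debugging sketch: run two versions of the same model
    step-by-step from a SHARED state, so integrated drift cannot
    accumulate; return the steps whose one-step outputs disagree
    beyond tol. Names and the scalar state are illustrative."""
    state = state0
    discrepancies = []
    for i in range(n_steps):
        a = step_a(state)
        b = step_b(state)
        if abs(a - b) > tol:
            discrepancies.append((i, a, b))   # genuine error, not drift
        state = a                             # reference run supplies the shared state
    return discrepancies
```

Tiny floating-point differences stay below the tolerance forever because they are never fed back into the next step, while a real bug shows up at every step.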
Simulation of an Organization of Spatial Intelligent Agents in the Visual C#... (Reza Nourjou, Ph.D.)
The document describes developing a simulator for a community of spatial intelligent agents using Visual C#.NET. It aims to simulate agent interactions and behaviors to test distributed algorithms. The methodology uses threads and delegates in C# to embed multiple agents in a simulated environment. A sample program demonstrates implementing a contract net protocol among three agents, with one agent announcing a task and the others bidding. The simulator allows agents to communicate, respond to messages, and interact with a human, providing a framework to develop and evaluate multi-agent systems using the .NET platform.
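A contract net round of the kind the sample program implements can be flattened into a few lines (Python rather than C#, with the threads and delegates omitted; purely illustrative):

```python
def contract_net(task, bidders):
    """Minimal contract-net round: a manager announces a task, each
    agent returns a bid (here, an estimated cost), and the lowest
    bid wins the award. Agent names and bid functions are made up."""
    bids = {name: bid_fn(task) for name, bid_fn in bidders.items()}
    winner = min(bids, key=bids.get)          # award goes to the cheapest bidder
    return winner, bids
```

In the simulator the announce/bid/award exchanges travel as messages between threads; collapsing them to function calls keeps only the protocol's decision logic.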
A VNF modeling approach for verification purposes (IJECEIAES)
Network Function Virtualization (NFV) architectures are emerging to increase network flexibility. However, this renewed scenario poses new challenges, because virtualized networks need to be carefully verified before being deployed in production environments in order to preserve network coherency (e.g., absence of forwarding loops, preservation of security on network traffic, etc.). Nowadays, model checking tools, SAT solvers, and theorem provers are available for formal verification of such properties in virtualized networks. Unfortunately, most of those verification tools accept input descriptions written in specification languages that are difficult to use for people not experienced in formal methods. Also, in order to enable the use of formal verification tools in real scenarios, vendors of Virtual Network Functions (VNFs) would have to provide abstract mathematical models of their functions, coded in the specific input languages of the verification tools. This process is error-prone, time-consuming, and often outside the VNF developers' expertise. This paper presents a framework designed for automatically extracting verification models from a Java-based representation of a given VNF. It comprises a Java library of classes to define VNFs in a more developer-friendly way, and a tool that translates VNF definitions into the formal verification models of different verification tools.
This academic article discusses the efficiency of distributed systems. It presents a model for measuring efficiency that takes into account the number of processes, nodes, and messages. The key points are:
1. The efficiency of a distributed system depends on the number of processes, nodes, and messages as well as the time taken for internal processing, sending/receiving messages, processing at nodes, and completing the task.
2. As the number of processes, nodes, and messages increases, the overall time increases and efficiency decreases.
3. An example calculation shows that efficiency is reduced from 70% to lower levels as the number of messages increases.
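The efficiency model in the points above can be illustrated with a toy formula (an assumption consistent with the article's description, not its exact equation):

```python
def efficiency(t_internal, n_messages, t_msg, n_nodes, t_node):
    """Toy efficiency measure: useful internal processing time divided
    by total time, where total time also includes per-message and
    per-node overheads. All parameter names are illustrative."""
    total = t_internal + n_messages * t_msg + n_nodes * t_node
    return t_internal / total
```

With 70 time units of useful work, 10 messages at 2 units each, and 5 nodes at 2 units each, efficiency is 70/100 = 0.7; doubling the message count lowers it, matching the trend described above.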
Towards a virtual domain based authentication on MapReduce (redpel dot com)
This document proposes a novel authentication solution for MapReduce (MR) models deployed in public clouds. It begins by describing the MR model and job execution workflow. It then discusses security issues with deploying MR in open environments like clouds. Next, it specifies requirements for an MR authentication service, including entity identification, credential revocation, and authentication of clients, MR components, and data. It analyzes existing MR authentication methods and finds they do not fully address the needs of cloud-based MR deployments. The paper then proposes a new "layered authentication solution" with a "virtual domain based authentication framework" to better satisfy the requirements.
FORMAL MODELING AND VERIFICATION OF MULTI-AGENT SYSTEMS USING WELL-FORMED NETS (csandit)
Multi-agent systems are asynchronous and distributed computer systems. These characteristics also make them discrete-event dynamic systems. It is, therefore, important to analyze the behavior of such systems to ensure that they terminate correctly and satisfy other important properties. This paper presents a formal modeling and analysis of MAS, based on Well-formed Nets, in order to ensure the absence of any undesired or unexpected behavior. To validate our contribution, we consider the timetable problem, which is a multi-agent resource allocation problem.
A metamodel-based approach for the dynamic reconfiguration of component-based... (Madjid KETFI)
A metamodel-based approach for the dynamic reconfiguration of component-based software
M. Ketfi and N. Belkhatir
Software Reuse: Methods, Techniques and Tools: 8th International Conference, ICSR 2004, Madrid, Spain, July 5-9, 2004. Proceedings: Lecture Notes in Computer Science 3107, Springer 2004, ISBN 3-540-22335-5 (pages 264–273).
1. Wireless sensor networks consist of spatially distributed sensor nodes that can sense their environment, process data, and communicate wirelessly. They are useful for monitoring remote structures and environmental changes.
2. In a cluster-based wireless sensor network, sensor nodes are organized into clusters with a cluster head that collects data from nodes in its cluster and sends it to the base station. Identity-based cryptography is used where a node's public key is derived from its identity information like its ID number.
3. An identity-based online/offline digital signature scheme separates the signature generation process into an offline stage and an online stage. The offline stage is pre-computed, and the online stage combines it with the message to produce the final signature.
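The offline/online split can be illustrated with a toy sketch (hash-based stand-ins only; this is emphatically NOT a secure identity-based signature scheme, and every name here is hypothetical):

```python
import hashlib
import secrets

def offline_stage(private_key):
    """Precompute the message-independent part of the signature (a toy
    stand-in for the expensive algebraic work a real IBS scheme does
    before the message is known)."""
    r = secrets.token_bytes(16)               # fresh per-signature randomness
    commitment = hashlib.sha256(private_key + r).hexdigest()
    return {"r": r, "commitment": commitment}

def online_stage(offline_token, message):
    """Cheap per-message step: bind the precomputed token to the
    message. On a sensor node only this part runs at signing time."""
    binding = hashlib.sha256(
        offline_token["commitment"].encode() + message).hexdigest()
    return {"commitment": offline_token["commitment"], "binding": binding}
```

The point of the split is that the offline stage can run while a sensor node is idle or well-powered, leaving only a cheap hash-like operation on the critical path.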
The document discusses key principles of software design including data design, architectural design, user interface design, abstraction, refinement, modularity, software architecture, control hierarchy, structural partitioning, software procedure, and information hiding. These principles provide a foundation for correctly designing software and translating analysis models into implementable designs.
Harnessing the cloud for securely outsourcing large scale systems of linear e... (Muthu Samy)
This document proposes an integrated cloud-based framework for collecting and processing sensory data from mobile phones to support diverse people-centric applications. The framework includes modules for user adaptation, storage, application interfaces, and mobile cloud engines. A prototype is implemented to demonstrate how the framework can reduce mobile device energy consumption while meeting application requirements such as for emergency response systems.
[Webinar Slides] Align Your Marketing Mix to Your Buyers’ Journey: Solution S... (Optify)
Following the successful webinar about the complete Buyers’ Journey, Erez Barak of Optify and Christine Crandell of New Business Strategies, partnered again to explore one of the most important stages in the Journey, the Solution Search.
The Buyers’ Journey is a set of organizing principles for aligning company functions and roles to enable, engage and establish enduring relationships with buyers. The Solution Search stage, a sub-stage of the Buyer Enablement stage, is where prospects are starting to research solutions, offering you the opportunity to be first in their mind. Companies that want to be included in the short list of alternative solutions must know how to market during this stage and what marketing tactics to use.
Watch this webinar to learn:
- An overview of the Buyers’ Journey
- The importance of the Solution Search stage of the Buyers’ Journey
- How to balance your marketing through the Solution Search stage
- How to choose the right marketing tactics to be on the top of the list
Read more at: http://www.optify.net/webinars/align-marketing-mix-buyers-journey-solution-search-stage/
1) The document discusses NASA's Science on Drupal Central (NSODC) project which aims to build community and collaboration among NASA Earth science Drupal developers.
2) NSODC provides resources like code sharing and a learning center to help new Drupal users and aggregate Drupal resources.
3) The project encourages knowledge sharing through meetings, events and online forums to support common interests in Drupal, showcase skills and career growth in working with the platform.
This document provides guidance for newly diagnosed HIV positive individuals on disclosing their status to others. It recommends the following steps:
1. Forgive yourself - having HIV is not a punishment and anyone can get it, so do not blame yourself.
2. Give yourself time - there is no set timeline, some people need weeks or months to come to terms with their diagnosis while others need years, so take the time you need.
3. Choose who to tell wisely - only disclose to those you trust who will be supportive, as it could negatively impact your mental health if the reaction is not positive.
Balance is a key concept in life that refers to maintaining equilibrium amid opposing forces. It requires finding harmony and stability through moderation rather than extremes. Achieving balance often means prioritizing what's most important while making room for other responsibilities and pleasures.
The document summarizes Consorzio FOR.COM's experience with mobile learning (m-learning) through three projects between 2003-2009. It discusses evolving definitions of m-learning, lessons learned around connectivity, devices, and blended learning. FOR.COM aims to increase interactivity and flexibility of m-learning applications to work across different devices and operating systems while experimenting in new fields like just-in-time information and decision making.
Instructional design is the process of developing effective and efficient learning experiences through research and theory. It focuses on learning outcomes rather than technology. The instructional designer analyzes learning needs and designs a system to deliver required instruction. Instructional design bridges the gap between education and technology to ensure concepts are properly designed for e-learning platforms. It involves continuous assessment and uses research in cognition, education psychology, and problem solving to organize resources and enhance learning.
A tlm based platform to specify and verify component-based real-time systemsijseajournal
This paper is about modeling and verification languages with their pros and cons. Modeling is dynamic
part of system development process before realization. The cost and risky situations obligate designer to
model system before production and modeling gives designer more flexible and dynamic image of realized
system. Formal languages and modeling methods are the ways to model and verify systems but they have
their own difficulties in specifying systems. Some of them are very precise but hard to specify complex
systems like TRIO, and others do not support object oriented design and hardware/software co-design in
real-time systems. In this paper we are going to introduce systemC and the more abstracted method called
TLM 2.0 that solved all mentioned problems.
The document outlines a proposed framework for multi-task scheduling in distributed systems using mobile agents. The framework aims to overcome drawbacks of existing strategies and ensure efficient execution of applications on distributed systems. It involves 5 phases: 1) Creating the MRAM framework, 2) Selecting the best platform, 3) Handling task dependencies, 4) Overcoming link failures, and 5) Handling machine failures. Results show the framework reduces application turnaround time and improves reliability compared to other approaches.
THE REMOVAL OF NUMERICAL DRIFT FROM SCIENTIFIC MODELSIJSEA
Computer programs often behave differently under different compilers or in different computing
environments. Relative debugging is a collection of techniques by which these differences are analysed.
Differences may arise because of different interpretations of errors in the code, because of bugs in the
compilers or because of numerical drift, and all of these were observed in the present study. Numerical
drift arises when small and acceptable differences in values computed by different systems are
integrated, so that the results drift apart. This is well understood and need not degrade the validity of the
program results. Coding errors and compiler bugs may degrade the results and should be removed. This
paper describes a technique for the comparison of two program runs which removes numerical drift and
therefore exposes coding and compiler errors. The procedure is highly automated and requires very little
intervention by the user. The technique is applied to the Weather Research and Forecasting model, the
most widely used weather and climate modelling code.
Simulation of an Organization of Spatial Intelligent Agents in the Visual C#....Reza Nourjou, Ph.D.
The document describes developing a simulator for a community of spatial intelligent agents using Visual C#.NET. It aims to simulate agent interactions and behaviors to test distributed algorithms. The methodology uses threads and delegates in C# to embed multiple agents in a simulated environment. A sample program demonstrates implementing a contract net protocol among three agents, with one agent announcing a task and the others bidding. The simulator allows agents to communicate, respond to messages, and interact with a human, providing a framework to develop and evaluate multi-agent systems using the .NET platform.
A VNF modeling approach for verification purposesIJECEIAES
Network Function Virtualization (NFV) architectures are emerging to increase networks flexibility. However, this renewed scenario poses new challenges, because virtualized networks, need to be carefully verified before being actually deployed in production environments in order to preserve network coherency (e.g., absence of forwarding loops, preservation of security on network traffic, etc.). Nowadays, model checking tools, SAT solvers, and Theorem Provers are available for formal verification of such properties in virtualized networks. Unfortunately, most of those verification tools accept input descriptions written in specification languages that are difficult to use for people not experienced in formal methods. Also, in order to enable the use of formal verification tools in real scenarios, vendors of Virtual Network Functions (VNFs) should provide abstract mathematical models of their functions, coded in the specific input languages of the verification tools. This process is error-prone, time-consuming, and often outside the VNF developers’ expertise. This paper presents a framework that we designed for automatically extracting verification models starting from a Java-based representation of a given VNF. It comprises a Java library of classes to define VNFs in a more developer-friendly way, and a tool to translate VNF definitions into formal verification models of different verification tools.
This academic article discusses the efficiency of distributed systems. It presents a model for measuring efficiency that takes into account the number of processes, nodes, and messages. The key points are:
1. The efficiency of a distributed system depends on the number of processes, nodes, and messages as well as the time taken for internal processing, sending/receiving messages, processing at nodes, and completing the task.
2. As the number of processes, nodes, and messages increases, the overall time increases and efficiency decreases.
3. An example calculation shows that a baseline efficiency of 70% falls to lower levels as the number of messages increases.
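The relationship in points 1-3 can be sketched with an assumed cost model (the article's exact formula is not reproduced here): total time grows with per-message overhead, so efficiency, taken as useful time over total time, falls as the number of messages increases.

```python
# Hedged sketch: efficiency under an assumed linear message-overhead model.

def efficiency(useful_time, msg_count, per_msg_overhead):
    total = useful_time + msg_count * per_msg_overhead
    return useful_time / total

# With 7 units of useful work and 1 unit of overhead per message:
print(round(efficiency(7.0, 3, 1.0), 2))   # → 0.7  (the 70% baseline)
print(round(efficiency(7.0, 10, 1.0), 2))  # → 0.41 (lower as messages grow)
```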
Towards a virtual domain based authentication on MapReduce (redpel dot com)
This document proposes a novel authentication solution for MapReduce (MR) models deployed in public clouds. It begins by describing the MR model and job execution workflow. It then discusses security issues with deploying MR in open environments like clouds. Next, it specifies requirements for an MR authentication service, including entity identification, credential revocation, and authentication of clients, MR components, and data. It analyzes existing MR authentication methods and finds they do not fully address the needs of cloud-based MR deployments. The paper then proposes a new "layered authentication solution" with a "virtual domain based authentication framework" to better satisfy the requirements.
FORMAL MODELING AND VERIFICATION OF MULTI-AGENTS SYSTEM USING WELL-FORMED NETS (csandit)
Multi-agent systems are asynchronous, distributed computer systems, which also makes them discrete-event dynamic systems. It is therefore important to analyze the behavior of such systems to ensure that they terminate correctly and satisfy other important properties. This paper presents a formal modeling and analysis of MAS, based on Well-formed Nets, in order to ensure the absence of any undesired or unexpected behavior. To validate our contribution, we consider the timetable problem, which is a multi-agent resource allocation problem.
A metamodel-based approach for the dynamic reconfiguration of component-based... (Madjid KETFI)
A metamodel-based approach for the dynamic reconfiguration of component-based software
M. Ketfi and N. Belkhatir
Software Reuse: Methods, Techniques and Tools: 8th International Conference, ICSR 2004, Madrid, Spain, July 5-9, 2004. Proceedings. Lecture Notes in Computer Science 3107, Springer, 2004, ISBN 3-540-22335-5 (pages 264–273).
1. Wireless sensor networks consist of spatially distributed sensor nodes that can sense their environment, process data, and communicate wirelessly. They are useful for monitoring remote structures and environmental changes.
2. In a cluster-based wireless sensor network, sensor nodes are organized into clusters with a cluster head that collects data from nodes in its cluster and sends it to the base station. Identity-based cryptography is used where a node's public key is derived from its identity information like its ID number.
3. An identity-based online/offline digital signature scheme separates the signature generation process into an offline stage and an online stage. The offline stage is pre-computed, and the online stage combines it with the message to produce the final signature.
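The offline/online split can be illustrated with a toy Schnorr-style signature using tiny, insecure parameters. This is not the paper's identity-based scheme, only the same two-stage structure: the expensive exponentiation happens offline, and the online stage is a cheap hash and modular addition.

```python
# Toy online/offline signature sketch (illustration only, not secure).
import hashlib

p, q, g = 23, 11, 2          # g has order q modulo p
x = 7                        # private key
y = pow(g, x, p)             # public key

def offline_stage(k):
    # Heavy modular exponentiation done before the message is known.
    return pow(g, k, p)      # commitment r

def online_stage(k, r, message):
    # Cheap arithmetic once the message arrives.
    e = int.from_bytes(hashlib.sha256((message + str(r)).encode()).digest(), "big") % q
    s = (k + x * e) % q
    return (r, s, e)

def verify(message, r, s, e):
    # g^s == r * y^e (mod p) holds because s = k + x*e (mod q).
    return pow(g, s, p) == (r * pow(y, e, p)) % p

k = 3
r = offline_stage(k)
r, s, e = online_stage(k, r, "report")
print(verify("report", r, s, e))  # True
```

In a sensor node, the offline stage would run while the node is idle or mains-powered, so that signing a sensed reading online costs almost nothing.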
The document discusses key principles of software design including data design, architectural design, user interface design, abstraction, refinement, modularity, software architecture, control hierarchy, structural partitioning, software procedure, and information hiding. These principles provide a foundation for correctly designing software and translating analysis models into implementable designs.
Harnessing the cloud for securely outsourcing large scale systems of linear e... (Muthu Samy)
Sybian Technologies Pvt Ltd
Final Year Projects & Real Time live Projects
JAVA(All Domains)
DOTNET(All Domains)
ANDROID
EMBEDDED
VLSI
MATLAB
Project Support
Abstract, Diagrams, Review Details, Relevant Materials, Presentation,
Supporting Documents, Software E-Books,
Software Development Standards & Procedure
E-Book, Theory Classes, Lab Working Programs, Project Design & Implementation
24/7 lab session
Final Year Projects For BE,ME,B.Sc,M.Sc,B.Tech,BCA,MCA
PROJECT DOMAIN:
Cloud Computing
Networking
Network Security
PARALLEL AND DISTRIBUTED SYSTEM
Data Mining
Mobile Computing
Service Computing
Software Engineering
Image Processing
Bio Medical / Medical Imaging
Contact Details:
Sybian Technologies Pvt Ltd,
No. 33/10 Meenakshi Sundaram Building,
Sivaji Street,
(Near T. Nagar Bus Terminus)
T. Nagar,
Chennai-600 017
Ph: 044 42070551
Mobile No: 9790877889, 9003254624, 7708845605
Mail Id: sybianprojects@gmail.com, sunbeamvijay@yahoo.com
This document proposes an integrated cloud-based framework for collecting and processing sensory data from mobile phones to support diverse people-centric applications. The framework includes modules for user adaptation, storage, application interfaces, and mobile cloud engines. A prototype is implemented to demonstrate how the framework can reduce mobile device energy consumption while meeting application requirements such as for emergency response systems.
[Webinar Slides] Align Your Marketing Mix to Your Buyers’ Journey: Solution S... (Optify)
Following the successful webinar about the complete Buyers’ Journey, Erez Barak of Optify and Christine Crandell of New Business Strategies, partnered again to explore one of the most important stages in the Journey, the Solution Search.
The Buyers Journey is a set of organizing principles for aligning company functions and roles to enable, engage and establish enduring relationships with buyers. The Solution Search stage, a sub-stage of the Buyer Enablement stage, is where prospects are starting to research solutions, offering you the opportunity to be first in their mind. Companies that want to be included in the short list of alternative solutions must know how to market during this stage and what marketing tactics to use.
Watch this webinar to learn:
- An overview of the Buyers’ Journey
- The importance of the Solution Search stage of the Buyers’ Journey
- How to balance your marketing through the Solution search stage
- How to choose the right marketing tactics to be on the top of the list
Read more at: http://www.optify.net/webinars/align-marketing-mix-buyers-journey-solution-search-stage/
1) The document discusses NASA's Science on Drupal Central (NSODC) project which aims to build community and collaboration among NASA Earth science Drupal developers.
2) NSODC provides resources like code sharing and a learning center to help new Drupal users and aggregate Drupal resources.
3) The project encourages knowledge sharing through meetings, events and online forums to support common interests in Drupal, showcase skills and career growth in working with the platform.
This document provides guidance for newly diagnosed HIV positive individuals on disclosing their status to others. It recommends the following steps:
1. Forgive yourself - having HIV is not a punishment and anyone can get it, so do not blame yourself.
2. Give yourself time - there is no set timeline, some people need weeks or months to come to terms with their diagnosis while others need years, so take the time you need.
3. Choose who to tell wisely - only disclose to those you trust who will be supportive, as it could negatively impact your mental health if the reaction is not positive.
Balance is a key concept in life that refers to maintaining equilibrium amid opposing forces. It requires finding harmony and stability through moderation rather than extremes. Achieving balance often means prioritizing what's most important while making room for other responsibilities and pleasures.
The document summarizes Consorzio FOR.COM's experience with mobile learning (m-learning) through three projects between 2003-2009. It discusses evolving definitions of m-learning, lessons learned around connectivity, devices, and blended learning. FOR.COM aims to increase interactivity and flexibility of m-learning applications to work across different devices and operating systems while experimenting in new fields like just-in-time information and decision making.
Instructional design is the process of developing effective and efficient learning experiences through research and theory. It focuses on learning outcomes rather than technology. The instructional designer analyzes learning needs and designs a system to deliver required instruction. Instructional design bridges the gap between education and technology to ensure concepts are properly designed for e-learning platforms. It involves continuous assessment and uses research in cognition, education psychology, and problem solving to organize resources and enhance learning.
Looking back at the Wikipedian-in-Residence and possibilities with Wikipedia in 2015 (Olaf Janssen)
The Koninklijke Bibliotheek and the Nationaal Archief are the first Dutch heritage institutions to bring a Wikipedian-in-Residence in house. Over the past year, this in-house Wikipedian has built a bridge between the rich collections of the KB and the NA and the reuse of this content on Wikipedia. After a short introduction to the world behind Wikipedia, Tim and Olaf show how the project has increased the knowledge, skills, and awareness of "Wiki and heritage" inside and outside the institutions' walls, and how the content of both institutions has been made more broadly accessible and reusable worldwide. At the end, they look ahead to how the KB and the NA can continue to collaborate with Wikipedia in 2015.
This document summarizes various documents related to implementing data protection and privacy policies, including key points to consider, sample agreements, security documents, an audit certificate, and a company internal policy template. The documents are available for purchase and cover topics such as database transfers, outsourcing data processing, security measures, personnel responsibilities, auditing procedures, and compliance monitoring.
Cells are the basic unit of life that work together in tissues to perform similar functions. Groups of tissues form organs that work together or alone to carry out specific body functions. Organs that work coordinately comprise organ systems to perform major functions needed to sustain the human body as a living organism.
An embryo develops in the womb over several weeks. It receives nutrients from the yolk sac at first, then the umbilical cord. While floating calmly in the uterus, the embryo grows limbs by 16 weeks and can move its hands and feet, though the mother cannot see this. By 24 weeks, the lungs are almost fully formed and the embryo could survive if born, blinking its eyes and sucking its fingers. After 9 months of development from a single cell, the fetus is ready to be born.
The document provides information on setting financial goals and maintaining good financial habits such as organizing financial records, monitoring debt-to-income ratios, understanding credit reports and credit scores, and repairing credit. It emphasizes the importance of writing down goals, tracking spending, paying bills on time, and disputing any incorrect information on credit reports to achieve financial stability and qualify for loans and credit cards.
Progress report Wikipedian-in-Residence national library & archives Netherlan... (Olaf Janssen)
Progress report (in Dutch) on the Wikipedian-in-Residence project of the national library and national archives of the Netherlands dd 19-2-2014
The document discusses various ways that companies can access new technologies, including setting up their own companies, entering into technological alliances, mergers and acquisitions, and obtaining technologies from external sources. It also describes different types of technology providers that can help companies, such as universities, research centers, technology centers, and some companies that develop custom technologies. The document advertises paid documents on its website about how to work with technology providers to transfer technologies.
The world of B2B marketing is filled with unsung heroes. These marketing athletes are hard-working individuals who rely on all available tools and channels to fuel their organizations’ growth. In one day, they can deftly switch from keyword research to landing page design, from writing ad copy to campaign analytics, from strategic planning to coding emails. It’s time to recognize these all-around marketing superstars.
But what makes marketing athletes so successful? What tools do they use? How do they spend their time? What can other B2B marketers learn from them? A first-of-its-kind marketing athlete survey helps answer these questions.
In this webinar you’ll learn:
- What makes marketing athletes successful and how you can replicate success in your organization.
- How marketing athletes divide up their time.
- The tools and resources used to get the job done.
- The challenges they face as they strive to meet even more demanding goals.
Formal Specification for Implementing Atomic Read/Write Shared Memory in Mobi... (ijcsit)
This paper considers the GeoQuorums approach for implementing atomic read/write shared memory in mobile ad hoc networks. This classic problem in distributed computing is revisited in the new setting provided by emerging mobile computing technology. A simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal requirements and design strategies to the new field of mobile computing. The approach is based on well-understood techniques in specification refinement, but the methodology is tailored to mobile applications and helps designers address novel concerns such as logical mobility, invocations, and specific-condition constructs. The proof logic and programming notation of Mobile UNITY provide the intellectual tools required to carry out this task. Quorum systems are also investigated in highly mobile networks in order to reduce the communication cost associated with each distributed operation.
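The quorum idea can be sketched with a majority-quorum replicated register. This is a simplified static-quorum sketch, not the GeoQuorums focal-point construction: because any two majorities of the same replica set intersect, a read always sees the latest completed write.

```python
# Majority-quorum read/write register sketch over 5 replicas.

class Replica:
    def __init__(self):
        self.version, self.value = 0, None

def write(quorum, value, version):
    # Store the value at every replica in a (majority) write quorum.
    for r in quorum:
        if version > r.version:
            r.version, r.value = version, value

def read(quorum):
    # Any majority read quorum intersects every write quorum, so the
    # highest-versioned reply is the latest written value.
    latest = max(quorum, key=lambda r: r.version)
    return latest.value

nodes = [Replica() for _ in range(5)]
write(nodes[:3], "v1", 1)     # write quorum {n0, n1, n2}
print(read(nodes[2:]))        # read quorum {n2, n3, n4} → v1
```

The communication-cost reduction the paper investigates comes from choosing which quorums to contact; the intersection property above is what makes any such choice safe.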
A Model of Local Area Network Based Application for Inter-office Communication (theijes)
This document presents a model for a local area network (LAN) based application for inter-office communication. The authors designed a messenger software to be installed on an organization's LAN to allow staff to communicate and share information efficiently. The software was developed using Java and analyzed using various UML diagrams. It uses a client-server model with the control part installed on the server and messenger part on clients. The software allows users to send short messages, memos, letters and access an electronic bulletin board. It was tested on a campus LAN network and achieved the objectives of enabling free communication between staff during office hours through a centralized and secure system.
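The client-server relay at the heart of such a messenger can be sketched with sockets. This is a Python sketch while the paper's implementation is in Java, and the message format below is invented: the server receives a short message and sends back an acknowledgment, like a delivery receipt.

```python
# Minimal LAN messenger sketch: a one-shot server thread and a client.
import socket, threading

def server(sock):
    conn, _ = sock.accept()
    msg = conn.recv(1024)                 # receive a short message
    conn.sendall(b"ACK: " + msg)          # acknowledge receipt
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))                # any free port on the local interface
srv.listen(1)
threading.Thread(target=server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(("127.0.0.1", srv.getsockname()[1]))
cli.sendall(b"memo: meeting at 10")
reply = cli.recv(1024)
print(reply.decode())                     # ACK: memo: meeting at 10
cli.close()
srv.close()
```

The paper's control part on the server would additionally route messages between clients and maintain the bulletin board; the socket exchange itself is the same pattern.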
This document describes a methodology for simulating heterogeneous embedded systems that include hardware, software, and electromechanical parts. Interfaces were developed to allow a VHDL simulator to communicate with a physical systems simulator and application programs. The interfaces use the C language links of the simulators to establish communication channels and exchange commands/parameter values between the simulated parts in a chained master/slave synchronization scheme. This unified simulation environment allows designers to validate the behavior of the entire system early in the design process before implementing any of its parts.
Availability Assessment of Software Systems Architecture Using Formal Models (Editor IJCATR)
There has been a significant effort to analyze, design, and implement information systems to process information and data and solve various problems. On the one hand, the complexity of contemporary systems and the striking increase in the variety and volume of information have led to a great number of components and elements, and to a more complex structure and organization of information systems. On the other hand, it is necessary to develop systems that meet all of the stakeholders' functional and non-functional requirements. Since evaluating these requirements prior to the design and implementation phases consumes less time and reduces costs, the best time to measure the evaluable behavior of a system is when its software architecture is available. One way to evaluate a software architecture is to create an executable model of it.
The present research used availability assessment and took repair, maintenance and accident time parameters into consideration. Failures of software and hardware components have been considered in the architecture of software systems. To describe the architecture easily, the authors used Unified Modeling Language (UML). However, due to the informality of UML, they utilized Colored Petri Nets (CPN) for assessment too. Eventually, the researchers evaluated a CPN-based executable model of architecture through CPN-Tools.
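The availability figure such an executable model estimates can be approximated analytically: a component's steady-state availability is MTBF / (MTBF + MTTR), and a series system that needs all components is the product of the component availabilities. The numbers below are illustrative, not taken from the paper.

```python
# Steady-state availability sketch for a series software/hardware system.

def availability(mtbf, mttr):
    # Fraction of time the component is up, given mean time between
    # failures (MTBF) and mean time to repair (MTTR).
    return mtbf / (mtbf + mttr)

def series_availability(components):
    # All components are needed, so availabilities multiply.
    a = 1.0
    for mtbf, mttr in components:
        a *= availability(mtbf, mttr)
    return a

# A software component and the hardware it runs on (hours):
parts = [(900.0, 100.0), (950.0, 50.0)]
print(round(series_availability(parts), 3))  # → 0.855
```

A CPN-based model refines this closed-form estimate by simulating the actual failure, repair, and accident-time behavior instead of assuming steady state.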
Application Of UML In Real-Time Embedded Systems (ijseajournal)
UML was designed as a graphical notation for use with object-oriented systems and applications. Because of its popularity, it is now emerging as a modeling language in the field of embedded systems design. The UML notation is useful for capturing requirements, documenting structure, decomposing systems into objects, and defining relationships between objects, and it is very useful for modeling real-time embedded systems. This paper presents the requirements and analysis modeling of a real-time embedded system for a platform-stabilization control application, using the COMET design method with UML notation. Such applications involve the design of electromechanical systems controlled by multiple processors.
The document describes an agent-based prototype for freight train traffic management between Milano and La Spezia stations in Italy. The prototype was developed using CaseLP, an environment for developing multi-agent system prototypes based on logic programming. The prototype models the complex interactions between various entities involved in freight train scheduling and aims to provide decision support for traffic coordinators. Key aspects of the prototype include defining agent classes and behaviors to represent different system actors like terminals and coordinators, and using agent communication to model coordination between agents. The prototype was tested to evaluate the benefits of an agent-based approach for decision support in freight train traffic management.
An approach of software engineering through middleware (IAEME Publication)
The document discusses middleware and its role in facilitating the construction of distributed systems. It outlines some of the key challenges in building distributed systems, such as network communication, coordination between distributed components, reliability, scalability, and heterogeneity. Middleware aims to address these challenges by providing high-level abstractions and services that conceal low-level complexities related to distribution from application developers. The document argues that middleware is important for simplifying distributed system construction and should be a key consideration in software engineering research on distributed systems.
HW/SW Partitioning Approach on Reconfigurable Multimedia System on Chip (CSCJournals)
Due to the complexity and the high performance requirement of multimedia applications, the design of embedded systems is the subject of different types of design constraints such as execution time, time to market, energy consumption, etc. Some approaches of joint software/hardware design (Co-design) were proposed in order to help the designer to seek an adequacy between applications and architecture that satisfies the different design constraints. This paper presents a new methodology for hardware/software partitioning on reconfigurable multimedia system on chip, based on dynamic and static steps. The first one uses the dynamic profiling and the second one uses the design trotter tools. The validation of our approach is made through 3D image synthesis.
1) The document describes the analysis and design of an electronic device to aid navigation for the visually impaired using embedded systems.
2) The device uses sensors like GPS, light sensors, and ultrasonic sensors along with actuators like vibration motors and audio to provide information on location, obstacles, and lighting levels to users.
3) The development of the device involved requirements analysis using structured analysis techniques like data flow diagrams and state transition diagrams, as well as object-oriented design using UML diagrams. Software development is done in C/C++.
This document summarizes four architectural patterns for context-aware systems: WCAM, Event-Control-Action, Action, and an architectural pattern for context-based navigation. It discusses examples, problems addressed, solutions, structures, and benefits of each pattern. The patterns are examined to determine which can best manage complexity and be most extensible for context-aware systems.
A Framework To Generate 3D Learning Experience (Nathan Mathis)
The document discusses a framework called OpenWebTalk (OWT) that was created to generate configurable 3D learning experiences. OWT is a declarative 3D component framework based on XML documents that describe both the formal structure of the virtual world and the complex set of interaction rules that govern user interactions. This framework aims to help fast prototyping and easy building of collaborative applications. It decouples all phases of authoring, allows easy definition and composition of virtual sessions in a component-oriented fashion, and can drive and control interactions to stimulate collaboration. The framework also provides a high-performance 3D rendering engine configurable through XML.
A SYSTEMC/SIMULINK CO-SIMULATION ENVIRONMENT OF THE JPEG ALGORITHM (VLSICS Design)
In recent decades, the functionality of embedded systems and the time-to-market pressure have both been continuously increasing. Simulating an entire system, including both hardware and software, from the early design stages is one of the most effective approaches to improve design productivity, and a large number of research efforts on hardware/software (HW/SW) co-simulation have been made so far. Real-time operating systems have become one of the important components of embedded systems; however, in order to validate the function of the entire system, they have to be simulated together with the application software and hardware. Indeed, traditional verification methods have proven insufficient for complex digital systems: register-transfer-level test benches have become too complex to manage and too slow to execute. New methods and verification techniques have emerged over the past few years; high-level test benches, assertion-based verification, formal methods, and hardware verification languages are just a few examples of the intense research activity driving the verification domain.
A SESERV methodology for tussle analysis in Future Internet technologies - In... (ictseserv)
This document introduces a methodology for analyzing "tussles" that may occur between stakeholders with differing interests when new internet technologies are introduced. It defines tussles as conflicts that can arise at each stage of a technology's adoption and use. The methodology involves: 1) Identifying stakeholder roles and interests for a given functionality, 2) Identifying potential tussles between stakeholders, and 3) Assessing the impact of each tussle on stakeholders and the risk of spillover effects on other functionalities. The methodology aims to help understand how new technologies may affect stakeholders and to design technologies that allow for varying outcomes while avoiding instability and spillovers.
The European Space Agency (ESA) uses an engine to perform tests in its Ground Segment infrastructure, especially the Operational Simulator. This engine uses many different tools to support the development of a regression-testing infrastructure, and these tests perform black-box testing of the C++ simulator implementation. VST (VisionSpace Technologies) is one of the companies that provides these services to ESA, and it needs a tool that automatically infers tests from the existing C++ code instead of writing test scripts manually. With this motivation in mind, this paper explores automatic testing approaches and tools in order to propose a system that satisfies VST's needs.
DYNAMIC AND REAL-TIME MODELLING OF UBIQUITOUS INTERACTION (cscpconf)
This document discusses modeling real-time interaction between a user and a ubiquitous system using dynamic Petri net models. It proposes using Petri nets to model a user's activity as a set of elementary actions. Elementary actions are modeled as Petri net structures that are then composed together through techniques like sequence, parallelism, etc. to form an overall model of user-system interaction. The models can be dynamically adapted based on changes to the user's context. OWL-S ontology is used to describe the dynamic aspects of the Petri net models, especially real-time composition of models. Simulation results validate the approach of dynamically modeling user-system interaction through mutation of Petri net models.
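The sequence-composition step can be sketched with a minimal Petri net representation (an invented structure for illustration, not the paper's OWL-S-described models): an elementary action is a net with an input place, a transition, and an output place, and sequencing fuses the first net's output place with the second net's input place.

```python
# Petri net sequence composition of two elementary actions.

class Net:
    def __init__(self, action):
        # place → transition → place for a single elementary action
        self.places = [f"{action}_in", f"{action}_out"]
        self.arcs = [(f"{action}_in", action), (action, f"{action}_out")]

def sequence(n1, n2):
    seq = Net.__new__(Net)
    fused = n1.places[-1]                 # fuse n1's out-place with n2's in-place
    rename = lambda p: fused if p == n2.places[0] else p
    seq.places = n1.places + n2.places[1:]
    seq.arcs = n1.arcs + [(rename(a), rename(b)) for (a, b) in n2.arcs]
    return seq

grasp, lift = Net("grasp"), Net("lift")
combined = sequence(grasp, lift)
print(combined.places)  # ['grasp_in', 'grasp_out', 'lift_out']
```

Parallel composition would instead add a fork transition feeding both input places; the dynamic adaptation the paper describes amounts to rebuilding such compositions when the user's context changes.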
1. The document presents a case study applying an enterprise configuration management platform, ScriptRock, to a multi-agent robotic system to improve reconfiguration times and simplify troubleshooting.
2. The robotic system consists of unmanned ground vehicles and a ground control station running various software modules. ScriptRock allows validating configurations by encoding requirements as executable tests.
3. An experiment was conducted to gauge the benefits of using ScriptRock for configuration management over existing manual methods on the robotic system. Results showed improved reconfiguration times and simplified troubleshooting.
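The "requirements as executable tests" idea in point 2 can be sketched directly (hypothetical configuration keys, not ScriptRock's actual format): each requirement is a predicate over a node's configuration, and validation reports the failures.

```python
# Configuration validation sketch: requirements encoded as executable checks.

def validate(config, requirements):
    # Return the names of all requirements the configuration fails.
    return [name for name, check in requirements if not check(config)]

node = {"ros_version": "indigo", "telemetry_port": 14550, "gps": True}
requirements = [
    ("ROS version pinned",   lambda c: c["ros_version"] == "indigo"),
    ("telemetry port open",  lambda c: c["telemetry_port"] == 14550),
    ("GPS module present",   lambda c: c.get("gps", False)),
]
print(validate(node, requirements))  # [] → configuration is valid
```

Running such checks after every reconfiguration turns troubleshooting from manual inspection into reading a list of failed requirement names.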
DESIGN PATTERNS IN THE WORKFLOW IMPLEMENTATION OF MARINE RESEARCH GENERAL INF... (AM Publications)
This paper proposes the use of design patterns in a marine research general information platform. The development of the platform involves the design of a complicated system architecture; the creation and execution of research workflow nodes and the design of a visualization library suited to marine users play an important role in the overall software architecture. The paper studies the characteristic requirements of marine research fields and implements a series of frameworks to solve these problems based on object-oriented and design-pattern techniques. These frameworks clarify the relationships between the modules and layers of the software, which communicate through unified abstract interfaces, reducing the coupling between modules and layers. Building these frameworks is significant in advancing the reusability of the software and strengthening the extensibility and maintainability of the system.
Monitoring and Visualisation Approach for Collaboration Production Line Envir... (Waqas Tariq)
In this paper, a tool called SPMonitor is proposed to monitor and visualize the run-time execution of production processes. SPMonitor dynamically visualizes and monitors workflows running in a system, displaying versatile information about currently executed workflows and providing a better understanding of the processes and the general functionality of the domain. Moreover, SPMonitor enhances cooperation between different stakeholders by offering extensive communication and problem-solving features that allow the actors concerned to react more efficiently to anomalies that may occur during workflow execution. The ideas discussed are validated through the study of a real case related to Airbus assembly lines.
The document describes the system design, implementation, and testing of an Android-based mobile application called ABNNS. It defines the design goals, subsystem decomposition, and deployment diagram. It also includes UML class, state, and deployment diagrams. The application allows users to add and view information notifications. It was tested using unit, functional, and integration testing on the emulator and devices. Constraints in development included lack of experience with Android and debugging issues. Recommendations included offering related courses earlier and providing resources to support Android application development.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help mitigate climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests, and test automation can be used to speed up testing.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also approaches that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course, we will explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to implement immediately
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Infrastructure Challenges in Scaling RAG with Custom AI models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Modeling and Verification of a Telecommunication Application using Live Sequence Charts and the Play-Engine Tool
Pierre Combes (France Telecom Research and Development, Paris, France; Pierre.Combes@francetelecom.com), David Harel (The Weizmann Institute of Science, Rehovot, Israel; dharel@weizmann.ac.il), and Hillel Kugler (New York University, New York, NY, USA; kugler@cs.nyu.edu)
Abstract. We apply the language of live sequence charts (LSCs) and the Play-Engine tool to a real-world complex telecommunication service. The service, called Depannage, allows a user to make a phone call and ask for help from a doctor, the fire brigade, a car maintenance service, etc. This kind of service is built on top of an embedded platform, using both new and existing service components. The complexity of such applications stems from their distributed architecture, the various time constraints they entail, and the fact that the underlying systems are rapidly evolving, introducing new components, protocols and associated hardware constraints, all of which must be taken into account. We present the results of our work on the specification, animation and formal verification of the Depannage service, and draw some initial conclusions as to an appropriate methodology for using a scenario-based approach in the telecommunication domain. The complete specification of the Depannage application in LSCs and some animations showing simulation and verification results are made available as supplementary material.1
1 Introduction
The challenging complexity of telecommunication systems, together with a high
demand for rapid deployment, encourages development of innovative techniques
in order to design and deploy new applications in a quick and secure manner [2].
In the telecommunication domain, components play a crucial role. The majority of these components are embedded in a large and complex architecture which involves hard and soft real-time constraints and requirements. Moreover, non-functional requirements, in particular time-dependent properties, also play an
important role.
(Footnote: This research was supported by the European Commission project OMEGA (IST-2001-33522).)
(Footnote 1: http://cs.nyu.edu/~kugler/Depannage/)
A telecommunication application is always built from a set of embedded service components, and in the emerging architecture a challenge is
providing a ubiquitous environment for telecommunication users. This means
that the telecommunication applications should be provided in several contexts
with a high level of quality of service, and always in a comprehensive way to the
end-users. Nowadays, due to openness of the telecommunication architecture, a
multiplicity of services and service features could be provided by several teams
or companies, and must be dynamically added and updated. The consistent
use of components and service features is becoming more critical in order to
ensure that undesired behaviors do not occur [11]. The time seems ripe to go
from ad-hoc techniques for component composition toward more integrated and
formal ones. Such techniques should be based on the use of formal languages for
design and verification. The languages and design models should be readable in
order to facilitate the communication between telecommunication engineers and
specialists in formal verification. A comprehensive animation tool is also very
important in order to enhance the understanding of the model, and in order
to show verification results to engineers and clients [4]. A proposed approach
should enable quick and secure telecommunication service creation, answering
questions like how to build an architecture based on a set of components (reused
or/and shared by several services) in such a way that we can guarantee providing
complete applications respecting quality of service and safety requirements.
2 Live sequence charts and the Play-Engine
Understanding system and software behavior by looking at various “stories”
or scenarios seems a promising approach, and it has focused intensive research
efforts in the last few years. One of the most widely used languages for specifying
scenario-based requirements is that of message sequence charts (MSCs), adopted
long ago by the ITU [15], or its UML variant, sequence diagrams [14]. Sequence
charts (whether MSCs or their UML variant) possess a rather weak partial-order
semantics that does not make it possible to capture many kinds of behavioral
requirements of a system. To address this, while remaining within the general
spirit of scenario-based visual formalisms, a broad extension of MSCs has been
proposed, called live sequence charts (LSCs) [6]. Among other things, LSCs
distinguish between behaviors that must happen in the system (universal) from
those that may happen (existential). A universal chart contains a prechart, which
specifies the scenario which, if successfully executed, forces the system to satisfy
the scenario given in the actual chart body. Existential charts specify sample
interactions between the system and its environment, and must be satisfied by
at least one system run. They thus do not force the application to behave in
a certain way in all cases, but rather state that there is at least one set of
circumstances under which a certain behavior occurs. The distinction between
mandatory (hot) and provisional (cold) applies also to other LSC constructs,
e.g., conditions and locations, thus creating a rich and powerful language, which
among many other things can express forbidden behavior (“anti-scenarios”).
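The universal/existential distinction can be made concrete with a small executable sketch. This is a drastic simplification in Python with invented trace and event names; real LSC semantics (partial orders, hot and cold locations, simultaneous regions) are much richer.

```python
# Simplified view: a universal chart says "whenever the prechart
# occurs, the main chart must follow"; an existential chart says
# "this scenario occurs in at least one run". All names are invented.

def violates_universal(trace, prechart, main_chart):
    """True if some occurrence of the prechart is not immediately
    followed by the main-chart events, in order."""
    p, m = len(prechart), len(main_chart)
    for i in range(len(trace) - p + 1):
        if trace[i:i + p] == prechart and trace[i + p:i + p + m] != main_chart:
            return True
    return False

def satisfies_existential(trace, chart):
    """True if the chart's scenario occurs somewhere in this run."""
    c = len(chart)
    return any(trace[i:i + c] == chart for i in range(len(trace) - c + 1))

trace = ["EstablishSearch", "SetTset", "LegDest(3)", "LegCallReturn"]
assert not violates_universal(trace, ["EstablishSearch"], ["SetTset", "LegDest(3)"])
assert satisfies_existential(trace, ["LegDest(3)", "LegCallReturn"])
```

An anti-scenario is then simply a universal chart whose body can never be satisfied, so any completion of the prechart is a violation.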
In [9, 10] a methodology for specifying and validating requirements, termed
the “play-in/play-out approach”, is described, as well as a supporting tool called
the Play-Engine. According to this approach, requirements are captured by the
user playing in scenarios using a graphical interface of the system to be developed
or using an object model diagram. The user “plays” the GUI by clicking buttons,
rotating knobs and sending messages (calling functions) to objects in an intuitive
manner. By similarly playing the GUI, the user describes the desired reactions
of the system and the conditions that may, must or may not hold. As this is
being done, the supporting tool, the Play-Engine, constructs a formal version
of the requirements in the form of LSCs. Note that it is not always necessary
to spend much time designing a fancy graphical interface. In many cases, it is
enough to use a standard object model diagram. The Play-Engine tool supports class diagrams and allows working with internal objects that are not reflected in the GUI.
Play-out is a complementary idea to play-in, which, rather surprisingly, makes
it possible to execute the requirements directly. In play-out, the user simply plays
the GUI application as he/she would have done when executing a system model,
or the final system implementation, but limiting him/herself to “end-user” and
external environment actions only. While doing this, the Play-Engine keeps track
of the actions and causes other actions and events to occur as dictated by the
universal charts in the specification. Here too, the engine interacts with the GUI
application and uses it to reflect the system state at any given moment. This
process of the user operating the GUI application and the Play-Engine causing
it to react according to the specification has the effect of working with an executable model, but with no intra-object model having to be built or synthesized.
Smart play-out [7] is a powerful technique for executing scenario-based requirements using verification methods. It can be used for driving the execution
of the system, or for checking if a given existential chart can be satisfied without
violating any of the universal charts. Smart play-out is integrated in the Play-
Engine tool and allows developers to apply formal verification methods at early
design stages in a user-friendly manner.
3 Components and System Architecture
3.1 The Telecommunication application
We apply LSCs and the Play-Engine to a telecommunication service called Depannage, provided by France Telecom. The Depannage service allows a user to
make a phone call and ask for the help of a doctor, fire brigade, car maintenance,
etc. The service invocation software first asks for authentication of the calling
user, and then searches for the calling location. Once the calling location is found,
the software searches a database for the numbers of potential service providers
corresponding to the Depannage society members in the vicinity of the caller.
Once various numbers are found, the service tries to connect the caller to one of
the potential called numbers (in a sequential or parallel way). In any case the
caller should be connected to a secretary or to a vocal box. In parallel, a second logic will make periodic location requests to the Depannage society members in order to record their latest locations in the database. The Depannage service
is implemented as a layered application consisting of several components. Each
layer or component is described by a group of scenarios; the connection between
layers is very clean and precise. The objects in each layer communicate only
among themselves and with the objects in the adjacent layers. This architecture
enables applying methodological approaches to break down the complexity of
the system.
3.2 Components and composites
A telecommunication system is based on a set of components — reusable software
units specified by their interfaces. The specification of these interfaces should be
given by the signatures of the required and provided methods and signals, and
by the description of the dynamic behaviors. Components should be reusable,
thus they should be specified independently of any embedding system.
Fig. 1. The architecture of the Depannage application
A composite structure will be specified as a white box by the set of embedded
components and the connections between these components [14]. Such structural
design could use hierarchical composition. The top level of the composite structure will correspond to the complete system provided to the client, in our case
the telecommunication service Depannage.
Fig. 1 shows a partial view of the complete application (using a UML composite structure diagram), the main components involved, and the communication
between these components using ports and connectors.
4 Overall view of a design methodology based on
verification
A classical problem in telecommunication is that of “feature interaction” [11].
Telecommunication infrastructure and applications are in a continuous evolution,
new services and service features are developed and deployed in the network
along with existing ones. They are developed by several teams in parallel, in order
to satisfy new customer requirements. The feature interaction problem occurs
when the introduction of a new service (feature) causes the new system to violate
an existing service requirement. This is a critical problem in telecommunication
— involving significant loss of time and money during testing and operation
phases. It can be properly solved only by identifying the problems during the
design and modeling phases.
To address these issues we present a methodology that supports an incremental paradigm for specifying and developing telecommunication applications.
First, we describe a high-level specification of the service and component behavior, including the behavior of the communication between these components.
This description includes timed constraints. Then the consistency of this high
level specification is validated, and testing is performed with respect to end-to-end requirements. The analysis is performed initially by simulation and animation methods. In a second step, smart play-out is used to formally verify
some of the requirements.
5 High level specification
The wish to specify components in a reusable way requires that the component
specification should be done independently of any embedding architecture. Such
specification should correspond in a universal LSC to an abstract view of the
component, describing how the component will react to events coming from its
provided ports and how (and when) this component will act on its required ports
(execution flows).
For the system — i.e., the complete application — the specification should
be enhanced by universal LSCs describing the communications between these
components. Such LSCs could include time constraints and delays on the communication. The end-to-end requirements are expressed by existential LSCs and
will be validated during the simulation/animation of the model.
In this paper, we will focus our presentation on the Search component, the
Users component and the communication between these components. A detailed
description of the entire model is available online at [5].
5.1 Search component
This component has two ports: SearchService for communicating with the application that will use it, and SearchApi for communicating with platform
components and indirectly with the users and the environment.
6. Fig. 2. First LSC for Search Component - Concrete
Fig. 3. First LSC for Search Component - Symbolic
The universal chart Search1Exact, appearing in Fig. 2, requires that whenever SearchSer1 sends the EstablishSearch method to Search1, as specified
in the prechart, the Search1 port sets the value of Tset to TRUE and then sends
the LegDest(3) method to SearchApi1.
In order to specify this requirement in a generic way, so it will hold for all
other instantiations of the classes SearchService, Search and SearchApi, we
use symbolic instances [12] as shown in the chart Search1 in Fig. 3. Whenever
an instance of class SearchService sends the EstablishSearch method to an
instance of class Search, the Search instance sets the value of Tset to TRUE
and then sends the LegDest(3) method to a searchApi instance which has an
ID that is identical to the ID of the Search instance. This is done by storing
the Search ID using an assignment to variable X7, and in the ellipse above the SearchApi instance specifying the binding condition .ID = x7, meaning that an instance of class SearchApi with ID equal to the value stored in X7 will be bound to this chart, and then later the LegDest(3) method will be sent to it.
Fig. 4. Second LSC for Search Component
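The role of the assignment and binding condition can be pictured with a toy fragment. This is a Python sketch: the dictionary-based object model and the function name are invented stand-ins for the Play-Engine's machinery, while the class names, X7, Tset and LegDest(3) follow the chart.

```python
# Toy illustration of symbolic instances and binding: the chart
# variable X7 stores the Search instance's ID, and the SearchApi
# lifeline is bound to the instance whose ID matches. The dict-based
# "system" below is an invented stand-in for the real object model.

search_apis = [{"class": "SearchApi", "ID": 1}, {"class": "SearchApi", "ID": 2}]

def handle_establish_search(search_instance):
    X7 = search_instance["ID"]               # assignment: store the Search ID
    bound = next(api for api in search_apis  # binding condition: .ID = X7
                 if api["ID"] == X7)
    search_instance["Tset"] = True           # set Tset to TRUE
    return ("LegDest(3)", bound)             # send LegDest(3) to the bound instance

msg, target = handle_establish_search({"class": "Search", "ID": 2})
assert msg == "LegDest(3)" and target["ID"] == 2
```

The point of the symbolic formulation is exactly this lookup: the same chart applies to any instance of the class, with the binding resolved at run time.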
The universal chart Search2, appearing in Fig. 4, specifies a behavioral requirement that is relevant when the SearchApi gets information on the LegCallReturn and forwards it to the Search port. The prechart of Fig. 4 contains a scenario and not a single message as in Fig. 3. The chart will be activated if an instance of class Search sends the LegDest(3) method to a searchApi instance, and this searchApi instance sends the LegCallReturn message back to the Search instance.
Another LSC feature introduced in Fig. 4 is the If-Then-Else construct used
to specify conditional behavior. In the main chart, if the parameter of
LegCallReturn is FALSE (the parameter is stored in variable X337) then Search
sends LegDest(2) to the SearchApi instance and sets the value of Tset to TRUE.
Otherwise, the other part of the subchart is taken, which involves a nested If-
Then-Else construct. Here we branch according to the time that has elapsed
since the LegDest(3) message was sent. If this time is less than 1 time unit, Search sends LegDest(2) and sets the value of Tset to TRUE as before. This corresponds to a situation in the system where a very quick answer by the mobile phone means that we will be connected to its vocal box, a situation which should be avoided in the Depannage service. If the time that has elapsed since the LegDest(3) message was sent is greater than or equal to 1 time unit, the message EstablishSearchReturn(TRUE, Mobile) is sent to the appropriate SearchService instance, corresponding to continuing the process of connecting to the mobile phone.
Fig. 5. The Mobile Phone
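The nested If-Then-Else of Search2 boils down to the following decision logic. This is a hedged sketch: X337, the 1-time-unit threshold and the message names follow the text, while the function itself is invented.

```python
def search2_reaction(leg_call_return_param, elapsed_since_legdest3):
    """Sketch of the nested If-Then-Else in chart Search2: returns the
    message the Search instance sends next, plus any property updates."""
    X337 = leg_call_return_param             # parameter of LegCallReturn
    if X337 is False:
        return ("LegDest(2)", {"Tset": True})          # outer else-branch
    if elapsed_since_legdest3 < 1:
        # a very quick answer means the mobile's vocal box: avoid it
        return ("LegDest(2)", {"Tset": True})
    # slow enough: continue connecting to the mobile phone
    return ("EstablishSearchReturn(TRUE, Mobile)", {})

assert search2_reaction(True, 0.2)[0] == "LegDest(2)"
assert search2_reaction(True, 1.5)[0] == "EstablishSearchReturn(TRUE, Mobile)"
```

The branch on elapsed time is the part that encodes the domain knowledge about quick answers; the chart expresses the same logic declaratively with hot conditions over the clock.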
5.2 The users
We model only a simple view of the user behavior, focusing, for a fixed phone, on three possible states corresponding to user actions: busy, answer with a delay,
or noanswer. The specification of a mobile phone, shown in Fig. 5 introduces an
additional state quickanswer. In reality, if a mobile phone is reachable but in
a disconnected state, the communication will quickly be connected to the vocal box of the phone. This behavior should be taken into account carefully while
designing the service. Some service logics should not connect the calling party
to a vocal box. In the Depannage service we want to be connected to a person who is available, or to a secretary, or in the worst case to the vocal box of the Depannage company, but not to the vocal box of the mobile phone of one of the Depannage service providers.
5.3 The communication view
Fig. 6. Connectors between components Depannage and Search
Fig. 7. Connectors between components Search and Depannage
Fig. 8. Connectors between components Depannage and Location
Developing a new telecommunication application is performed by taking existing components (each such component is already specified by a set of LSCs) and connecting them together. In our methodology this assembly of components is also done by specifying universal LSCs defining the connection between components. Following the architecture diagram, these LSCs will specify the communications between components. Such LSCs for connector behaviors may be simple or complex, depending on time constraints and delays, on the parallelism of thread execution, and on the fact that, in the system architecture, a component port could be connected to several other component ports (for example, the port ApiES of the component ApiCall in Fig. 1).
To specify the communication between two components following an architectural diagram, we have to construct two LSCs for each event. Consider the connection between the components Depannage and Search. We have to express that
the event EstablishSearch required by the component Depannage and provided
by the component Search should go through the port DepannageSearch of the
component Depannage and the port SearchService of the component Search.
This is described in the charts DepToSearch1 and DepToSearch2 in Fig. 6(a),(b).
Similar LSCs are also specified for the return event EstablishSearchReturn in
Fig. 7(a),(b).
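The pair-of-charts pattern effectively defines a routing table from the requiring component, through the two ports, to the providing component. A minimal sketch follows (Python; the routing-table encoding and the route function are invented for illustration, while the port and event names follow Fig. 6):

```python
# Sketch of the two-chart connector pattern: one rule forwards the
# event from the requiring component to its own port, another forwards
# it from that port, across the connector, to the provider. The dict
# encoding is an invented stand-in for the connector LSCs.

ROUTES = {
    # (current location, event) -> next hop
    ("Depannage", "EstablishSearch"): "DepannageSearch",        # DepToSearch1
    ("DepannageSearch", "EstablishSearch"): "SearchService",
    ("SearchService", "EstablishSearch"): "Search",             # DepToSearch2
}

def route(source, event):
    """Follow connector hops until the event reaches a location with
    no further route (the providing component)."""
    hops = [source]
    while (hops[-1], event) in ROUTES:
        hops.append(ROUTES[(hops[-1], event)])
    return hops

assert route("Depannage", "EstablishSearch") == [
    "Depannage", "DepannageSearch", "SearchService", "Search"]
```

Since such routing is fully determined by the architecture diagram, it is exactly the kind of LSC the paper proposes to generate automatically from annotated diagrams.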
The connection between the components Depannage and Location is described in Figs. 8 and 9. In these LSCs we also introduce time delays on the communication. The LSC DepToLoc1 of Fig. 8 specifies that the method SearchLocation will take between 1 and 2 time units. The method SearchLocation is an asynchronous method, designated by the open arrow, in contrast to the closed arrows used for synchronous methods. This time constraint is specified by storing the time
in variable x452 immediately after sending SearchLocation and adding the two hot conditions requiring Time > x452 + 1 and Time <= x452 + 2. A similar requirement that the method SearchLocationReturn will take between 1 and 2 time units is specified in Fig. 9.
Fig. 9. Connectors between components Location and Depannage
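The delay-bound idiom (store the send time, then place two hot conditions on the clock) can be captured as a predicate. This is a sketch only; in the Play-Engine, Time is the engine clock and a false hot condition aborts the run.

```python
# The timing pattern of DepToLoc1: the send time is stored (x452) and
# two hot conditions require the receive to happen strictly more than
# 1 and at most 2 time units later.

def check_delay(send_time, receive_time):
    x452 = send_time
    time_now = receive_time
    # hot conditions: Time > x452 + 1 and Time <= x452 + 2
    return (time_now > x452 + 1) and (time_now <= x452 + 2)

assert check_delay(0, 1.5)        # within the allowed window
assert not check_delay(0, 0.5)    # too fast: first hot condition violated
assert not check_delay(0, 3.0)    # too slow: second hot condition violated
```

Using a strict lower bound and an inclusive upper bound mirrors the chart's `>` and `<=` conditions exactly.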
In some of the cases, describing the connection between components using
LSCs is quite straightforward, as shown in the examples above. We propose
that in the future such LSCs could be derived automatically by the tool using
appropriate annotations on the architecture diagram.
6 Simulation using play-out
Play-out allows a convenient way to debug requirements at an early stage and
to detect problems in the design. For this purpose we can use anti-scenarios,
behavioral requirements that are forbidden in the system. Consider the chart
NoQuickAnswer2 in Fig. 10. It specifies that whenever SinglePhone3 makes a
quick answer by sending the self message UserAction(quickAnswer) and after
that DepSearch1 sends the message EstablishSearchReturn(True, Mobile)
to Depannage1, then the condition FALSE specified in the main chart must hold
— which can never occur — implying that this sequence of messages specified in
the prechart corresponding to a connection to the vocal box of a mobile phone
must never occur.
In play-out mode, if this chart participates in the execution, the prechart will be traced, and if it is completed the user will get a message that the system has aborted due to the violation of a hot condition, as shown in Fig. 11. In this case the violation was caused by a time delay in the APICall, which is triggered by setting the property CondTime of this object to TRUE. In general, once a violation is detected it indicates a problem in the specification or the design of the service, and should be looked into carefully to identify and fix the cause of the violation.
Fig. 10. A forbidden scenario - No connection to the vocal box of a mobile phone
Fig. 11. Violation of a forbidden scenario during play-out
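The abort behavior described above can be mimicked outside the tool by a small trace monitor. The sketch below assumes events arrive as plain strings, an illustrative simplification of the Play-Engine's execution mechanism:

```python
class AntiScenarioMonitor:
    """Tracks a prechart (the event sequence of a forbidden scenario);
    if the prechart completes, the main chart's FALSE hot condition is
    reached and the run must abort, mirroring Fig. 11."""

    def __init__(self, prechart):
        self.prechart = prechart
        self.pos = 0          # how much of the prechart has been traced

    def observe(self, event):
        """Feed one event; return True if the forbidden scenario completed."""
        if event == self.prechart[self.pos]:
            self.pos += 1
        else:
            # restart matching (the event may itself begin the prechart)
            self.pos = 1 if event == self.prechart[0] else 0
        if self.pos == len(self.prechart):
            self.pos = 0
            return True       # hot condition FALSE reached: abort
        return False
```

For the chart of Fig. 10, the prechart would be the two messages UserAction(quickAnswer) followed by EstablishSearchReturn(True, Mobile).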
7 Verification using smart play-out
Smart play-out [7] uses verification methods, mainly model-checking, to execute
and analyze LSCs. There are various modes in which smart play-out can work. In
one of the modes smart play-out functions as an enhanced play-out mechanism,
helping the execution to avoid deadlocks and violations. Thus, in this mode smart
play-out utilizes verification techniques to run programs, rather than to verify
them. In another mode, smart play-out is given an existential chart and asked
if it can be satisfied without violating any of the universal charts. If it manages
to satisfy the existential chart the satisfying run is played out, providing full
information on the execution and reflecting the behavior in the GUI.
Fig. 12. An existential chart implying connection to the vocal box of a mobile phone
Fig. 13. A new feature of forwarding calls
In the Depannage application we mainly used existential charts for specifying
scenarios that should not occur, and then asked smart play-out if they can be
satisfied. If the existential chart was satisfied, this means we have discovered an
error in our specification model, and the execution can provide insights on what
went wrong. A cleaner way would have been to specify these scenarios as anti-
scenarios, as shown in Fig. 10. An enhancement to smart play-out is currently
being developed to support this work-flow.
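The second mode of smart play-out can be pictured as a search for a satisfying run. The following is only a schematic breadth-first search over an abstract transition system (the real tool uses model checking over the LSC semantics); all callback names are assumptions made for illustration:

```python
from collections import deque

def find_satisfying_run(initial, successors, chart_satisfied,
                        violates_universal, bound=50):
    """Search for a run that satisfies an existential chart without
    violating any universal chart.  Returns the event sequence of a
    satisfying run, or None if none exists within the step bound."""
    queue = deque([(initial, ())])
    seen = {initial}
    while queue:
        state, run = queue.popleft()
        if chart_satisfied(state):
            return list(run)              # play out this run in the GUI
        if len(run) >= bound:
            continue
        for event, nxt in successors(state):
            if violates_universal(nxt) or nxt in seen:
                continue                  # prune violating or visited states
            seen.add(nxt)
            queue.append((nxt, run + (event,)))
    return None                           # existential chart unsatisfiable
```

On a toy counter system whose successors increment the state, asking for a run reaching 3 yields three "inc" events; forbidding states above 1 makes the chart unsatisfiable and the search returns None, analogous to the proof of unsatisfiability discussed next.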
Consider the existential chart shown in Fig. 12. It describes a scenario that
implies a user (on Phone1) being connected to the vocal box of a mobile phone
(Phone3), an undesired behavior since then the user does not get a personal
response to his request, as is desired for the Depannage service. Smart play-out proves that, given the universal charts in the model, this scenario cannot be satisfied.
We then added a new feature to our telecommunication model, forwarding calls, shown in Fig. 13, applied smart play-out, and it found a way to satisfy the chart of Fig. 12. The interaction with the new call-forwarding feature allowed an erroneous situation in which a user is connected to a vocal box. A short animation of this behavior is shown in [5].
Fig. 14. Timing Requirements
The current version of smart play-out is still restricted in terms of the language features it supports. Thus, to use it, some restrictions must be placed on the model: no symbolic instances, and only one parameter per signal. We are currently working on lifting these restrictions. We have also abstracted and simplified the model to avoid the well-known state-explosion problem. In a similar manner we have verified timed properties of the application as well, as specified in Fig. 14. The entire model and the reduced versions are all available in [5].
8 Related work and Future directions
Scenario-based specification is very helpful in early stages of development [1],
and is used widely by engineers. A considerable amount of experience has been
gained from it being integrated into the MSC ITU standard [13] and the UML
[14]. The latest versions of the UML recognized the importance of scenario-based
requirements, and UML 2.0 sequence diagrams have been significantly enhanced
in expressive capabilities, inspired in part by the LSCs of [6]. In [8], we report
on the methodological experience gained by using LSCs and the Play-Engine in
several industrial case studies. (We briefly mention the Depannage application
too.)
Performance requirements — the number of requests that a system can man-
age — are very important in telecommunication applications but are not consid-
ered in this work. Simulation techniques based on queuing theory can be used for
such performance evaluation. These techniques are, in many tools, based on the
description of dynamic behavior as execution flows between components and ma-
chines. Thus, LSCs seem to be a suitable language for integrating performance
evaluation and formal verification [3].
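As a toy illustration of the queuing-theoretic simulation mentioned above (entirely separate from the LSC model; all names here are ours), a single-server M/M/1 queue can estimate the mean time a request spends in the system:

```python
import random

def mm1_mean_sojourn(arrival_rate, service_rate, n_requests=100_000, seed=0):
    """Simulate an M/M/1 queue (Poisson arrivals, exponential service,
    one server, FIFO) and return the mean sojourn time per request.
    Analytically this approaches 1 / (service_rate - arrival_rate)."""
    rng = random.Random(seed)
    arrival = 0.0          # clock of the current arrival
    server_free_at = 0.0   # when the single server next becomes idle
    total_sojourn = 0.0
    for _ in range(n_requests):
        arrival += rng.expovariate(arrival_rate)
        start = max(arrival, server_free_at)       # wait if server is busy
        server_free_at = start + rng.expovariate(service_rate)
        total_sojourn += server_free_at - arrival  # waiting + service time
    return total_sojourn / n_requests
```

With an arrival rate of 0.5 and a service rate of 1.0 requests per time unit, the analytic mean sojourn is 2.0 time units, and the simulated value comes out close to it. A full performance study would of course derive arrival and service distributions from the actual execution flows between components.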
References
1. R. Alur, G.J. Holzmann, and D. Peled. An analyzer for message sequence charts.
Software Concepts and Tools, 17(2):70–77, 1996.
2. R. Castanet, A. Cavalli, P. Combes, P. Laurencot, M. MacKaya, A. Mederreg,
W. Monin, and F. Zaidi. A multi-service and multi-protocol validation platform-
experimentation results. In TestCom, volume 2978 of Lect. Notes in Comp. Sci.,
pages 17–32. Springer-Verlag, 2004.
3. P. Combes, F. Dubois, W. Monin, and D. Vincent. Looking for better integration
of design and performance engineering. In R. Reed, editor, SDL Forum, volume
2708 of Lect. Notes in Comp. Sci., pages 1–17. Springer-Verlag, 2003.
4. P. Combes, F. Dubois, and B. Renard. An Open Animation Tool: Application
to Telecommunications Systems. Computer Networks, 40(5):599–620, December
2002.
5. P. Combes, D. Harel, and H. Kugler. Supplementary material on the Depannage application. http://cs.nyu.edu/~kugler/Depannage/.
6. W. Damm and D. Harel. LSCs: Breathing life into message sequence charts. Formal
Methods in System Design, 19(1):45–80, 2001. Preliminary version appeared in
Proc. 3rd IFIP Int. Conf. on Formal Methods for Open Object-Based Distributed
Systems (FMOODS’99).
7. D. Harel, H. Kugler, R. Marelly, and A. Pnueli. Smart play-out of behavioral requirements. In Proc. 4th Intl. Conference on Formal Methods in Computer-Aided Design (FMCAD'02), Portland, Oregon, volume 2517 of Lect. Notes in Comp. Sci., pages 378–398, 2002. Also available as Tech. Report MCS02-08, The Weizmann Institute of Science.
8. D. Harel, H. Kugler, and G. Weiss. Some Methodological Observations Resulting
from Experience Using LSCs and the Play-In/Play-Out Approach. In Proc. Sce-
narios: Models, Algorithms and Tools, volume 3466 of Lect. Notes in Comp. Sci.
Springer-Verlag, 2005.
9. D. Harel and R. Marelly. Come, Let’s Play: Scenario-Based Programming Using
LSCs and the Play-Engine. Springer-Verlag, 2003.
10. D. Harel and R. Marelly. Specifying and Executing Behavioral Requirements: The
Play In/Play-Out Approach. Software and System Modeling (SoSyM), 2(2):82–107,
2003.
11. L. Logrippo and D. Amyot. Feature Interactions in Telecommunications and Soft-
ware Systems VII. IOS Press, 2003.
12. R. Marelly, D. Harel, and H. Kugler. Multiple instances and symbolic variables
in executable sequence charts. In Proc. 17th Ann. ACM Conf. on Object-Oriented
Programming, Systems, Languages and Applications (OOPSLA’02), pages 83–100,
Seattle, WA, 2002.
13. ITU-TS Recommendation Z.120 (11/99): MSC 2000. ITU-TS, Geneva, 1999.
14. UML. Documentation of the unified modeling language (UML). Available from
the Object Management Group (OMG), http://www.omg.org.
15. ITU-TS Recommendation Z.120: Message Sequence Chart (MSC). ITU-TS, Geneva, 1996.