This document proposes a metamodel for modeling reputation-based multi-agent systems using an adaptation of the ArchiMate enterprise architecture modeling framework. It describes a case study applying this metamodel to model an electrical distribution critical infrastructure system. Key elements of the metamodel include:
- Representing agents and their behaviors through policies that integrate both behavior and trust components
- Modeling trust relationships between agents using a reputation-based trust model
- Illustrating the metamodel layers and components on a system that detects weather alerts and broadcasts messages to the public through various channels like SMS or social media
This document proposes an extension of the ArchiMate enterprise architecture framework to model multi-agent systems for critical infrastructure governance. The authors develop a responsibility-driven policy concept and metamodel layers to represent agent behavior and organizational policies across technical, application, and organizational layers. The approach is illustrated through a case study of a financial transaction processing system.
This document proposes enhancements to the Role-Based Access Control (RBAC) model by integrating the concept of responsibility. It summarizes the existing RBAC model and user-role/permission-role assignment processes. It then presents a responsibility model built around three concepts: an employee's obligations derived from responsibilities, the rights required to fulfill obligations, and the employee's commitment to fulfill obligations. The paper argues RBAC could be improved by incorporating acceptance of responsibility within the role assignment process. It proposes integrating the responsibility model with RBAC to address identified weaknesses and modeling the integrated model using the OWL ontology language.
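The idea of gating role assignment on acceptance of responsibility can be sketched in a few lines. The following toy model is illustrative only (class, role, and permission names are invented, not taken from the paper): a user-role assignment takes effect only once the employee has committed to the responsibility behind the role.

```python
class ResponsibilityAwareRBAC:
    """Toy RBAC core extended with responsibility acceptance.
    All names here are illustrative, not from the paper."""

    def __init__(self):
        self.role_perms = {}   # role -> set of permissions
        self.user_roles = {}   # user -> set of accepted roles

    def grant(self, role, permission):
        self.role_perms.setdefault(role, set()).add(permission)

    def assign(self, user, role, accepted):
        # The proposed extension: the assignment only becomes effective
        # once the employee has accepted the underlying responsibility.
        if not accepted:
            raise ValueError(f"{user} has not accepted responsibility for {role!r}")
        self.user_roles.setdefault(user, set()).add(role)

    def permissions(self, user):
        perms = set()
        for role in self.user_roles.get(user, set()):
            perms |= self.role_perms.get(role, set())
        return perms

rbac = ResponsibilityAwareRBAC()
rbac.grant("payroll-clerk", "read:salaries")
rbac.assign("alice", "payroll-clerk", accepted=True)
print(rbac.permissions("alice"))  # {'read:salaries'}
```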
The document provides an overview of several agent-oriented software engineering methodologies, including GAIA, INGENIAS, MaSE, PASSI, Prometheus, Tropos, and ADEM. It discusses the concepts and properties supported by each methodology, their modeling approaches and notations, development lifecycles, and their pragmatic considerations in terms of available resources and tooling. The methodologies generally support common multi-agent concepts but provide different levels of guidance, modeling support, and tooling throughout the development process.
Multi-agent systems can be viewed as a software architecture style consisting of autonomous components called agents. The agents interact through message passing according to a predefined protocol. There are different organizational styles for multi-agent systems including hierarchical, flat, subsumption, and modular organizations. Effective multi-agent systems require specially designed communication protocols that fit the agent architecture, organization, and tasks. Standard communication languages and protocols are increasingly used to facilitate conversations between agents from different systems.
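As a minimal illustration of this architectural style (agent names, the performative tags, and the message triple format are all invented for the sketch), agents can be modeled as autonomous components that interact only by posting messages to each other's inboxes according to a fixed protocol:

```python
from collections import deque

class Agent:
    """Minimal autonomous component: reacts only to messages in its inbox."""

    def __init__(self, name):
        self.name = name
        self.inbox = deque()

    def send(self, other, performative, content):
        # Messages follow a fixed (performative, sender, content) protocol.
        other.inbox.append((performative, self.name, content))

    def step(self):
        # Process one message per step; reply to requests with an inform.
        if not self.inbox:
            return None
        performative, sender, content = self.inbox.popleft()
        if performative == "request":
            return ("inform", sender, f"handled {content}")
        return None

a, b = Agent("a"), Agent("b")
a.send(b, "request", "weather-alert")
reply = b.step()
print(reply)  # ('inform', 'a', 'handled weather-alert')
```

Standard agent communication languages such as FIPA ACL follow this same pattern, with a richer, standardized set of performatives.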
This document proposes a conceptual trusted incident reaction architecture based on a multi-agent system. The architecture is designed to dynamically and flexibly react to security incidents across an enterprise network. It incorporates the concept of trust into the decision-making process for determining and deploying appropriate security responses. The architecture is illustrated using a case study of a medical application distributed across buildings, a campus, and metropolitan area networks.
Alignment of ReMMo with RBAC to manage access rights in the frame of enterpri... (christophefeltus)
The document proposes aligning a Responsibility metamodel (ReMMo) with the Role-Based Access Control (RBAC) model to better manage access rights based on employee responsibilities within an enterprise architecture. It first defines the ReMMo to represent business responsibilities and related access rights. ReMMo is then integrated with the ArchiMate enterprise architecture framework. Finally, the paper proposes aligning ReMMo and RBAC and provides a reference model for engineering access rights based on aligning business roles, responsibilities, and RBAC roles. This approach uses responsibility as a pivot to integrate business and application layer access rights requirements.
The IT-GRC platform is a solution based on the distributed-systems paradigm, built on multi-agent systems (MAS) in its different parts, namely the user interface, the static and dynamic configuration of organization management profiles, the choice of the best repository, and the processing of processes. It takes advantage of the autonomy and learning capabilities of MAS as well as their high-level communication and coordination. However, these technological components are difficult to manipulate, or users lack the skills needed to use them correctly. In this situation, modeling a communication architecture is necessary in order to adapt the platform's functionalities to users' needs. To achieve these goals, a functional and intelligent communication architecture must be developed: one that is adaptable and able to provide a support framework allowing access to system functionalities regardless of physical and time constraints.
Can “Feature” be used to Model the Changing Access Control Policies? (IJORCS)
Access control policies (ACPs) regulate access to data and resources in information systems. These ACPs are framed from the functional requirements and the organizational security and privacy policies. Including ACPs in the early phases of software development has been found beneficial, leading to more secure information systems. Many approaches are available for including ACPs in the requirements and design phases; they rely on UML artifacts, Aspects, and also Features for this purpose. But the earlier modeling approaches are limited in expressing evolving ACPs driven by organizational policy changes and business process modifications. In this paper, we analyze whether "Feature", defined as an increment in program functionality, can be used as a modeling entity to represent evolving access control requirements. We discuss the two prominent approaches that use Features in modeling ACPs and provide a comparative analysis of the suitability of Features in the context of changing ACPs. We conclude with our findings and provide directions for further research.
This document discusses building a responsibility model using modal logic. It begins with a literature review of existing policy models and engineering methods related to concepts of accountability, capability and commitment. It identifies that while some concepts like rights and roles are commonly addressed, models do not fully cover all responsibility components. The document then proposes a preliminary responsibility model and defines the main concepts of capability, accountability and commitment. It suggests a formalization of these concepts using deontic logic to help analyze organizational structures and policies for consistency and problems.
Presenting an Executable Model of Enterprise Architecture for Evaluation of R... (Editor IJCATR)
The document presents a method for creating an executable model of enterprise architecture diagrams to evaluate reliability. It transforms UML collaboration diagrams into colored Petri nets using an algorithm. This allows simulation of the diagrams to identify potential reliability issues early in the planning process. It aims to avoid high costs of implementation by improving architectural artifacts. The key steps are:
1) Using C4ISR framework and UML diagrams to describe enterprise architecture.
2) Transforming collaboration diagrams to colored Petri nets using an algorithm that represents messages as transitions and senders/receivers as places.
3) Annotating the Petri net model with reliability data to enable simulation and evaluation of reliability.
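The mapping in step 2 can be sketched as follows. The triple input format and the function name are assumptions made for illustration; a real colored Petri net would additionally carry color sets, markings, and arc inscriptions.

```python
def collab_to_petri(messages):
    """Skeleton of the collaboration-diagram-to-Petri-net mapping:
    each message becomes a transition; each sender/receiver a place.
    `messages` is a list of (sender, message_name, receiver) triples."""
    places, transitions, arcs = set(), [], []
    for sender, msg, receiver in messages:
        places.update((sender, receiver))
        transitions.append(msg)
        arcs.append((sender, msg))    # input arc: sender place -> transition
        arcs.append((msg, receiver))  # output arc: transition -> receiver place
    return places, transitions, arcs

places, transitions, arcs = collab_to_petri([
    ("Client", "request", "Server"),
    ("Server", "response", "Client"),
])
```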
This document proposes a responsibility modeling language (ReMoLa) to align access rights with business process requirements. ReMoLa is a responsibility-centered meta-model that integrates concepts from the business and technical layers, with the concept of employee responsibility bridging the two. It incorporates four types of obligations from the COBIT framework to refine employee responsibilities and better assign access rights. ReMoLa maps responsibilities to roles in the RBAC model to leverage its advantages for access right management while ensuring responsibilities align with business tasks and employee commitment.
The document discusses key concepts in software design engineering including:
- Design should implement requirements from analysis and be understandable.
- Qualities like modularity, appropriate data structures, and independent components improve design.
- Fundamental concepts like abstraction, architecture, patterns, and modularity compartmentalize a design.
- Design principles guide creating a design that is traceable, reusable, and accommodates change.
This document discusses architectural integration styles for large-scale enterprise software systems. It proposes using architectural styles as a way to generalize common integration solutions at the enterprise system level, similar to how styles are used in traditional software architecture. The document defines key terms and presents a structure for describing architectural integration styles. It then describes several example styles, and presents a case study applying the style selection process to an energy company's system integration project. The goal is to provide an approach for selecting integration solutions based on the characteristics of existing systems and desired quality attributes.
Graph-Based Algorithm for a User-Aware SaaS Approach: Computing Optimal Distr... (IJERA Editor)
As a tool to exploit economies of scale, Software as a Service (SaaS) cloud models promote multi-tenancy, the notion of sharing instances among a large group of tenants. However, multi-tenancy only satisfies requirements that are common to all tenants, and tenants themselves hesitate about sharing. To address this problem, the present paper proposes a user-aware approach for SaaS models using Rich-Variant Components. The main contribution of this approach is a framework, summarized in a graph-based algorithm, enabling the deduction of an optimal distribution of instances over an application's tenants. To illustrate and evaluate the framework, the approach is applied to a SaaS application for private school management.
Jobst Landgrebe: The HL7 Services Aware Interoperability Framework (SAIF) (Barry Smith)
The document summarizes the Health Level 7 (HL7) approach to semantic interoperability and its Services Aware Interoperability Framework (SAIF). It finds that HL7's previous approach using the Reference Information Model (RIM) failed due to disconnection from users, lack of proper foundations, and technical issues. It also finds that the SAIF does not meet the needs of an interoperability framework and will not overcome HL7's crisis because it fails to realize necessary principles, has an inconsistent architecture, and cannot be instantiated. The authors recommend replacing SAIF with a new approach based on fundamental reassessment and using existing standards.
Multi-agent-based methodologies have become an important subject of research in advanced software engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. To answer this question, a framework with several criteria is applied in this paper for the comparative analysis of existing multi-agent system methodologies. The results of the comparison of two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of multi-agent-based software engineering methodology.
A Framework for Engineering Enterprise Agility (rahmanmokhtar)
This document presents a framework for engineering enterprise agility that uses ontologies, viewpoints, and agent-based middleware. The framework aims to enable enterprises to meaningfully share and communicate information between distributed systems. It proposes an architecture with components that utilize HTTP and integrate ontologies, viewpoints, and agent-middleware using open standards to promote semantic interoperability. The framework is intended to support the growth and agility of heterogeneous distributed enterprise systems.
This document discusses the challenges of requirements engineering for cross-organizational enterprise resource planning (ERP) implementations. It proposes analyzing coordination requirements from a coordination theory perspective to better align ERP systems with business needs. The document inventories coordination mechanisms supported by ERP packages and analyzes how mismatches between business flexibility needs and ERP rigidity can be addressed. Explicitly specifying coordination mechanisms may help requirements engineers find a match between an ERP system's coordination support and the mechanisms selected by cooperating organizations.
Intelligent Buildings: Foundation for Intelligent Physical Agents (IJERA Editor)
FIPA is an IEEE Computer Society standards organization that promotes agent-based technology and the interoperability of its standards with other technologies. In the design phase of intelligent buildings, it is essential to manage many services and facilities, and multi-agent systems are a good tool for managing them. In this paper, we first give a general description of the features and elements of multi-agent systems described by the Foundation for Intelligent Physical Agents (FIPA). Secondly, we focus on the architectures of these multi-agent systems. Finally, we propose a multi-agent system design to see its application in the design of a detached house where the lighting, air conditioning, and security systems will be integrated.
This document summarizes four architectural patterns for context-aware systems: WCAM, Event-Control-Action, Action, and architectural pattern for context-based navigation. It discusses examples, problems addressed, solutions, structures, and benefits of each pattern. The patterns are examined to determine which can best overcome complexity and be more extensible for context-aware systems.
Alternating Current Machines - Synchronous Machines (Talia Carbis)
This document provides an overview of synchronous machines including:
- Synchronous machines operate at synchronous speed and lock into the rotating magnetic field produced by the stator.
- The rotor is a magnet that is dragged along for the ride as the rotating magnetic field rotates.
- Torque is produced as the magnetic fields of the rotor and stator interact. The torque allows the motor to operate at a constant synchronous speed under varying load.
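The constant operating speed described above follows directly from the synchronous-speed relation Ns = 120f/P (rpm), where f is the supply frequency in Hz and P the number of poles:

```python
def synchronous_speed_rpm(f_hz, poles):
    # Ns = 120 f / P: the rotor locks to this speed regardless of load
    # (within the machine's pull-out torque limit).
    return 120.0 * f_hz / poles

print(synchronous_speed_rpm(50, 4))  # 1500.0 rpm
print(synchronous_speed_rpm(60, 2))  # 3600.0 rpm
```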
The document discusses magneto-optical current transformers (MOCTs). MOCTs use the Faraday effect to measure current non-invasively using light. They have two main parts: an optical sensor and electronic processing circuit. The optical sensor contains a polarizer, optical glass prism, and analyzer. Polarized light passes through the prism, whose rotation is proportional to the current induced magnetic field. This intensity-modulated light is converted to an electric signal. MOCTs offer advantages like isolation, wide bandwidth and no saturation, but also have disadvantages like temperature sensitivity and insufficient accuracy for power systems currently.
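The measurement principle rests on the Faraday rotation angle, commonly written as theta = V * B * L, where V is the Verdet constant of the optical glass, B the magnetic flux density along the light path, and L the path length. A quick sketch of the relation (the numeric values below are purely illustrative, not data for any particular MOCT):

```python
import math

def faraday_rotation_deg(verdet_rad_per_t_m, b_tesla, path_m):
    # theta = V * B * L; the analyzer converts this polarisation rotation
    # into an intensity change proportional to the measured current.
    return math.degrees(verdet_rad_per_t_m * b_tesla * path_m)

# Illustrative values only: V ~ 30 rad/(T*m) for a dense flint glass,
# B = 0.05 T, L = 0.1 m light path.
theta = faraday_rotation_deg(30.0, 0.05, 0.1)
```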
This document discusses several types of electric motors: AC series motors, universal motors, stepper motors, and shaded pole motors. It provides details on the construction and operation of universal motors and stepper motors. Universal motors can operate on either AC or DC power because the rotor and stator windings are connected in series. Stepper motors rotate in precise angular increments in response to applied digital pulses, making them well-suited for applications requiring precise positional control like printers and CNC machines. The document compares advantages and disadvantages of stepper motors.
The document discusses the design of a high voltage step-up DC-DC converter using coupled inductor and switched capacitor techniques. It aims to achieve a high step-up voltage gain by charging capacitors in parallel and discharging them in series through a coupled inductor. This allows an input voltage of 24 volts to output 400 volts at an appropriate duty ratio. The converter design is analyzed and its simulation results in a 400 volt output voltage without ripples are presented. A plan of action with timeline is also provided to fabricate and test the converter.
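To see why a plain boost stage is not enough here, consider the required gain: 400 V from 24 V is a ratio of about 16.7. An ideal conventional boost converter, with gain M = 1/(1 - D), could only reach that at an impractically high duty ratio, which is what motivates adding the coupled-inductor and switched-capacitor stages (this back-of-the-envelope check is ours, not from the paper):

```python
vin, vout = 24.0, 400.0
gain = vout / vin                  # required step-up ratio

# Ideal conventional boost: M = 1 / (1 - D)  =>  D = 1 - 1/M.
duty_plain_boost = 1.0 - 1.0 / gain

print(round(gain, 2))              # 16.67
print(round(duty_plain_boost, 3))  # 0.94 -- impractically high
```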
The document describes a Magneto Optic Current Transducer (MOCT) which uses the Faraday effect to measure electric current. It consists of an optical sensor near the current carrying conductor, fiber optic cables, and a signal processing unit. The sensor contains polarized light which rotates proportionally to the current. This rotation is measured and transmitted via fiber optics to the processing unit. Key advantages over conventional transformers include increased safety, simpler insulation, and immunity to electromagnetic interference. While MOCTs provide benefits, their accuracy is currently insufficient for power system applications.
Digital testing of high voltage circuit breakers (neeraj prasad)
Digital testing of high voltage circuit breakers provides several advantages over physical testing. It allows evaluation of future standards, reduction of full-scale laboratory testing, and identification of network issues. The digital process develops a software model of the circuit breaker using data from standard physical tests. This model then provides insights without additional physical testing. While digital testing is more costly and time-consuming initially, it offers opportunities to optimize circuit breaker design and monitor performance over time.
A transformer is a static device that changes alternating current (AC) at one voltage level to AC at another voltage level through electromagnetic induction. It consists of two coils, the primary and secondary windings, wrapped around a laminated iron core. When an alternating current is applied to the primary winding, it produces an alternating magnetic field that induces a voltage in the secondary winding. This allows the transformer to step up or step down voltages without changing the frequency. The transformer transfers power between its two coils through electromagnetic coupling between the coils wound around the iron core.
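The step-up/step-down behavior follows the ideal turns-ratio relation Vs/Vp = Ns/Np, with the frequency unchanged. A minimal numeric illustration (the winding counts and voltages are arbitrary example values):

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    # Ideal transformer: Vs / Vp = Ns / Np; frequency is unchanged.
    return v_primary * n_secondary / n_primary

print(secondary_voltage(230.0, 1000, 100))  # 23.0  (10:1 step-down)
print(secondary_voltage(24.0, 100, 1000))   # 240.0 (1:10 step-up)
```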
Running head NETWORK DIAGRAM AND WORKFLOW 1 NETWORK DIAGRAM AN.docx (jeanettehully)
Network and Workflow for Data Analytics Company
Name: sunil patel
MIT 681: capstone Assignment 4
Every company requires a workflow that is consistent with its network in order to achieve productivity and value addition. Different companies use different network setups depending on the services and products they offer. The main idea behind formulating a network is to map the company's workflow onto the available resources, and the network design must guarantee effective connection among the staff members. The relationship between a network diagram and a company's workflow rests on maintaining communication among the existing staff, and the achievement of the company's objectives depends on the nature and performance of the network as well as the workflow of the respective company (Kumar & Kirthika, 2017). This paper outlines the design and architecture of a good-practice network and workflow for a big data company.
Implementing the network for a data analytics company requires planning based on the requirements and user roles. Connecting the staff is the main idea in a network architecture. A data analytics company will need data analysts for decision making and for identifying opportunities in the market sector (Kumar & Kirthika, 2017). A good network workflow will guarantee improvement in the company's service offering, and a highly skilled team will be required to achieve the objectives using the network workflow.
A data analytics company is composed of the director of analytics, the data science manager, and the analytics manager. The director is in charge of managing the data science and analytics managers and oversees the activities of the lower-position leaders. Exploratory and descriptive analyses are conducted by the data engineers in the lower departments, and all data scientists are controlled by the data science manager in their respective roles. The company will also have professionals in other fields and with other qualifications who collaborate in achieving the institution's objectives and goals. The skill set includes data architects, statisticians, software engineers, business analysts, and data visualizers.
The major component of the network in the data analytics company is the software applied for data analysis. The software is usually created by the software engineers and comprises components that are able to collect and process data (Cao, Chen, Zhao & Li, 2009). The role of the software engineers in this case is to guide the company with their skills and expertise toward the best technology and implementation procedures. The advice and edu ...
This document discusses building a responsibility model using modal logic. It begins with a literature review of existing policy models and engineering methods related to concepts of accountability, capability and commitment. It identifies that while some concepts like rights and roles are commonly addressed, models do not fully cover all responsibility components. The document then proposes a preliminary responsibility model and defines the main concepts of capability, accountability and commitment. It suggests a formalization of these concepts using deontic logic to help analyze organizational structures and policies for consistency and problems.
Presenting an Excusable Model of Enterprise Architecture for Evaluation of R...Editor IJCATR
The document presents a method for creating an executable model of enterprise architecture diagrams to evaluate reliability. It transforms UML collaboration diagrams into colored Petri nets using an algorithm. This allows simulation of the diagrams to identify potential reliability issues early in the planning process. It aims to avoid high costs of implementation by improving architectural artifacts. The key steps are:
1) Using C4ISR framework and UML diagrams to describe enterprise architecture.
2) Transforming collaboration diagrams to colored Petri nets using a algorithm that represents messages as transitions and senders/receivers as places.
3) Annotating the Petri net model with reliability data to enable simulation and evaluation of reliability.
This document proposes a responsibility modeling language (ReMoLa) to align access rights with business process requirements. ReMoLa is a responsibility-centered meta-model that integrates concepts from the business and technical layers, with the concept of employee responsibility bridging the two. It incorporates four types of obligations from the COBIT framework to refine employee responsibilities and better assign access rights. ReMoLa maps responsibilities to roles in the RBAC model to leverage its advantages for access right management while ensuring responsibilities align with business tasks and employee commitment.
The document discusses key concepts in software design engineering including:
- Design should implement requirements from analysis and be understandable.
- Qualities like modularity, appropriate data structures, and independent components improve design.
- Fundamental concepts like abstraction, architecture, patterns, and modularity compartmentalize a design.
- Design principles guide creating a design that is traceable, reusable, and accommodates change.
This document discusses architectural integration styles for large-scale enterprise software systems. It proposes using architectural styles as a way to generalize common integration solutions at the enterprise system level, similar to how styles are used in traditional software architecture. The document defines key terms and presents a structure for describing architectural integration styles. It then describes several example styles, and presents a case study applying the style selection process to an energy company's system integration project. The goal is to provide an approach for selecting integration solutions based on the characteristics of existing systems and desired quality attributes.
Graph-Based Algorithm for a User-Aware SaaS Approach: Computing Optimal Distr...IJERA Editor
As a tool to exploit economies of scale, Software as a Service cloud models promote Multi-Tenancy which is the notion of sharing instances among a large group of tenants. However, Multi-Tenancy only satisfies requirements that are common to all tenants as well as the fact that tenants themselves hesitate about sharing. In a try to solve this problem, the present paper propose a User-Aware approach for Software as a Service models using Rich-Variant Components. The main contribution of this approach is a framework summarized in a graphbased algorithm enabling deduction of an optimal distribution of instances on application's tenants. To illustrate and evaluate the framework, the approach is applied on a Software as a Service Application for private school management
Jobst Landgrebe The HL7 Services Aware Interoperability Framework (SAIF)Barry Smith
The document summarizes the Health Level 7 (HL7) approach to semantic interoperability and its Services Aware Interoperability Framework (SAIF). It finds that HL7's previous approach using the Reference Information Model (RIM) failed due to disconnection from users, lack of proper foundations, and technical issues. It also finds that the SAIF does not meet the needs of an interoperability framework and will not overcome HL7's crisis because it fails to realize necessary principles, has an inconsistent architecture, and cannot be instantiated. The authors recommend replacing SAIF with a new approach based on fundamental reassessment and using existing standards.
Multiagent-based methodologies have become an important subject of research in advanced software engineering. Several methodologies have been proposed, as theoretical approaches, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. To answer this question, this paper applies a framework with several criteria to the comparative analysis of existing multiagent system methodologies. The results of the comparison of two of them conclude that those methodologies have not yet reached a maturity level sufficient for use by the software industry. The framework has also proved its utility for the evaluation of any kind of multiagent-based software engineering methodology.
A Framework for Engineering Enterprise Agility
This document presents a framework for engineering enterprise agility that uses ontologies, viewpoints, and agent-based middleware. The framework aims to enable enterprises to meaningfully share and communicate information between distributed systems. It proposes an architecture with components that utilize HTTP and integrate ontologies, viewpoints, and agent-middleware using open standards to promote semantic interoperability. The framework is intended to support the growth and agility of heterogeneous distributed enterprise systems.
This document discusses the challenges of requirements engineering for cross-organizational enterprise resource planning (ERP) implementations. It proposes analyzing coordination requirements from a coordination theory perspective to better align ERP systems with business needs. The document inventories coordination mechanisms supported by ERP packages and analyzes how mismatches between business flexibility needs and ERP rigidity can be addressed. Explicitly specifying coordination mechanisms may help requirements engineers find a match between an ERP system's coordination support and the mechanisms selected by cooperating organizations.
Intelligent Buildings: Foundation for Intelligent Physical Agents
FIPA is an IEEE Computer Society standards organization that promotes agent-based technology and the interoperability of its standards with other technologies. In the design phase of intelligent buildings, it is essential to manage many services and facilities, and multi-agent systems are a good tool for managing them. In this paper, we first give a general description of the features and elements of multi-agent systems described by the Foundation for Intelligent Physical Agents (FIPA). Secondly, we focus on the architectures of these multi-agent systems. Finally, we propose a multi-agent system design to see the application in the design of a detached house where the lighting, air conditioning and security systems will be integrated.
This document summarizes four architectural patterns for context-aware systems: WCAM, Event-Control-Action, Action, and an architectural pattern for context-based navigation. It discusses examples, problems addressed, solutions, structures, and benefits of each pattern. The patterns are examined to determine which can best overcome complexity and be more extensible for context-aware systems.
Alternating Current Machines: Synchronous Machines
This document provides an overview of synchronous machines including:
- Synchronous machines operate at synchronous speed and lock into the rotating magnetic field produced by the stator.
- The rotor is a magnet that is dragged along for the ride as the rotating magnetic field rotates.
- Torque is produced as the magnetic fields of the rotor and stator interact. The torque allows the motor to operate at a constant synchronous speed under varying load.
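The bullets above rest on the standard synchronous-speed relation n_s = 120·f/P (speed in rpm, supply frequency f in Hz, P the number of poles), which is easy to check numerically:

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """Synchronous speed in rpm: n_s = 120 * f / P."""
    return 120 * frequency_hz / poles

# A 4-pole machine locks to 1800 rpm on a 60 Hz supply
# and to 1500 rpm on a 50 Hz supply.
print(synchronous_speed_rpm(60, 4))
print(synchronous_speed_rpm(50, 4))
```

Because the rotor locks to the stator's rotating field, this speed is held constant under varying load, which is the constant-speed behaviour the summary describes.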
The document discusses magneto-optical current transformers (MOCTs). MOCTs use the Faraday effect to measure current non-invasively using light. They have two main parts: an optical sensor and electronic processing circuit. The optical sensor contains a polarizer, optical glass prism, and analyzer. Polarized light passes through the prism, whose rotation is proportional to the current induced magnetic field. This intensity-modulated light is converted to an electric signal. MOCTs offer advantages like isolation, wide bandwidth and no saturation, but also have disadvantages like temperature sensitivity and insufficient accuracy for power systems currently.
This document discusses several types of electric motors: AC series motors, universal motors, stepper motors, and shaded pole motors. It provides details on the construction and operation of universal motors and stepper motors. Universal motors can operate on either AC or DC power because the rotor and stator windings are connected in series. Stepper motors rotate in precise angular increments in response to applied digital pulses, making them well-suited for applications requiring precise positional control like printers and CNC machines. The document compares advantages and disadvantages of stepper motors.
The document discusses the design of a high voltage step-up DC-DC converter using coupled inductor and switched capacitor techniques. It aims to achieve a high step-up voltage gain by charging capacitors in parallel and discharging them in series through a coupled inductor. This allows an input voltage of 24 volts to output 400 volts at an appropriate duty ratio. The converter design is analyzed and its simulation results in a 400 volt output voltage without ripples are presented. A plan of action with timeline is also provided to fabricate and test the converter.
The document describes a Magneto Optic Current Transducer (MOCT) which uses the Faraday effect to measure electric current. It consists of an optical sensor near the current carrying conductor, fiber optic cables, and a signal processing unit. The sensor contains polarized light which rotates proportionally to the current. This rotation is measured and transmitted via fiber optics to the processing unit. Key advantages over conventional transformers include increased safety, simpler insulation, and immunity to electromagnetic interference. While MOCTs provide benefits, their accuracy is currently insufficient for power system applications.
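Both MOCT summaries rest on the same underlying relation: the Faraday rotation angle is proportional to the line integral of the magnetic field along the optical path and hence, by Ampère's law, to the enclosed current:

```latex
\theta = V \oint \vec{B} \cdot d\vec{l} = V \mu_0 N I
```

where $V$ is the Verdet constant of the optical material, $N$ the number of times the light path encircles the conductor, and $I$ the current. The measured rotation is a direct, linear image of the current, which is why the sensor has no magnetic core to saturate; the temperature dependence of $V$ is one source of the accuracy limits both summaries mention.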
Digital testing of high voltage circuit breakers
Digital testing of high voltage circuit breakers provides several advantages over physical testing. It allows evaluation of future standards, reduction of full-scale laboratory testing, and identification of network issues. The digital process develops a software model of the circuit breaker using data from standard physical tests. This model then provides insights without additional physical testing. While digital testing is more costly and time-consuming initially, it offers opportunities to optimize circuit breaker design and monitor performance over time.
A transformer is a static device that changes alternating current (AC) at one voltage level to AC at another voltage level through electromagnetic induction. It consists of two coils, the primary and secondary windings, wrapped around a laminated iron core. When an alternating current is applied to the primary winding, it produces an alternating magnetic field that induces a voltage in the secondary winding. This allows the transformer to step up or step down voltages without changing the frequency. The transformer transfers power between its two coils through electromagnetic coupling between the coils wound around the iron core.
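The step-up/step-down behaviour described above follows the ideal turns-ratio relation V_s / V_p = N_s / N_p, which a one-line function makes concrete:

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: V_s = V_p * (N_s / N_p); frequency is unchanged."""
    return v_primary * n_secondary / n_primary

# Stepping 240 V down to 12 V needs a 20:1 turns ratio,
# e.g. 400 primary turns and 20 secondary turns.
print(secondary_voltage(240, 400, 20))
```

A real transformer deviates slightly from this ideal because of winding resistance, leakage flux and core losses, but the turns ratio remains the design starting point.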
Running head: NETWORK DIAGRAM AND WORKFLOW
Network and Workflow for Data Analytics Company
Name: Sunil Patel
MIT 681: Capstone Assignment 4
Every company requires a workflow that is consistent with its network in order to achieve productivity and value addition. There are different network setups for different companies based on the services and products offered. The main idea behind formulating a network is to map the workflow of the company to the different resources available. The network design must guarantee effective connection among the staff members of a company. The relationship between a network diagram and a company's workflow rests on the need to maintain communication among the existing staff members. The achievement of the company's objectives depends on the nature and performance of the network as well as on the workflow of the respective company (Kumar & Kirthika, 2017). This paper outlines the design and architecture of a good-practice network and workflow for a big data company.
The implementation of the network for a data analytics company requires planning based on the requirements and user roles. Connecting the staff is the main idea in a network architecture. The requirements for a data analytics company shall need data analysts for decision making and identification of opportunities in the market sector (Kumar & Kirthika, 2017). A good network workflow shall guarantee improvement in service offering for the company in context. A highly skilled team shall be required for the achievement of the objectives with the use of the network workflow.
The composition of a data analytics company is made of the leader director of analytics, data science manager and analytics manager. The director is in charge of management of the analytics and data science manager. He/she shall oversee the activities of the lower position leaders. The exploratory and description of the analyses shall be conducted by the data engineers in the lower departments. All data scientists shall be controlled by the data science manager in their respective roles. The company shall also have professionals in different fields and qualifications in other areas who shall collaborate in the achievement of objectives and goals of the institution. The skill set shall include; data architects, statisticians, software engineers, business analysts and data visualizers.
The major component of the network in the data analytics company is the software applied for data analysis. The software is usually created by the software engineers and comprises components that are able to collect and process data (Cao, Chen, Zhao & Li, 2009). The role of the software engineers in this case is to guide the company with their skills and expertise towards the best technology and implementation procedures. The advice and edu ...
This document discusses an approach to assembling software products using a product line approach. It presents a separation continuum that separates concerns both vertically (from abstract to implementation layers) and horizontally (between human-facing and machine-facing aspects). An application assembly approach is then discussed where a product line architecture is tied to this separation continuum, allowing high productivity by reusing pre-built software assets to realize new product lines. The approach aims to facilitate experimentation in building large-scale application assembly capabilities.
This document describes a software design approach for developing secure data management applications using model-driven development. It involves modeling an application's conceptual model, security model, and graphical user interface model. A model transformation lifts security policies from the security model to the GUI model. The models are validated for correctness before code generation. The approach was implemented in a tool called Sculpture, which was used to develop three secure web applications: a volunteer management app, electronic health record app, and meal service management app. The approach aims to improve on previous work by providing more expressive modeling languages, validation of models, and automated generation of secure multi-tier applications.
Abstraction and Automation: A Software Design Approach for Developing Secure ...
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Enhancing the ArchiMate® Standard with a Responsibility Modeling Language for...
In this paper, we describe an innovative approach for aligning the business layer and the application layer of ArchiMate to ensure that applications manage access rights consistently with enterprise goals and risk tolerances. The alignment is realized by using the responsibility of the employees, which we model using ReMoLa. The main focus of the alignment targets the definition and the assignment of the access rights needed by the employees according to business specification. The approach is illustrated and validated with a case study in a municipal hospital in Luxembourg.
The document discusses cloud computing models that could support the growing business needs of Falcon Security. It suggests that Falcon adopt cloud services like Amazon RDS for databases, Amazon S3 for storage, and Google Mail, Office 365, and Microsoft CRM online for software. The paper compares cloud computing to service-oriented architecture and discusses private, public, and hybrid cloud deployment models as well as platform as a service (PaaS), software as a service (SaaS), and service-oriented architecture (SOA).
The document provides an overview of a 7-step process for building an information system. The 7 steps are: 1) Identify and list stakeholders, 2) Identify and list actors, 3) Identify and list use cases, 4) Identify and list scenarios, 5) Identify and list steps, 6) Identify and list classes/objects, and 7) Manage work products. It describes each step in the process, including defining stakeholders, actors, use cases, scenarios, and mapping analysis to design. The process emphasizes discovery, iteration, and developing a shared understanding between stakeholders.
The management of technical assets is a challenging task, and optimizing their usage is critical. To ensure the effective utilization of assets, it is very important to make effective decisions regarding assets throughout their lifecycle, and an effective decision depends on how the asset's information is managed throughout that lifecycle. The objective of this paper is to develop an application, "Asset Management", which effectively manages asset information and helps decision makers make effective decisions regarding assets. For this purpose, we propose first to identify the asset information requirements and second the technologies needed to gather such information. The findings of this paper would help in selecting an asset management web application to provide information about assets in real-world applications, so that decision makers can make effective decisions and use assets effectively.
A Resiliency Framework For An Enterprise CloudJeff Nelson
The document summarizes a research paper that proposes a resiliency framework called the Cloud Computing Adoption Framework (CCAF) for enterprise clouds. CCAF includes four major emerging services - software resilience, service components, guidelines, and real case studies - that are designed to improve an organization's security when adopting cloud computing. The framework was validated through a large survey that provided user requirements to guide the system's design and development. CCAF aims to illustrate how software resilience and security can be improved for enterprises moving to the cloud.
A UML Profile for Security and Code Generation
Recently, many research studies have suggested the integration of security engineering at an early stage of modeling and system development using Model-Driven Architecture (MDA). This concept consists of deploying the UML (Unified Modeling Language) standard as a principal metamodel for the abstractions of different systems. To our knowledge, most of this work has focused on integrating security requirements after the implementation phase without taking them into account when designing systems. In this work, we focused our efforts on non-functional aspects such as the business logic layer, data flow monitoring, and high-quality service delivery. Practically, we have proposed a new UML profile for security integration and code generation for the Java platform. The security properties are described by a UML profile and the OCL language to verify the requirements of confidentiality, authorization, availability, data integrity, and data encryption. Finally, source code such as the application security configuration, the method signatures and their bodies, the persistent entities and the security controllers is generated from the sequence diagram of the system's internal behavior after its extension with this profile and the application of a set of transformations.
The document provides a report on proposing a decentralized information accountability framework to track how user data is used in the cloud. It analyzes existing cloud service models and outlines the objectives, scope, system analysis, design and feasibility of the proposed framework. The system analysis section describes challenges with current single-server systems and outlines modules for data integrity, security, and distributed storage across multiple servers. A feasibility study examines the technical, social, and economic viability of the project. The system design section provides diagrams to model the data flow, entity relationships, system workflows and use cases of the proposed accountability framework.
Software requirement analysis enhancements by prioritizing requirement attributes using rank-based agents
Ashok Kumar (Professor, Department of Computer Science and Applications, Kurukshetra University, Kurukshetra, India) and Vinay Goyal (Assistant Professor, Department of MCA, Panipat Institute of Engineering & Technology, Panipat, India)
Abstract- This paper proposes a new technique in the domain of agent-oriented software engineering. Agents work in autonomous environments and can respond to agent triggers. Agents can be very useful in the requirement analysis phase of the software development process, where they can react to requirement triggers and produce aligned notations to identify the best possible design solution from existing designs. Agents help in the design generation process, which includes the use of artificial intelligence. The results produced clearly show improvements over conventional reusability principles and ideas.
1. INTRODUCTION
Agent-oriented software engineering is a new and rapidly growing technique. Software development industries have invested huge efforts in this domain, and the results published by many of them are very exciting [1]. The autonomous and reactive nature of agents makes it possible for designers to think in terms of real-life problem-solving scenarios, where the sociological [2] characteristics of agents automatically activate timely checks for any problem in the domain and solve it using agents.
Agents are very helpful in the software development life cycle. Experiments carried out in the past have shown [2][9][10] improvements in the SDLC, and the conclusion is that agents can be very helpful in cost and effort minimization if tuned properly. Fine-tuning of agents and an SDLC process-state plug-in for two-way communication results in an agent-based software development process where intelligent agents take decisions for better time and resource utilization. Agents are capable of storing historic data, which helps in decision-making using a heuristic-based approach.
This paper discusses the details of one such experiment, conducted to improve the requirement analysis process with the help of proactive agents. Agents automatically sense the requirement environment and propose their own set of important requirement checklists. This is a sort of intelligent assistance with domain heuristics, which leads to covering all possible requirement entities of the problem domain.
2. RELATED WORK
Michael Wooldridge, Nicholas R. Jennings and David Kinny describe the analysis process using an agent-oriented approach [1]. They have considered the GAIA notations. The analysis stages of Gaia are:
1) Identify the agent's roles in the system, which typically correspond to identify ro ...
IRJET: Software Architecture and Software Design
This document discusses software architecture and design. It defines software architecture as the strategic design of a system that addresses global requirements, while software design is the tactical design that addresses local requirements of what a solution does. The document outlines key characteristics of software architecture like serverless and event-driven architectures. It also discusses principles of software design like the single responsibility principle. Finally, it explains that while architecture focuses on structure and high-level design, design delves into implementation details, and there is overlap between the two.
Software Engineering Questions and Answers
1. Risk management is the process of identifying, addressing, and eliminating potential problems that could threaten the success of a project before they cause damage. This includes issues that could impact cost, schedule, technical success, product quality, or team morale.
2. HIPO (Hierarchical Input Process Output) diagrams were developed at IBM as a design representation and documentation aid. They contain a visual table of contents, overview diagrams, and detailed diagrams.
3. Software maintenance is any work done to modify software after it is operational, such as fixing errors, adding capabilities, removing obsolete code, or optimizing performance. It aims to preserve the software's value over time as requirements, users, and technology change.
Dynamically Adapting Software Components for the Grid
The surfacing of dynamic execution environments such as "grids" forces scientific applications to embrace dynamicity. Dynamic adaptation of grid components in grid computing is a critical issue for the design of a framework for dynamic adaptation towards self-adaptable software components for the grid. This paper carries out the systematic design of a dynamic adaptation framework with an effective implementation of the structure of an adaptable component, i.e., incorporating the layered architecture environment with the concept of dynamicity.
The document proposes a dynamic trust computation model called "SecuredTrust" for evaluating trust in multi-agent systems. It aims to address issues with existing trust/reputation models, including their inability to properly evaluate trust of agents with unpredictable malicious behavior or provide quick response to behavioral changes. The model also seeks to distribute workload evenly among service-providing agents. The paper analyzes factors for evaluating agent trust, proposes a quantitative trust measurement model, and a novel load-balancing algorithm based on defined factors. Simulation results show the model can effectively handle strategic changes in malicious agents and efficiently distribute workloads compared to other models.
A study secure multi authentication based data classification model in cloud ...
Abstract: Cloud computing is the most popular term among enterprises and news. The concept has come true because of fast internet bandwidth and advanced cooperation technology. Resources on the cloud can be accessed through the internet without self-built infrastructure. Cloud computing effectively manages security in cloud applications. Data classification is a machine learning technique used to predict the class of unclassified data. Data mining uses different tools to discover unknown, valid patterns and relationships in the dataset. These tools are mathematical algorithms, statistical models and Machine Learning (ML) algorithms. In this paper, the author uses an improved Bayesian technique to classify the data and encrypts the sensitive data using hybrid steganography. The encrypted and non-encrypted sensitive data is sent to the cloud environment, and the parameters are evaluated with different encryption algorithms.
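As a concrete stand-in for the classification step (the paper's "improved Bayesian technique" is not detailed in the abstract), a minimal categorical naive Bayes with add-one smoothing can flag records as sensitive before encryption. The data, labels and function names here are invented for illustration:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Train a tiny categorical naive Bayes classifier.

    Returns a predict(row) function; uses add-one smoothing so unseen
    feature values never zero out a class probability.
    """
    classes = Counter(labels)
    counts = defaultdict(Counter)  # (class, feature_index) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(y, i)][v] += 1

    def predict(row):
        def score(c):
            p = classes[c] / len(labels)  # class prior
            for i, v in enumerate(row):
                p *= (counts[(c, i)][v] + 1) / (classes[c] + 2)
            return p
        return max(classes, key=score)

    return predict

# Records described by (content type, originating department).
X = [("salary", "hr"), ("salary", "hr"), ("menu", "web"), ("news", "web")]
y = ["sensitive", "sensitive", "public", "public"]
predict = train_nb(X, y)
```

Records predicted "sensitive" would then be routed through the encryption/steganography path before upload; "public" records could be stored in the clear.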
Similar to Metamodel for reputation based agents system – case study for electrical distribution scada design
Multi-Agent System (MAS) monitoring solutions are designed for a plethora of usage topics. Existing approaches mostly use cloned back-end architectures, while the front-end monitoring interface tends to constitute the real specificity of the solution. These interfaces are recurrently structured around three dimensions: access to informed knowledge, agents' behavioural rules, and restitution of the real-time state of a specific system sector. In this paper, we propose prototyping a sector-agnostic MAS platform (Smart-X) which gathers, in an integrated and independent platform, all the functionalities required to monitor and govern a wide range of sector-specific environments. For illustration and validation purposes, the use of Smart-X is introduced and explained with a smart-mobility case study.
This document provides an agenda and overview for a joint workshop on security modeling hosted by the ArchiMate Forum and Security Forum. The workshop aims to identify opportunities to improve the conceptual and visual modeling of enterprise information security using TOGAF and ArchiMate. The agenda includes introductions, a research spotlight on strengthening role-based access control with responsibility modeling, an open discussion on complementing TOGAF and ArchiMate with enhanced security modeling, and identifying next steps. The workshop purpose is to enable better security architecture decisions and drive usage of TOGAF and ArchiMate for security architecture.
Aligning business operations with the appropriate IT infrastructure is a challenging and critical activity. Without efficient business/IT alignment, companies face the risk of not being able to deliver their business services satisfactorily and of having their image seriously altered and jeopardized. Among the many challenges of business/IT alignment is access rights management, which should be conducted considering rising governance needs, such as taking into account the business actors' responsibility. Unfortunately, in this domain, we have observed that no solution, model or method fully considers and integrates these new needs yet. Therefore, the paper proposes firstly to define an expressive responsibility metamodel, named ReMMo, which allows representing the existing responsibilities at the business layer and, thereby, allows engineering the access rights required to perform these responsibilities at the application layer. Secondly, the responsibility metamodel has been integrated with ArchiMate® to enhance its usability and to benefit from the enterprise architecture formalism. Finally, a method has been proposed to define the access rights more accurately, considering the alignment of ReMMo and RBAC. The research was realized following a design science and action design based research method, and the results have been evaluated through an extended case study at the Hospital Center in Luxembourg.
This document proposes an innovative systemic approach to risk management across interconnected sectors. It suggests using enterprise architecture models to manage cross-sector risks in Luxembourg's complex ICT ecosystem. The approach would provide regulators an overview of all players and systems, as well as models of different sectors to analyze collected data and risks at a national level, fostering accurate and reactive risk mitigation across economic domains.
This document proposes extending the HL7 standard with a responsibility perspective to better manage access rights to patient health records. It presents the ReMMo responsibility metamodel, which defines actors' responsibilities and associated access rights. The paper aims to align ReMMo with the HL7-based eSanté healthcare platform model in Luxembourg to semantically enhance access controls based on users' real responsibilities rather than just roles. It will first map concepts between the two models, then evaluate the alignment through a prototype applying inference rules.
This document presents a study that aims to develop and validate a responsibility model to improve IT governance. It analyzes concepts of responsibility from literature and frameworks like COBIT. The researchers developed a responsibility model with key concepts like obligation, accountability, right, and commitment. They then compare this model to COBIT's representation of responsibility to identify areas for potential enhancement, like adding concepts that COBIT lacks. The document illustrates how the responsibility model could be used to refine COBIT's process for identifying system owners and their responsibilities.
This document proposes an innovative approach called SIM (Secure Identity Management) that aims to make access management policies closer aligned with business objectives. It does this in two ways:
1) By focusing the policy engineering process on business goals and responsibilities defined in processes, using concepts from the ISO/IEC 15504 standard. This links capabilities and accountabilities to process outcomes and work products.
2) By defining a multi-agent system architecture to automate the deployment of policies across heterogeneous IT components and devices. The agents provide autonomy and ability to adapt rapidly according to context.
The approach was prototyped using open source components and aims to improve how access rights are defined according to business needs and deployed across an organization.
This document proposes a methodological approach for specifying services and analyzing service compliance considering the responsibility dimension of stakeholders. The approach includes a product model and process model. The product model has three layers: an informational layer describing service context and concepts, an organizational layer describing business rules and roles, and a responsibility dimension layer linking the two. The process model outlines steps for service architects to identify context, define concepts and rules, specify services, and analyze compliance. The approach is illustrated with an example of managing access rights for sensitive healthcare data exchange between organizations.
This document discusses integrating responsibility aspects into service engineering for e-government. It proposes a multi-layered approach including an ontological layer defining legal concepts, an organizational layer describing roles and stakeholders, an informational layer representing data structures and integrity constraints, and a technical layer representing IT components. A responsibility meta-model is also introduced to align responsibilities across these layers and facilitate interoperability between services that share data. The approach aims to ensure service compliance and manage risks associated with e-government services.
1) The document proposes a dynamic approach for assigning functions and responsibilities to agents in a multi-agent system for critical infrastructure management.
2) The approach uses an agent's reputation, which is based on past performance, to determine which agents receive which responsibilities as crisis situations change over time.
3) Assigning responsibilities dynamically based on reputation allows the system to continue operating effectively if an agent becomes isolated or has reduced capabilities during a crisis.
The document describes the NOEMI assessment methodology, which was developed as part of a research project to help very small enterprises (VSEs) improve their IT practices. The methodology aims to assess VSEs' IT capabilities in order to facilitate collaborative IT management across organizations. It was designed to be aligned with common IT standards like ISO/IEC 15504 and ITIL, but adapted specifically for VSEs. The methodology has been tested through several case studies with VSEs in Luxembourg, with promising results.
This document provides a preliminary literature review of policy engineering methods related to the concept of responsibility. It summarizes key access control models and discusses how they address concepts like capability, accountability, and commitment. The document also reviews engineering methods and how they incorporate responsibility considerations. The overall goal is to orient further research towards a new policy model and engineering method that more fully addresses stakeholder responsibility.
This document summarizes an experimental prototype of the OpenSST protocol for secured electronic transactions. OpenSST was developed to achieve high security, simplicity in software engineering, and compatibility with existing standards. The prototype uses OpenSST for the authorization portion of electronic payments in an e-business clearing solution. It describes the OpenSST message format and types, and discusses how OpenSST is implemented in the prototype's three-element architecture of an OpenSST proxy, reverse proxy, and server.
This document proposes an automatic reaction strategy for critical infrastructure SCADA systems. It defines a three-layer metamodel for modeling SCADA components and two types of policies (cognitive and permissive) that govern component behavior. It then presents a two-phase method for identifying these policies from the SCADA architecture and formalizing them to support an automatic reaction strategy. This strategy is modeled as an integral part of the SCADA architecture using the defined metamodel and policy identification method. It includes organizational and application layers with main actors, strategies, and components that realize the reaction policies based on expected automation levels.
This document discusses the NOEMI model, a collaborative management model for ICT processes in SMEs. The model was developed by the Centre Henri Tudor and tested with a cluster of 8 partner SMEs. Key aspects of the model include defining ICT activities across 5 domains, assessing each SME's capabilities, and having an operational team manage activities for the cluster under a coordination committee. The experiment showed improved cost control, management, and partner satisfaction compared to alternatives like outsourcing or hiring individual IT staff. The research is now ready for market transfer as the successful model is adopted long-term by participating SMEs.
The document proposes an agent-based architecture for multi-level security incident reaction in distributed telecommunication networks. The architecture has three levels: a low level interface with the infrastructure, an intermediate level using multi-agent systems to correlate alerts and deploy reactions across domains, and a high level for global supervision and policy management. The architecture was designed based on requirements like scalability, availability, autonomy, and robust reaction and alert management across distributed systems. It was successfully tested for implementing data access control policies.
Metamodel for Reputation based Agents System –
Case Study for Electrical Distribution SCADA Design
Guy Guemkam
Laboratoire d’informatique de Paris 6,
75005 Paris-France
guy.guemkam@lip6.fr
Jonathan Blangenois
Faculty of Computer Science
University of Namur
blangenoisj@gmail.com
Christophe Feltus‡, Djamel Khadraoui
Public Research Centre Henri Tudor
EE-Team‡, Luxembourg
{firstname.name}@tudor.lu
ABSTRACT
SCADA systems face increasingly complex critical situations and therefore need to evolve continuously towards an integrated decision-making capability driven by advanced reaction strategies. The current research stream in that field accordingly aims to foster the smartness of field equipment and actuators, which predominantly exist under the concept of agents. Those agents are governed by policies that, while dictating the agent behaviour depending on the agent roles and the context of evolution, also confer on the agents the latitude to react based on their own perception of the evolving environment. This ability is referred to as agent smartness and is strongly determined by the trust the agent perceives in its environment. Current work related to agents tends to consider that agents evolve and are organized in systems. Models exist for representing how these agents are organized at a high level, how they are spread across networks, how they communicate with each other, and so forth. However, as far as we know, no model integrates all of the above. We believe that such an integrated model could bring many advantages: knowing the impact of an action from one layer on another, deciding which action on a component has the largest impact on a set of other components, identifying the most critical component of an infrastructure, and aligning the agent system with corporate objectives and tailoring it accordingly. Therefore, we have framed an innovative version of ArchiMate® for the multi-agent purpose, termed ARMAN, with the objective of enriching agent society collaborations and, more particularly, the description of the agent behaviour endorsed in the policy component, using a reputation-based trust model to improve reliability. Our work is illustrated on a critical infrastructure in the field of electrical power distribution, a highly sensitive research topic.
Categories and Subject Descriptors
H.2.7: Security, Integrity, and Protection
General Terms
Management, Performance, Design, Reliability, Experimentation,
Security, Languages, Theory, Verification.
Keywords
ArchiMate®, metamodel, reputation, SCADA, multi-agent system, trust, electricity distribution, critical infrastructure.
‡Enterprise Engineering Team is a collaboration between CRP
Henri Tudor, Radboud University Nijmegen and the University of
Applied Science Arnhem-Nijmegen (http://www.ee-team.eu)
1. INTRODUCTION
Enterprise architecture models are frameworks for representing the information system (IS) of a company in a set of schemas called views. Those models underwent major improvements during the first decade of the 21st century, and some significant frameworks have been developed since, such as ArchiMate® [1] or the Zachman framework [2]. These models are traditionally structured in layers that correspond to different levels of the organization's IS. The business layer, for instance, models the concepts that exist at the business level, such as the processes, the actors and their business roles, which are supported or represented by the IT application layer. At the application layer, the modeled IS concepts are the applications, the databases and, for instance, the application data.
The advantage of these enterprise architecture models is that they improve the connections between the concepts of each layer and thereby allow better integration and enhanced support for decision-making processes. Up to now, agents represented at the business layer [3][4] have been considered human actors playing business roles. However, rising security requirements for the management of heterogeneous and distributed architectures call for rethinking the distribution of security procedures between human and autonomous software entities. Although it has been handled by human employees for years, the management of complex systems nowadays needs to be shared with intelligent software items, often perceived as better adapted to act in critical situations. This statement is reinforced by the agent's characteristic ability to act autonomously in open, distributed and heterogeneous environments, in connection or not with an upper authority. Acknowledging this situation, we must admit that software agents are no longer to be considered only as basic software components deployed to support business activities: they are business actors as well, playing some kind of "business role" and performing "business tasks" accordingly. An innovative enterprise architecture framework to represent the behaviours of such agents therefore appears fully justified and required by practitioners, especially those engaged in the management of critical infrastructures.
In this paper, we propose to explore ArchiMate® and to redraw its structure to fit the specificities and domain constraints of software agent actors. The main focus concerns the design and consideration of policies, which are the central concepts related to the activation of agents' behaviours. The paper is structured as follows: after reviewing related work on enterprise architecture models in Section II, we review the reputation-based trust exploited in the modelling of agent smartness in Section III. We model the concept of policy, the engine of the agent modelling framework, in Section IV, and in Section V we explain, layer by layer, the entire Reputation based Agents System Metamodel and illustrate its different components. In Section VI, we present a case study that illustrates the exploitation of the enhanced ArchiMate®, and we perform real-time simulations in Section VII. Finally, Section VIII concludes the paper.
Having noticed that agent systems are organized in a way close to enterprise systems, our proposal analyses how an enterprise architecture model may be slightly reworked and adapted for MAS. Therefore, we exploit ArchiMate®, which has the advantage of being supported by the Open Group. It has a large community and proposes a uniform structure to model enterprise architecture. Another advantage of ArchiMate® is that it builds on referenced existing modelling languages like UML. With this aspect, we think it is relevant to provide a lean and simple structure, compliant with the new version of UML, to model any MAS. As a conclusion of our state of the art, we acknowledge the many other models and frameworks that provide solutions for MAS modelling, whether compliant or not with other modelling languages. As far as we know, however, no existing approach provides a multiple-layer view or an integrated view of these layers.
2. METAMODEL FOR REPUTATION
BASED TRUST
The proposed reputation-based trust management scheme is used to predict the future behaviour of a component in order to establish trust among agents and hence to improve security in the system. The goal of using an architecture with a trust policy within a metamodel core is to improve agent assignment according to the policy. The Trust Policy component depicted in Figure 1 signifies the lowest value necessary for an agent to be assigned to a role. Moreover, according to its role fulfilment, a reputation score is used to assess this level of trust. Indeed, we consider reputation as a measure derived from direct and/or indirect knowledge of earlier interactions, if any, used to assess the level of trust an agent puts into another, as in [18]. This trust policy is linked with the behavioural policy: the Behaviour and Trust Policies are combined into the Policy. The rest of the metamodel components are explained in the next section.
Figure 1. Metamodel Core with Trust Policy.
2.1 Policy Concept and Metamodel Core
Our goal in modelling the multi-agent system as an architecture metamodel is to provide system architects and developers with tools to create their own multi-agent system, including the notion of Agent Policy. As explained in Section II, we have selected the ArchiMate® language to provide a multiple-layered view of a multi-agent system using policies.
To create this metamodel, we have realized a specialization of
the original ArchiMate® metamodel (see http://www.opengroup.org/subjectareas/enterprise/archimate) for agent architecture. Firstly, we have redefined the Core of the metamodel (Figure 1) to introduce the concept of Policy that hosts the behaviour and the trust
decision mechanism. The Core represents the handling of Passive Structures by Active Structures during the realization of Behaviours. For the Active Structures and the Behaviour, the Core differentiates the external concepts, which represent how the architecture is seen from the outside (as a Service provider reachable through an Interface), from the internal concepts, which are composed of Structure Elements (Roles, Components) and linked to a Policy Execution concept. Passive Structures contain Objects (Data Object, Organizational Object, Artefacts, …) that represent the information of the architecture.
Secondly, the concept of Policy has been defined in accordance with our specialization of the ArchiMate® metamodel. The proposed representation is composed of three concepts defining the Policy.
2.2 Policy modelling
The organizational and the application policies may, afterwards,
be modelled as follows:
2.2.1 Organizational Policy.
In the Organizational Layer, an Organizational Policy can be represented as a UML Use Case [14], where Role concepts represent the Actors of the Use Case and Collaboration concepts show the connections between them. The Product, Value and Organizational Service concepts provide the Goal of the Use Case. Pre- and post-conditions model the context of the Use Case and are symbolized in the metamodel by the Event concept (precondition) and the Organizational Object (pre/post-condition).
2.2.2 Application Policy.
An Application Policy in the Application Layer is defined in Section III as the realisation of a behaviour by the Application domain in a configuration of the Data domain. UML supports modelling the behaviour performed by the Application domain as a Sequence Diagram. The configuration of the Data domain can be expressed as preconditions of the Sequence Diagram, symbolized by the execution of a test-method on the lifeline of the diagram (Figure 2).
Figure 2. Active structure connections
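The precondition-guarded behaviour described above can be sketched in a few lines. This is a minimal illustration only; the class and field names are hypothetical and not part of the metamodel.

```python
# Illustrative sketch: an application policy whose behaviour runs only when a
# precondition on the Data domain holds, mirroring the test-method placed on
# the sequence-diagram lifeline. All names here are hypothetical.

class ApplicationPolicy:
    def __init__(self, precondition, behaviour):
        self.precondition = precondition  # test on the Data domain state
        self.behaviour = behaviour        # the application behaviour to realize

    def execute(self, data_state):
        # The test-method guards the behaviour, like a sequence-diagram
        # precondition: if it fails, the behaviour is not performed.
        if not self.precondition(data_state):
            return None
        return self.behaviour(data_state)

# Usage: forward an alert only if a confirmed alert object is present.
policy = ApplicationPolicy(
    precondition=lambda d: d.get("alert_confirmed", False),
    behaviour=lambda d: f"forward alert {d['alert_id']}",
)
print(policy.execute({"alert_confirmed": True, "alert_id": 7}))  # forward alert 7
```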
2.2.3 Reputation based Trust Policy
Each agent uses reputation to derive the trustworthiness it puts in another, based on information provided by probes. Implementations of TRM mechanisms are translated into agent behaviours through the concept of Policies called Trust Policies. As will be illustrated later through the broadcasting mechanism (Figure 8), the trust value of each component at an upper level, for instance MSP agents, is derived from sub-level agents. That means that, for two given agents A and B, the trust value of agent B computed by agent A is calculated using equation (1), adapted from [18]:
TAB = ORAB = γDRAB + (1−γ)(μ1IRi1B + μ2IRi2B + μ3IRi3B)    (1)
DRAB = E(Beta(α, β)),  E = α/(α+β)    (2)
with μ1 + μ2 + μ3 = 1 and 0 < γ < 1
DRAB represents the direct reputation of agent B as viewed by agent A and is obtained through direct interactions using the mean of the beta distribution calculated from equation (2), extracted from [18]. IRi1B represents the reputation coming from another agent i1 (likewise for i2 and i3), and μ1, μ2 and μ3 represent the trustworthiness of the associations between the agents. Applying equation (1) to the broadcasting mechanism of Figure 8 gives:
TMBP_ACE = γDRMBP_ACE + (1−γ)(μ1IRi1_ACE + μ2IRi2_ACE + μ3IRi3_ACE)    (3)
with μ1, μ2 and μ3 calculated based on strategic broadcasting decisions, e.g. prioritisation of regional broadcasting or technology threat mitigation (see Section VII).
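Equations (1) and (2) can be sketched directly. This is an illustrative sketch under the definitions above; the interaction counts, peer reputations and weights in the usage example are invented.

```python
# Sketch of equations (1)-(2), adapted from [18]: direct reputation is the
# mean of a Beta(alpha, beta) distribution over past interactions; overall
# trust mixes direct and indirect reputation with weight gamma.

def direct_reputation(alpha, beta):
    # Equation (2): DR_AB = E(Beta(alpha, beta)) = alpha / (alpha + beta)
    return alpha / (alpha + beta)

def trust(dr, indirect, mu, gamma):
    # Equation (1): T_AB = gamma*DR_AB + (1-gamma) * sum_i mu_i * IR_iB,
    # with sum(mu) == 1 and 0 < gamma < 1.
    assert abs(sum(mu) - 1.0) < 1e-9 and 0 < gamma < 1
    return gamma * dr + (1 - gamma) * sum(m * ir for m, ir in zip(mu, indirect))

# Invented example: A has seen 8 good and 2 bad interactions with B, and
# hears from three peers whose association weights are mu.
dr = direct_reputation(8, 2)                              # 0.8
t = trust(dr, indirect=[0.9, 0.5, 0.7], mu=[0.5, 0.3, 0.2], gamma=0.6)
print(round(t, 3))  # 0.776
```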
3. CASE STUDY IN ELECTRIC POWER
DISTRIBUTION INFRASTRUCTURE
To illustrate the modelling of MAS with ArchiMate for MAS, we complete, in this paper, the case study presented in [8]. Namely: electricity is a good that is difficult to store. Its production has to fit precisely with its consumption. To maintain and guarantee that balance, electric companies supervise the transport of electricity and manage the electric network. They watch in real time both production (e.g. wind turbines) and consumption (e.g. electric heaters) values to maintain the safety of the system. In case of a production problem, solutions are deployed, such as importing electricity from adjoining countries or requesting users, via TV and newspapers, to adapt their usage of electric machines (e.g. stop washing machines or dryers).
The broadcasting mechanism (Figure 4) aims at sending alerts to the population, using media such as SMS or tweets, whenever a weather alert occurs. This section presents the core components of the broadcasting mechanism. The solution relies on MAS technology on top of the JADE framework [6]. Agents are disseminated over three layers of the infrastructure corresponding to geographical regions (city, region or country), and they retrieve information from probes located in weather stations and on the electric networks, representing different values: pressure, temperature and electric voltage.
The agents that compose the critical architecture are the following. The Alert Correlation Engine (ACE) collects, aggregates and analyses weather information coming from probes deployed over the network and weather stations. Confirmed alerts are sent to the Policy Instantiation Engine (PIE). The PIE receives confirmed alerts from the ACE and sets the severity level and the extent of the geographical response. The PIE instantiates high-level alert messages to be deployed. Finally, the high-level alert messages are transferred to the Message Supervising Point (MSP). The MSP, as explained in detail in [9], is composed of two modules. The Policy Analysis (PA) module is in charge of analysing the policies previously instantiated by the PIE. For that purpose, the Policy Status database stores all communication policies and their current status (in progress, not applicable, by-passed, enforced, removed, …) so that the PA module can check the consistency of each newly received message to be deployed. The second module is the Component Configuration Mapper, which selects the appropriate communication channel.
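The PA consistency check against the Policy Status database might look as follows. This is a hypothetical sketch: the statuses come from the text, but the conflict rule and the table contents are assumptions made for illustration.

```python
# Hypothetical sketch of the Policy Analysis (PA) consistency check: the
# Policy Status database records each communication policy's status, and a
# newly received message is deployed only if no blocking policy already
# covers the same region. Table contents and the rule itself are invented.

policy_status_db = {
    "policy-42": {"status": "enforced", "region": "city-A"},
    "policy-43": {"status": "removed", "region": "city-A"},
}

BLOCKING = {"enforced", "in progress"}

def is_consistent(new_policy_region):
    # Reject deployment if a blocking policy already covers the region.
    return not any(
        p["status"] in BLOCKING and p["region"] == new_policy_region
        for p in policy_status_db.values()
    )

print(is_consistent("city-A"))  # False: policy-42 is still enforced there
print(is_consistent("city-B"))  # True
```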
Figure 3 presents two different kinds of Message Broadcasting Point (MBP). Indeed, another advantage of MAS is that it is easy to implement, from a model, specific agents that perform specific tasks. Concretely, it enables us to use different communication channels (e.g. SMS, e-mail, micro-blogging) to send alerts to citizens, hospitals, etc. In this way, our electric blackout prevention system is easily extensible to future communication facilities. MBPs receive generic alert messages from the MSP. A specific parser then converts the incoming alert message into the appropriate format for the channel.
Figure 3. MBP architecture
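The channel-specific conversion performed by an MBP can be sketched as follows. This is illustrative only; the message formats, length limits and field names are assumptions, not part of the ReD design.

```python
# Illustrative MBP sketch: convert the generic alert message received from
# the MSP into a channel-specific format (SMS, e-mail, micro-blogging).
# Formats and field names are invented for illustration.

def format_alert(alert, channel):
    if channel == "sms":
        # SMS channel: short text, truncated to a single message.
        return f"ALERT {alert['severity']}: {alert['text']}"[:160]
    if channel == "email":
        return f"Subject: Weather alert ({alert['severity']})\n\n{alert['text']}"
    if channel == "microblog":
        return f"#alert {alert['text']}"[:140]
    raise ValueError(f"no MBP registered for channel {channel!r}")

alert = {"severity": "HIGH", "text": "Storm expected, reduce electricity usage"}
print(format_alert(alert, "sms"))
```

Adding a new communication facility then amounts to registering one more branch (or, more idiomatically, one more formatter agent), which reflects the extensibility argument made above.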
To consider the mutual trust between agents, each agent maintains an internal database of trust levels towards its peers. This means, for example, that the MBP has a dedicated level of trust for the ACE and the MSP.
The broadcasting alert architecture presented in this section is based on the ReD project [7]. The ReD (Reaction after Detection) project defines and designs a solution to enhance the detection/reaction process and improve the overall resilience of critical infrastructures. Figure 10 introduces the developed architecture illustrated with our weather broadcast alert system. The flow begins with an alert detected by a probe.
Figure 4. Broadcasting mechanism inside
This alert is sent to the ACE agent (City layer), which does or does not confirm the alert to the PIE. Afterwards, the PIE decides to apply new policies or to forward the alert to an ACE from a higher layer (Region layer). The PIE agent sends the policies to the MSP agent, which decides which MBP is able to transform the high-level alert message into an understandable format for the selected communication channel.
In order to manage access rights, we have incorporated into ReD
a Context Rights Management (CRM) module, shown as the block on
the right in Figure 5. The CRM is in charge of providing access
rights to agents (e.g. the MBP to the probes and the Logs File
database, the MSP to the Policy Rules Status database). The CRM
uses the agent links database and the crisis context database. The
first database stores the links between pairs of agents (the type
of contextual access right); the second stores a set of crisis
contexts. Thanks to these databases, the CRM agent is able to
determine, depending on the context, which agents have the right
to access each other at the operational layer.
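The CRM decision can be sketched as a lookup over the two databases: a link (requester, target, context) grants access only while its crisis context is active. All identifiers and the exact decision rule below are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the Context Rights Management module.
class ContextRightsManager {
    // "Agent links" database: requester -> target -> context granting access.
    private final Map<String, Map<String, String>> links = new HashMap<>();
    // "Crisis context" database: contexts currently in force.
    private final Set<String> activeContexts = new HashSet<>();

    void addLink(String requester, String target, String context) {
        links.computeIfAbsent(requester, k -> new HashMap<>()).put(target, context);
    }

    void activateContext(String context) {
        activeContexts.add(context);
    }

    // Access is granted only when a link exists AND its context is active.
    boolean canAccess(String requester, String target) {
        String ctx = links.getOrDefault(requester, Map.of()).get(target);
        return ctx != null && activeContexts.contains(ctx);
    }
}
```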
3.1 ACE Organizational layer
In the Organizational layer of the ACE Agent (Figure 6) we have
represented the monitoring aspect separately from the transaction
aspect. We call a transaction a communication of information
from one agent to another (e.g. the ACE sends an alert to a PIE),
and we consider monitoring to be the representation of
information coming from an external device. Firstly, the
Organizational Role of the ACE is represented as a Collaboration
of the PIE Role and the Device Role.
Figure 5. Detailed reaction architecture for electricity
distribution adaptation based on weather parameters
Each Role of the Collaboration communicates with the ACE
through a dedicated Organizational Interface: one for monitoring
and another for transactions. The ACE Role provides two
Organizational Services depending on a single Organizational
Policy, which deals with two Events, one for monitoring and one
for transactions. Secondly, the two Organizational Services
provided by the ACE agent are grouped into a correlation service
symbolized by the Product concept. This Product has the objective
Value of reducing a crisis by giving a guarantee of short reaction
time, represented by the Contract concept. Finally, the Contract is
applied to Organizational Objects, namely monitoring information
and transaction information.
3.2 ACE Application layer
In the Application layer of the ACE Agent (Figure 6) we find the
same separation between transactions and monitoring. The
Application Services for transactions and monitoring are, as for
the Organizational Policy, linked to a single Application Policy.
To highlight the collaboration between the ACE and the Monitored
Device, we created a Collaboration concept named Monitoring
Administration, which shows that this collaboration is constituted
of the Components of the ACE and the Components of the Device.
The Device's components use the Application Monitoring Interface
to communicate with the ACE's components, and the ACE's
components are composed of the Application Monitoring
Interface.
We use the same approach for the transaction part, which shows
that the ACE's components are composed of two interfaces
serving the two Application Services. The Application layer also
contains Data Objects, namely the Transaction Messages and
Monitoring Messages used by the different Application
Components of the layer.
Figure 6. ACE agent model
3.3 ACE Technical layer
The Technical layer of the ACE Agent (Figure 6) gives another
representation of the two collaborators of the ACE agent. The
Transaction and Monitoring Infrastructures are separated from
each other. Both of them have an Infrastructure Service connected
to the ACE agent's Node and an Infrastructure Interface through
which the collaborators can interact with it. Each Node is
connected to a Communication Path (represented by a logical
Event Queuing) and uses different Artifacts to communicate. We
have intentionally not instantiated Nodes for readability, but the
reader can easily imagine that an ACE agent can be deployed on a
computer running an operating system. The Network concept is
not defined in our instantiation for the same reason. For example,
the Monitoring Event Queue between the ACE agent and the
Device could be represented by a Network concept such as a USB
cable, and the Transaction Event Queue by an RJ45 cable.
3.4 ACE Organizational Policy
To illustrate the Organizational Policies of the ACE, we chose to
represent the monitoring part of the ACE Role as a UML Use
Case (Figure 7). Monitoring Events are illustrated in the Use
Case as Extension Points and show their impacts on the
behaviours realized in the Perform Monitoring Policy. Roles are
presented as Actors, and Collaborations are highlighted by the
different links between the behaviours.
Figure 7. ACE Monitoring Organizational Policies Use Case
3.5 ACE Application Policy
Sequence Diagrams have been used to represent the behaviours
performed by the Application Domain of the ACE Agent for the
Application Policy Perform Detection (Figure 8).
In the Sequence Diagram, the behaviour of each component is
attached to its lifeline, and in/out Events are presented as
inter-component method calls. Context analysis is performed by
each component during the execution of its behaviour.
Figure 8. Perform Detection – High level Sequence Diagram
4. SIMULATIONS
In this section we simulate a heterogeneous network of ACE and
PIE agents (Figure 9) running the reputation model of [9]. The
framework used for the test environment has been developed in
Java and simulates the MAS network in a graphical environment.
Each created agent is deployed on its own thread and is connected
only to a central supervisor (composed of an Agent Manager and
a Graph Supervisor), which gives it the list of its neighbours
depending on its location in the network, given a maximum edge
size between agents. The protocol requires ACE agents to send a
message containing the data collected from the probe to the
nearest PIE every five seconds. The test environment represents a
city of 50x50 km with a maximum connection distance of 5 km
between agents. Simulations have been run several times for 120
seconds with different loads of malicious agents: respectively
10%, 50% and 90%.
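The supervisor's neighbour computation described above can be sketched as a simple distance test on agent positions; the class names, fields, and Euclidean-distance criterion are our assumptions about the framework, which is not detailed in the paper.

```java
import java.util.ArrayList;
import java.util.List;

// An agent placed on the 50x50 km simulated map.
class SimAgent {
    final String id;
    final double x, y; // position in km
    SimAgent(String id, double x, double y) { this.id = id; this.x = x; this.y = y; }
}

// Hypothetical sketch of the Graph Supervisor's neighbour assignment:
// two agents are neighbours when at most 5 km apart.
class GraphSupervisor {
    static final double MAX_EDGE_KM = 5.0;

    static List<String> neighboursOf(SimAgent a, List<SimAgent> all) {
        List<String> result = new ArrayList<>();
        for (SimAgent b : all) {
            if (!b.id.equals(a.id)
                    && Math.hypot(a.x - b.x, a.y - b.y) <= MAX_EDGE_KM) {
                result.add(b.id);
            }
        }
        return result;
    }
}
```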
Figure 9. Simulation network
For each load of malicious agents in the network, we have
collected the trust table (equation 3, section IV.3) of the same
PIE agent, representing its perception of its neighbouring ACEs
(Table I).
Table I. PIE perception evolution

ACE    Rep (10%)   Rep (50%)   Rep (90%)
A73    0.8         0.75        0.62
A71    0.86        0.87        0.81
A80    0.69        0.55        0.15
A45    0.72        0.98        0.76
A55    0.91        0.93        0.9
A56    0.93        0.0         0.36
A66    0.82        0.85        0.72
A32    0.8         0.81        0.44
A35    0.84        0.92        0.99
A0     0.73        0.71        0.66
As the percentage of malicious agents grows, the reputations
evolve accordingly. For instance, the reputation of ACE
"A35" grows from 0.84 to 0.99 as the percentage of malicious
agents rises from 10% to 90%.
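The paper's trust update rule (equation 3) is not reproduced in this section; as a stand-in, the drift of a PIE's perception can be illustrated with a simple exponential moving average over interaction outcomes. This is our assumption for illustration only, not the ARMAN update rule.

```java
// Illustrative reputation update: blend the current reputation with the
// outcome of the latest interaction (1.0 = positive, 0.0 = negative),
// weighted by a learning rate alpha in (0, 1).
class ReputationSketch {
    static double update(double currentRep, boolean positiveInteraction, double alpha) {
        double observed = positiveInteraction ? 1.0 : 0.0;
        return (1 - alpha) * currentRep + alpha * observed;
    }
}
```

Under such a rule, repeated positive interactions push a reputation toward 1.0 (as for A35) while repeated negative ones push it toward 0.0 (as for A56 at the 50% load).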
5. CONCLUSIONS AND FUTURE WORKS
We have elaborated an innovative version of ArchiMate® for
MAS purposes to enrich agent society collaborations and, more
particularly, the description of the agent behaviour endorsed in
the policy component, using a reputation-based trust model
termed ARMAN. To illustrate our work, a case study has been
performed in the frame of a critical infrastructure related to
electrical power distribution. This case study has allowed us to
illustrate and validate the definition of policies according to the
reaction strategy on the one hand, and depending on evolving
trust parameters amongst agents on the other hand. Finally, we
have simulated a heterogeneous network of ACE and PIE agents
running the reputation model, into which different loads of
malicious agents have been integrated.
As future work, additional validations are expected in the coming
months on larger-scale infrastructures. In parallel, a supporting
tool is being developed; the validation above relied on its primary
functionalities. Additional features of this tool will allow
modulating the environment parameters in which the agents'
network runs, and will thereby allow refining and validating the
evolution of trust-based policies in more complex situations.
6. ACKNOWLEDGEMENTS
This research is supported and funded by the European FP7-
Security project "CockpitCI", Cybersecurity on SCADA: risk
prediction, analysis and reaction tools for Critical Infrastructures.
7. REFERENCES
[1] M. Lankhorst. ArchiMate language primer, 2004.
[2] J. A. Zachman. 2003. The Zachman Framework for Enterprise
Architecture: Primer for Enterprise Engineering and Manufacturing.
[3] F. Zambonelli, N. R. Jennings, and M. Wooldridge. 2003.
Developing multiagent systems: The Gaia methodology. ACM
Trans. Softw. Eng. Methodol. 12, 3 (July 2003), 317-370.
[4] V.Torres da Silva, R. Choren, and C. J. P. de Lucena. 2004. A UML
Based Approach for Modeling and Implementing Multi-Agent
Systems. In Proceedings of the Third International Joint Conference
on Autonomous Agents and Multiagent Systems - (AAMAS '04),
Vol. 2. IEEE Computer Society, Washington, DC, USA, 914-921.
[5] G. Guemkam, C. Feltus, C. Bonhomme, P. Schmitt, B. Gâteau, D.
Khadraoui, Z. Guessoum, Financial Critical Infrastructure: A MAS
Trusted Architecture for Alert Detection and Authenticated
Transactions, Sixth IEEE Conference on Network Architecture and
Information System Security (IEEE SAR/SSI2011), France
[6] G. Guemkam, C. Feltus, C. Bonhomme, P. Schmitt, D. Khadraoui,
Z. Guessoum, Reputation based Dynamic Responsibility to Agent
for Critical Infrastructure, IEEE/WIC/ACM International
Conference on Intelligent Agent Technology, 2011, Lyon, France.
[7]
[8]
[9] C. Hahn. 2008. A domain specific modeling language for multiagent
systems. In Proceedings of the 7th international joint conference on
Autonomous agents and multiagent systems - (AAMAS '08), Vol. 1.
International Foundation for Autonomous Agents and Multiagent
Systems, Richland, SC, 233-240.
[10] AUML (Agent UML), http://www.auml.org/
[11] Prometheus Methodology.
http://www.cs.rmit.edu.au/agents/SAC2/methodology.html
[12]
[13] M. Lankhorst. ArchiMate language primer, 2004.
[14] J. A. Zachman. 2003. The Zachman Framework for Enterprise
Architecture: Primer for Enterprise Engineering and Manufacturing.
[15]
[16] UML 2 ( http://www.uml.org/)
[17]
[18]
[19] G. Guemkam, D. Khadraoui, B. Gâteau, Z. Guessoum, ARMAN:
Agent-based Reputation for Mobile Ad hoc Networks. 11th
Conference on Practical Applications of Agents and Multi-Agent
Systems, Salamanca, Spain, 22nd-24th May 2013.
[20] J. Blangenois, G. Guemkam, C. Feltus, D. Khadraoui,
Organizational Security Architecture for Critical Infrastructure,
8th International Workshop on Frontiers in Availability,
Reliability and Security (FARES 2013), an International Workshop
of the eighth International Conference ARES.