This document describes an approach to optimizing event-based programs using static analysis techniques. It aims to reduce overhead from indirect function calls and argument passing by exploiting predictability in common event sequences. The approach first profiles a program to identify frequent event patterns, then applies various compiler optimizations informed by the profile information. Examples of event-based systems that could benefit include graphical user interfaces, distributed object systems, and operating system kernels.
MICE is a tool for monitoring context evolution and updating context models at runtime. It consists of three main components: a Monitor that collects contextual data from applications, a Context Data Repository that stores the data, and a Modeling Component that retrieves the data and updates context models. The tool was demonstrated by collecting battery data from Android devices using an open-source monitor, storing it in Cosm, an open-source repository, and updating awareness manager models. Future work includes combining multiple data streams and context attributes into more complex models and integrating context and design models at runtime.
New Fault Tolerance Approach using Antecedence Graphs in Multi Agent Systems (IDES Editor)
Mobile agents are distributed programs that can move autonomously through a network to perform tasks on behalf of a user. They are susceptible to failures caused by faults in communication channels, processors, or malicious programs. To gain a solid footing at the heart of today's e-society, mobile agent technology must address the issue of fault tolerance. Checkpointing is a widely used technique for providing fault tolerance in mobile agent systems, but traditional message-passing checkpointing and rollback algorithms suffer from excess bandwidth consumption and large overheads. This paper proposes the use of antecedence graphs and message logs for maintaining agents' fault tolerance information. For checkpointing, dependent agents are identified using antecedence graphs, and only these agents take part in the checkpoint. In case of failure, the antecedence graphs and message logs are regenerated for recovery, after which normal operation resumes. The proposed scheme reports lower overheads, faster execution, and reduced recovery times compared with existing graph-based schemes.
This document summarizes various software complexity measures. It discusses code-based complexity measures like Halstead's software metrics and McCabe's cyclomatic complexity measure. It also examines cognitive complexity measures that consider human factors like KLCID, cognitive functional size, and cognitive information complexity. Code-based measures evaluate complexity based on attributes like program size, flow, and module interfaces. Cognitive measures attempt to quantify the effort required for a person to understand software based on inputs, outputs, and internal processing. The document provides definitions and formulas for calculating several complexity metrics and compares their approaches to measuring software complexity.
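As an illustration of the code-based measures summarized above, McCabe's cyclomatic complexity can be computed from a program's control-flow graph as V(G) = E - N + 2P. A minimal sketch (the graph sizes below are made-up examples, not taken from the document):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

# A hypothetical control-flow graph with 8 edges and 7 nodes
# (one connected component) has complexity 8 - 7 + 2 = 3.
print(cyclomatic_complexity(8, 7))  # → 3
```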
The document discusses linguistic structures for incorporating fault tolerance into application software. It begins by explaining that as software complexity increases, software faults have become more prevalent and impactful, necessitating fault tolerance at the application level. It then establishes a set of desirable attributes for application-level fault tolerance structures and surveys current solutions, assessing each according to these attributes. The goal is to identify shortcomings and opportunities to develop improved fault tolerance structures.
The document describes an algorithm for tolerating crash failures in distributed systems called the algorithm of mutual suspicion (AMS). AMS allows the backbone of a distributed application to tolerate failures of up to n-1 of its n components. Each node runs one agent of the backbone consisting of 3 tasks: D for the system database, I for monitoring "I'm alive" signals, and R for error recovery. The coordinator periodically sends "I'm alive" messages and assistants reply with acknowledgments. If a node does not receive the expected messages, it enters a suspicion period and may deduce a crashed component and initiate recovery actions.
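The "I'm alive" monitoring at the heart of AMS can be sketched as a watchdog that enters a suspicion period when an expected heartbeat is missed. A minimal, single-process sketch (the class name and timing constants are illustrative, not from the paper):

```python
import time

class HeartbeatMonitor:
    """Sketch of task I of an AMS agent: watch for "I'm alive" messages."""

    def __init__(self, period=1.0, suspicion_window=2.5):
        self.period = period                    # expected heartbeat interval (s)
        self.suspicion_window = suspicion_window  # silence before deducing a crash
        self.last_seen = time.monotonic()

    def on_heartbeat(self):
        """Record receipt of an "I'm alive" message."""
        self.last_seen = time.monotonic()

    def status(self):
        """Return 'alive', 'suspected', or 'crashed'."""
        silence = time.monotonic() - self.last_seen
        if silence <= self.period:
            return "alive"
        if silence <= self.suspicion_window:
            return "suspected"   # suspicion period: wait before concluding failure
        return "crashed"         # deduce a crashed component, start recovery (task R)
```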
1. The document presents a failure recovery scheme for mobile computing systems based on checkpointing and handoff count. It proposes taking checkpoints when the handoff count exceeds a threshold or the distance between mobile support stations exceeds a threshold.
2. The scheme aims to optimize both failure-free operation costs and failure recovery costs. Checkpoints are taken to minimize the amount of work lost after failures while limiting overhead during normal operation.
3. By delimiting the number of mobile support stations and distance from which recovery information needs to be collected, the proposed scheme aims to reduce recovery time and costs. Taking checkpoints based on handoff count and distance thresholds helps organize recovery information to facilitate efficient failure recovery.
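The checkpointing decision described above can be sketched as a simple predicate: take a checkpoint whenever the handoff count or the distance from the last checkpoint crosses its threshold. A sketch with illustrative threshold values (not taken from the paper):

```python
class CheckpointPolicy:
    """Checkpoint when handoff count or MSS distance exceeds a threshold."""

    def __init__(self, handoff_threshold=5, distance_threshold=3):
        self.handoff_threshold = handoff_threshold
        self.distance_threshold = distance_threshold
        self.handoffs = 0

    def on_handoff(self, distance_from_last_checkpoint):
        """Called on each handoff; returns True when a checkpoint is due."""
        self.handoffs += 1
        if (self.handoffs >= self.handoff_threshold
                or distance_from_last_checkpoint >= self.distance_threshold):
            self.handoffs = 0   # checkpoint taken, reset the count
            return True
        return False
```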
Karl Marx proposed that the economy determines human history and that the capitalist economic structure must be transformed. According to Marx, the economy is the determining factor of society, while the state, political power, and law are elements of the superstructure determined by the economic base. Marx also believed that the state and political power are instruments of control and exploitation by the ruling class, with the law legitimizing its dominance.
The document discusses a framework for a self-healing module (SHM) to automate response to failures in a virtual manufacturing execution system (vMES). The SHM would detect failures, determine resolutions, and enact resolutions without human intervention. This would improve productivity by automating error recovery. The SHM framework uses event listeners, triggers, and actions. Listeners detect events, triggers determine responses, and actions enact those responses, such as restarting processes, migrating virtual machines, or adjusting database settings. The goal is to automate operations and improve response time to failures in virtual manufacturing environments.
An event-driven architecture consists of event producers that generate event streams and event consumers that listen for events. It allows for loose coupling between components and asynchronous event handling. Key aspects include publish/subscribe messaging patterns, event processing by middleware, and real-time or near real-time information flow. Benefits include scalability, loose coupling, fault tolerance, and the ability to add new consumers easily. Challenges include guaranteed delivery, processing events in order or exactly once across multiple consumer instances. Common tools used include Apache Kafka, Apache ActiveMQ, Redis, and Apache Pulsar.
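The producer/consumer decoupling described above can be illustrated with a minimal in-process publish/subscribe bus (real deployments would use a broker such as Kafka or Pulsar; the class below is a toy sketch):

```python
from collections import defaultdict

class EventBus:
    """Toy publish/subscribe bus: producers and consumers never meet directly."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Loose coupling: the producer only knows the topic name.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("orders", received.append)   # a new consumer is added trivially
bus.publish("orders", {"id": 1, "total": 9.99})
print(received)  # → [{'id': 1, 'total': 9.99}]
```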
This document discusses using an intelligent systems approach for cloud forensics. It proposes using agent-based modeling with functional programming to process forensic data from cloud systems in parallel. The agents would update "pheromone levels" to indicate the suspiciousness of data entries and cooperate to process large amounts of log data from cloud systems. The approach aims to handle the large volumes of data in clouds using lightweight, reactive agents programmed with a functional style in F#.
Event-driven Infrastructure - Mike Place, SaltStack - DevOpsDays Tel Aviv 2016 (DevOpsDays Tel Aviv)
"As we move into the age of containerization, it becomes more important than ever to figure out how to automate, monitor and deploy systems which are resilient and well-understood.
In this talk, we'll discuss methods for building infrastructures with universal event buses and reactive systems which can act as a nervous system for our computing environments."
The document presents a framework for monitoring service systems from a language-action perspective. It formalizes interaction protocols, policies, and commitments using communicative acts to specify goals and monitors at multiple abstraction levels. This allows events from low-level message exchanges to be analyzed to understand compliance with higher-level policies. The framework is demonstrated with purchase order scenarios and is shown to provide effective monitoring of service interactions and processes.
The document discusses Grid Computing, which uses distributed computing resources like computer clusters connected via high-speed networks to provide high computational power. It describes the Globus Toolkit, an open-source software toolkit that provides basic services for building Grids. Key components of the Globus Toolkit allow for resource management, security, data management, and communication. The document also discusses parallel programming using MPI (Message Passing Interface) and potential applications of Grid Computing such as distributed supercomputing, real-time systems, and data-intensive processing.
WSO2 Complex Event Processor (WSO2 CEP) helps identify the most meaningful events and patterns from multiple data sources, analyze their impacts, and act on them in real time. 100% open source, it allows you to set up a more agile, connected business, responding to urgent business situations with both speed and precision.
Truly dependable software systems should be built with structuring techniques able to decompose the software complexity without hiding important hypotheses and assumptions, such as those regarding their target execution environment and the expected fault and system models. A judicious assessment of what can be made transparent and what should be translucent is necessary. This paper discusses a practical example of a structuring technique built with these principles in mind: reflective and refractive variables. We show that our technique offers an acceptable degree of separation of the design concerns, with limited code intrusion; at the same time, by construction, it separates but does not hide the complexity required for managing fault tolerance. In particular, our technique offers access to collected system-wide information and the knowledge extracted from that information. This can be used to devise architectures that minimize the hazard of a mismatch between dependable software and the target execution environments.
The Indo-American Journal of Agricultural and Veterinary Sciences is an online international journal published quarterly. It is a peer-reviewed journal that focuses on disseminating high-quality original research work, reviews, and short communications.
This document describes modeling and verifying a telecommunication application called Depannage using Live Sequence Charts (LSCs) and the Play-Engine tool. Depannage allows users to call for help from emergency services. The application is complex due to its distributed architecture, time constraints, and evolving components.
The authors specify Depannage using LSCs to describe component behaviors and interactions. Key components include Search, which locates emergency providers, and Users, which models user states. Universal LSCs define mandatory component behaviors, while existential LSCs define possible behaviors. The Play-Engine tool is used to simulate, animate, and formally verify the LSC specification.
The methodology captures requirements at a high level using LSCs.
Seeing OS Processes To Improve Dependability And Safety (alanocu)
This document proposes and evaluates a sealed process architecture as an alternative to the traditional open process architecture used in most modern operating systems. The key aspects of a sealed process architecture are:
1. Code within a process cannot change once execution begins (fixed code invariant).
2. A process's state cannot be directly accessed by other processes (state isolation invariant).
3. All communication between processes is explicit, with sender and receiver control (explicit communication invariant).
4. The kernel API respects the above invariants and does not allow them to be subverted (closed API invariant).
The document describes an implementation of sealed processes in the Singularity operating system and presents preliminary benchmarks showing competitive performance compared to open process architectures.
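The state isolation and explicit communication invariants above can be illustrated with a toy channel abstraction in which process state is never directly reachable from outside (a conceptual sketch, not Singularity's actual API):

```python
from queue import Queue

class Channel:
    """Explicit communication invariant: the only way data crosses
    a process boundary is an explicit send/receive pair."""

    def __init__(self):
        self._q = Queue()

    def send(self, message):
        self._q.put(message)

    def receive(self):
        return self._q.get()

class SealedProcess:
    """State isolation invariant: all state is private to the process;
    other processes hold only a Channel endpoint, never a reference in."""

    def __init__(self, channel):
        self._channel = channel
        self._state = {}          # never handed out to other processes

    def step(self):
        """Consume one message and fold it into private state."""
        key, value = self._channel.receive()
        self._state[key] = value
        return len(self._state)
```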
Formal Specification for Implementing Atomic Read/Write Shared Memory in Mobile Ad Hoc Networks (ijcsit)
The Geoquorum approach implements atomic read/write shared memory in mobile ad hoc networks. This classic problem in distributed computing is revisited in the new setting provided by emerging mobile computing technology. A simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal requirements and design strategies to the new field of mobile computing. The approach is based on well-understood techniques in specification refinement, but the methodology is tailored to mobile applications and helps designers address novel concerns such as logical mobility. The proof logic and programming notation of Mobile UNITY provide the intellectual tools required to carry out this task. Quorum systems are also investigated in highly mobile networks in order to reduce the communication cost associated with each distributed operation.
Dynamic RWX ACM Model Optimizing the Risk on Real Time Unix File System (Radita Apriana)
Preventive control is one of the most advanced controls in modern security for protecting data and services from uncertainty, as the growing importance of business and communication technologies has made external risk a common phenomenon. System security risks draw management's focus to the IT infrastructure (OS). Top management must decide whether to accept expected losses or to invest in technical security mechanisms that minimize the frequency of attacks, thefts, and uncertainty. This work contributes an optimization model that aims to determine the optimal cost to be invested in security mechanisms, deciding on the measured components of UFS attributes. The model is designed so that Read, Write & Execute permissions are automatically protected, detected, and corrected on an RTOS. System attacks and downtime are minimized by implementing an RWX ACM mechanism based on a semi-group structure, while improving the throughput of business, resources, and technology.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
The real time publisher subscriber inter-process communication model for distributed real-time systems (yancha1973)
1) The document proposes a real-time publisher/subscriber model for inter-process communication in distributed real-time systems.
2) In the model, processes publish and subscribe to messages using logical handles called distribution tags, without knowledge of senders/receivers.
3) An application programming interface is presented that allows processes to create/destroy tags, publish/receive messages, and query senders/receivers.
4) The model is fault-tolerant, supporting applications like clock synchronization across nodes and allowing processes to be upgraded online.
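The distribution-tag indirection described in points 2) and 3) can be sketched as follows (the function names are illustrative, not the paper's actual API):

```python
class DistributionTagRegistry:
    """Sketch of distribution tags: publishers and subscribers know
    only the logical handle (tag), never each other."""

    def __init__(self):
        self._tags = {}

    def create_tag(self, name):
        self._tags[name] = []

    def destroy_tag(self, name):
        del self._tags[name]

    def subscribe(self, name, mailbox):
        self._tags[name].append(mailbox)

    def publish(self, name, message):
        # Delivery is by logical handle; senders need no receiver addresses.
        for mailbox in self._tags[name]:
            mailbox.append(message)

registry = DistributionTagRegistry()
registry.create_tag("clock_sync")
inbox = []
registry.subscribe("clock_sync", inbox)
registry.publish("clock_sync", "tick@T0")
print(inbox)  # → ['tick@T0']
```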
Dynamically Adapting Software Components for the Grid (Editor IJCATR)
The emergence of dynamic execution environments such as grids forces scientific applications to embrace dynamicity. Dynamic adaptation of components in Grid computing is a critical issue in designing a framework for dynamic adaptation toward self-adaptable software components for the grid. This paper presents the systematic design of a dynamic adaptation framework together with an effective implementation of the structure of an adaptable component, i.e., incorporating a layered architecture environment with the concept of dynamicity.
Architecture for Intrusion Detection System with Fault Tolerance Using Mobile Agents (IJNSA Journal)
This paper proposes a fault tolerant distributed intrusion detection system architecture that uses mobile agents. The architecture includes a mobile agent platform (MAP) that provides an execution environment for mobile agents (MAs) to run specialized monitoring tasks. When the main IDS server goes down, the MAP on backup client hosts can collectively take over server responsibilities according to priority. MAs dispatch from the MAP to detect intrusions by invoking filter, correlator and interpreter agents. The paper outlines this architecture and discusses directions for implementing an IDS using the MAP, MAs, detection engines and XML data storage. This approach aims to improve scalability, platform independence and reliability over other IDS models.
The document discusses a framework for a self-healing module (SHM) to automate response to failures in a virtual manufacturing execution system (vMES). The SHM would detect failures, determine resolutions, and enact resolutions without human intervention. This would improve productivity by automating error recovery. The SHM framework uses event listeners, triggers, and actions. Listeners detect events, triggers determine responses, and actions enact those responses, such as restarting processes, migrating virtual machines, or adjusting database settings. The goal is to automate operations and improve response time to failures in virtual manufacturing environments.
An event-driven architecture consists of event producers that generate event streams and event consumers that listen for events. It allows for loose coupling between components and asynchronous event handling. Key aspects include publish/subscribe messaging patterns, event processing by middleware, and real-time or near real-time information flow. Benefits include scalability, loose coupling, fault tolerance, and the ability to add new consumers easily. Challenges include guaranteed delivery, processing events in order or exactly once across multiple consumer instances. Common tools used include Apache Kafka, Apache ActiveMQ, Redis, and Apache Pulsar.
This document discusses using an intelligent systems approach for cloud forensics. It proposes using agent-based modeling with functional programming to process forensic data from cloud systems in parallel. The agents would update "pheromone levels" to indicate the suspiciousness of data entries and cooperate to process large amounts of log data from cloud systems. The approach aims to handle the large volumes of data in clouds using lightweight, reactive agents programmed with a functional style in F#.
Event-driven Infrastructure - Mike Place, SaltStack - DevOpsDays Tel Aviv 2016DevOpsDays Tel Aviv
"As we move into the age of containerization, it becomes more important than ever to figure out how to automate, monitor and deploy systems which are resilient and well-understood.
In this talk, we'll discuss methods for building infrastructures with universal event buses and reactive systems which can act as a nervous system for our computing environments."
The document presents a framework for monitoring service systems from a language-action perspective. It formalizes interaction protocols, policies, and commitments using communicative acts to specify goals and monitors at multiple abstraction levels. This allows events from low-level message exchanges to be analyzed to understand compliance with higher-level policies. The framework is demonstrated with purchase order scenarios and is shown to provide effective monitoring of service interactions and processes.
The document discusses Grid Computing, which uses distributed computing resources like computer clusters connected via high-speed networks to provide high computational power. It describes the Globus Toolkit, an open-source software toolkit that provides basic services for building Grids. Key components of the Globus Toolkit allow for resource management, security, data management, and communication. The document also discusses parallel programming using MPI (Message Passing Interface) and potential applications of Grid Computing such as distributed supercomputing, real-time systems, and data-intensive processing.
WSO2 Complex Event Processor (WSO2 CEP) helps identify the most meaningful events and patterns from multiple data sources, analyze their impacts, and act on them in real time. 100% open source, it allows you a set up a more agile connected business, responding to urgent business situations with both speed and precision.
Truly dependable software systems should be built with structuring techniques able to decompose the software complexity without
hiding important hypotheses and assumptions such as those regarding
their target execution environment and the expected fault- and system
models. A judicious assessment of what can be made transparent and
what should be translucent is necessary. This paper discusses a practical
example of a structuring technique built with these principles in mind:
Reflective and refractive variables. We show that our technique offers
an acceptable degree of separation of the design concerns, with limited
code intrusion; at the same time, by construction, it separates but does
not hide the complexity required for managing fault-tolerance. In particular, our technique offers access to collected system-wide information
and the knowledge extracted from that information. This can be used
to devise architectures that minimize the hazard of a mismatch between
dependable software and the target execution environments.
The Indo-American Journal of Agricultural and Veterinary Sciences is an online international journal published quarterly. It is a peer-reviewed journal that focuses on disseminating high-quality original research work, reviews, and short communications of the publishable paper.
This document describes modeling and verifying a telecommunication application called Depannage using Live Sequence Charts (LSCs) and the Play-Engine tool. Depannage allows users to call for help from emergency services. The application is complex due to its distributed architecture, time constraints, and evolving components.
The authors specify Depannage using LSCs to describe component behaviors and interactions. Key components include Search, which locates emergency providers, and Users, which models user states. Universal LSCs define mandatory component behaviors, while existential LSCs define possible behaviors. The Play-Engine tool is used to simulate, animate, and formally verify the LSC specification.
The methodology captures requirements at a high level using
Seeing O S Processes To Improve Dependability And Safetyalanocu
This document proposes and evaluates a sealed process architecture as an alternative to the traditional open process architecture used in most modern operating systems. The key aspects of a sealed process architecture are:
1. Code within a process cannot change once execution begins (fixed code invariant).
2. A process's state cannot be directly accessed by other processes (state isolation invariant).
3. All communication between processes is explicit, with sender and receiver control (explicit communication invariant).
4. The kernel API respects the above invariants and does not allow them to be subverted (closed API invariant).
The document describes an implementation of sealed processes in the Singularity operating system and presents preliminary benchmarks showing competitive performance compared to open
The Geoquorum approach for implementing atomic read/write shaved memory in mobile ad hoc networks. This
problem in distributed computing is revisited in the new setting provided by the emerging mobile computing technology. A
simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal
requirements and design strategies to the new field of mobile computing. The approach of this paper is based on well
understood techniques in specification refinement, but the methodology is tailored to mobile applications and help designers
address novel concerns such as logical mobility, the invocations, specific conditions constructs
Formal Specification for Implementing Atomic Read/Write Shared Memory in Mobi...ijcsit
The Geoquorum approach for implementing atomic read/write shaved memory in mobile ad hoc networks. This
problem in distributed computing is revisited in the new setting provided by the emerging mobile computing technology. A
simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal
requirements and design strategies to the new field of mobile computing. The approach of this paper is based on well
understood techniques in specification refinement, but the methodology is tailored to mobile applications and help designers
address novel concerns such as logical mobility, the invocations, specific conditions constructs. The proof logic and
programming notation of mobile UNITY provide the intellectual tools required to carryout this task. Also, the quorum
systems are investigated in highly mobile networks in order to reduce the communication cost associated with each distributed
operation.
Dynamic RWX ACM Model Optimizing the Risk on Real Time Unix File SystemRadita Apriana
Preventive control is one of the most advanced controls in recent security practice for protecting data
and services from uncertainty, since the growing importance of business and communication
technologies, together with rising external risk, is a very common phenomenon nowadays. System
security risks push management to focus on the IT infrastructure (OS). Top management has to
decide whether to accept expected losses or to invest in technical security mechanisms in order to
minimize the frequency of attacks and thefts as well as uncertainty. This work contributes to the development
of an optimization model that aims to determine the optimal cost to be invested in security mechanisms,
deciding on the measured component of the UFS attributes. The model is designed so that Read,
Write & Execute permissions are automatically Protected, Detected and Corrected on an RTOS. We optimize against
system attacks and downtime by implementing an RWX ACM mechanism based on a semi-group structure,
meanwhile improving the throughput of Business, Resources & Technology.
The real time publisher subscriber inter-process communication model for dist...yancha1973
1) The document proposes a real-time publisher/subscriber model for inter-process communication in distributed real-time systems.
2) In the model, processes publish and subscribe to messages using logical handles called distribution tags, without knowledge of senders/receivers.
3) An application programming interface is presented that allows processes to create/destroy tags, publish/receive messages, and query senders/receivers.
4) The model is fault-tolerant, supporting applications like clock synchronization across nodes and allowing processes to be upgraded online.
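The distribution-tag model summarized above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's API: the class and method names (RtPubSub, create_tag, subscribe, publish) are invented for the example.

```python
# Hypothetical sketch of publish/subscribe via logical "distribution tags":
# publishers and subscribers know only the tag, never each other.
from collections import defaultdict

class RtPubSub:
    def __init__(self):
        self._subs = defaultdict(list)   # distribution tag -> subscriber callbacks

    def create_tag(self, tag):
        self._subs.setdefault(tag, [])

    def destroy_tag(self, tag):
        self._subs.pop(tag, None)

    def subscribe(self, tag, callback):
        # Subscribers register against a logical tag, not a sender address.
        self._subs[tag].append(callback)

    def publish(self, tag, message):
        # The publisher needs no knowledge of who (if anyone) receives it.
        for cb in self._subs.get(tag, []):
            cb(message)

# Usage: a clock-sync message fans out to whatever nodes subscribed to the tag.
bus = RtPubSub()
ticks = []
bus.create_tag("clock_sync")
bus.subscribe("clock_sync", ticks.append)
bus.publish("clock_sync", {"t": 1000})
```

Because senders and receivers are decoupled by the tag, a subscriber process can be taken down and upgraded without the publisher changing, which is the fault-tolerance property the document highlights.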
Dynamically Adapting Software Components for the GridEditor IJCATR
The surfacing of dynamic execution environments such as "grids" forces scientific applications to embrace dynamicity. Dynamic
adaptation of grid components in grid computing is a critical issue for the design of a framework for dynamic adaptation towards
self-adaptable software components for the grid. This paper carries out a systematic design of a dynamic adaptation
framework with an effective implementation of the structure of an adaptable component, i.e., incorporating the layered architecture
environment with the concept of dynamicity.
Architecture for Intrusion Detection System with Fault Tolerance Using Mobile...IJNSA Journal
This paper proposes a fault tolerant distributed intrusion detection system architecture that uses mobile agents. The architecture includes a mobile agent platform (MAP) that provides an execution environment for mobile agents (MAs) to run specialized monitoring tasks. When the main IDS server goes down, the MAP on backup client hosts can collectively take over server responsibilities according to priority. MAs dispatch from the MAP to detect intrusions by invoking filter, correlator and interpreter agents. The paper outlines this architecture and discusses directions for implementing an IDS using the MAP, MAs, detection engines and XML data storage. This approach aims to improve scalability, platform independence and reliability over other IDS models.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Profile-Directed Optimization of Event-Based Programs

Mohan Rajagopalan, Saumya K. Debray
Department of Computer Science
University of Arizona
Tucson, AZ 85721, USA
{mohan, debray}@cs.arizona.edu

Matti A. Hiltunen, Richard D. Schlichting
AT&T Labs-Research
180 Park Avenue
Florham Park, NJ 07932, USA
{hiltunen, rick}@research.att.com
ABSTRACT

Events are used as a fundamental abstraction in programs ranging from graphical user interfaces (GUIs) to systems for building customized network protocols. While providing a flexible structuring and execution paradigm, events have the potentially serious drawback of extra execution overhead due to the indirection between modules that raise events and those that handle them. This paper describes an approach to addressing this issue using static optimization techniques. This approach, which exploits the underlying predictability often exhibited by event-based programs, is based on first profiling the program to identify commonly occurring event sequences. A variety of techniques that use the resulting profile information are then applied to the program to reduce the overheads associated with such mechanisms as indirect function calls and argument passing.

Events structure user interaction code in GUI systems [8, 18], form the basis for configurability in systems to build customized distributed services and network protocols [4, 9, 16], are the paradigm used for asynchronous notification in distributed object systems [19], and are advocated as an alternative to threads in web servers and other types of system code [20, 23]. Even operating system kernels can be viewed as event-based systems, with the occurrence of interrupts and system calls being events that drive execution.

The rationale behind using events is multifaceted. Events are asynchronous, which is a natural match for the reactive execution behavior of GUIs and operating systems. Events also allow the modules raising events to be decoupled from those fielding the events, thereby improving configurability. In short, event-based programming is generally more flexible and can often be used to realize richer execution semantics than traditional procedural or ...
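The profiling step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name and the event trace are invented for the example.

```python
# Sketch of profile-directed analysis: record the stream of raised events
# and count consecutive event pairs, so that frequently occurring sequences
# become candidates for optimization (e.g., subsumption or handler merging).
from collections import Counter

def profile_pairs(event_trace):
    """Count adjacent (event, next_event) pairs in a recorded trace."""
    return Counter(zip(event_trace, event_trace[1:]))

trace = ["SegFromUser", "Seg2Net", "SegFromUser", "Seg2Net", "ClockTick"]
pairs = profile_pairs(trace)
# The hot pair ("SegFromUser", "Seg2Net") dominates this toy trace and
# would be a candidate for the optimizations described later in the paper.
```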
Components

A handler may be bound to more than one event. An event is ignored if no handlers are bound to the event. The execution order of multiple handlers bound to the same event may be important. Bindings may be static, i.e., remain the same throughout the execution of the program, or dynamic, i.e., change at runtime. Figure 1 illustrates bindings.

[Figure 1: Event bindings — events A through D mapped to handlers 1 through 5]

Bindings are maintained in a registry that maps each event to a list of handlers. The registry may be implemented as a shared ...
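Such a registry can be sketched in a few lines. This is a Python illustration standing in for the C-level runtime the paper targets; the method names here are assumptions, not the paper's API.

```python
# Sketch of an event registry: each event maps to an ordered list of bound
# handlers, and raising an event walks the list via indirect calls -- the
# source of the dispatch overhead the paper sets out to reduce.
class EventRegistry:
    def __init__(self):
        self._bindings = {}          # event name -> ordered handler list

    def bind(self, event, handler):
        self._bindings.setdefault(event, []).append(handler)

    def unbind(self, event, handler):
        self._bindings.get(event, []).remove(handler)

    def raise_event(self, event, *args):
        # An event with no bound handlers is simply ignored.
        for handler in self._bindings.get(event, []):
            handler(*args)           # indirect call per bound handler

reg = EventRegistry()
log = []
reg.bind("EventA", lambda x: log.append(("h1", x)))
reg.bind("EventA", lambda x: log.append(("h2", x)))   # one event, two handlers
reg.raise_event("EventA", 42)
reg.raise_event("EventB", 0)       # ignored: no handlers bound
```

Note that the handlers run in binding order, matching the observation above that the execution order of multiple handlers bound to the same event may matter.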
A typical composite protocol consists of two or more event handlers. Events in Cactus are user-defined. A typical composite protocol uses 10-20 different events, consisting of a few external events caused by interactions with software outside the composite protocol and numerous internal events used to structure the internal processing of a message or service request. Each event typically has multiple event handlers. As a result, Cactus composite protocols often have long chains of events and event handlers activated by one event. Section 4 gives concrete examples of events used in a Cactus composite protocol.

The Cactus runtime system provides a variety of operations for managing events and event handlers. In particular, operations are provided for binding an event handler to a specified event (bind) and for activating an event (raise). Event handler binding is completely dynamic. Events can be raised either synchronously or asynchronously, and an event can also be raised with a specified delay to implement time-driven execution. The order of event handler execution can be specified if desired. Arguments can be passed to handlers in both the bind and raise operations. Other operations are available for unbinding handlers, creating and deleting events, halting event execution, and canceling a delayed event. Handler execution is atomic with respect to concurrency, i.e., a handler is executed to completion before any other handler is started unless it voluntarily yields the CPU. Cactus does not directly support complex events, but such events can be implemented by defining a new event and having a micro-protocol raise this event when the conditions for the complex event are satisfied.

[Figure 2: Cactus composite protocol — micro-protocols such as DESPrivacy, KeyedMD5Integrity, RSAAuthenticity, and ClientKeyDistribution bound to events such as msgFromAbove, msgFromBelow, openSession, and keyMiss, between the Top API and Bottom API]

The X Window system. X is a popular GUI framework for Unix systems. The standard architecture of an X based system is shown in figure 3. The X server is a program that runs on each system supporting a graphics display and is responsible for managing device drivers. Application programs, also called X clients, may be local or remote to the display system. X servers and X clients use the X-protocol for communication. X clients are typically built over the Xlib libraries using toolkits such as Xt, GTK, or Qt. X clients are implemented as a collection of widgets, which are the building blocks of X applications.

[Figure 3: Architecture of X Window systems — X client applications built on a toolkit (Xt, GTK, Qt) over Xlib, communicating via the X protocol with the X server, which manages devices through device drivers]

An X event is defined as "a packet of data sent by the server to the client in response to user behavior or to window system changes resulting from interactions between windows" [18]. Examples of X events include mouse motion, focus change, and button press. These events are recognized through device drivers and relayed to the X server, which in turn conveys them to X clients. The X framework specifies 33 basic events. X clients may choose to respond to any of these based on event masks that are specified at bind time. Events are also used for communication between widgets. Events can arrive in any order and are queued by the X client. Event activation in X is similar to synchronous activation in the general model.

The X architecture has three mechanisms for handling events: event handlers, callback functions, and action procedures. All three map to handlers in the general model and are used to specify different granularities of control. Event handlers, the most primitive, are simply procedures bound to event names. Callback functions and action procedures are more commonly used high-level abstractions. In addition to these three, X has a number of other mechanisms that can be broadly classified as event handling, namely timeouts, signal handlers, and input handlers. Each of these mechanisms allows the program to specify a procedure to be called when a given condition occurs. For all these handler types, X provides operations for registering the handlers and activating them.

3. OPTIMIZATION APPROACH

Compiler optimizations are based on being able to statically predict aspects of a program's runtime behavior using either invariants that always hold at runtime (i.e., based on dataflow analysis) or assertions that are likely to hold (i.e., based on execution profiles). Event-based systems, in contrast, are largely unpredictable in their event runtime behavior due to the uncertainties associated with ... bound to the event at that time results in a correct transformation. Similarly, it is easy to see that sequences of nested synchronous activations can be readily optimized. The specific optimization techniques and their limitations are discussed below in section 3.
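The Cactus operations described above — synchronous raise versus delayed, time-driven raise — can be sketched as follows. This is a hypothetical illustration, not the Cactus API; the class and method names are invented for the example.

```python
# Sketch of synchronous vs. delayed event activation: a synchronous raise
# runs every bound handler immediately and to completion; a delayed raise
# is queued with a due time and dispatched when the clock passes it.
import heapq

class Runtime:
    def __init__(self):
        self.bindings = {}
        self.pending = []            # heap of (due_time, seq, event, args)
        self.now = 0
        self._seq = 0                # tie-breaker keeps FIFO order for equal times

    def bind(self, event, handler):
        self.bindings.setdefault(event, []).append(handler)

    def raise_sync(self, event, *args):
        for h in self.bindings.get(event, []):
            h(*args)                 # atomic: runs to completion before the next

    def raise_delayed(self, event, delay, *args):
        self._seq += 1
        heapq.heappush(self.pending, (self.now + delay, self._seq, event, args))

    def advance(self, dt):
        self.now += dt
        while self.pending and self.pending[0][0] <= self.now:
            _, _, event, args = heapq.heappop(self.pending)
            self.raise_sync(event, *args)

rt = Runtime()
fired = []
rt.bind("timeout", fired.append)
rt.raise_delayed("timeout", 5, "expired")
rt.advance(3)        # before the deadline: nothing fires
rt.advance(3)        # past the deadline: the handler runs
```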
Graph Optimizations: Event Chains and Subsumption

[Figure 6: Reduced event graph for the video player (threshold = 300), showing events such as MsgFromUserL/MsgFromUserH, SegFromUser, ControllerFiring, Seg2Net, and ControllerClkH/ControllerClkL, with edge counts including 91, 93, 310, 391, 392, and 479]

... from within a handler for SegFromUser, the latter will wait until the handling of Seg2Net has been completed, at which point control will return to the handler for SegFromUser. In this case, the handler for Seg2Net can be subsumed into that for SegFromUser, thereby eliminating the synchronous raise event between them.

Handler Merging

[Figure 7: Handler merging — Event A bound to Handler1 {H1_code}, Handler2 {H2_code}, and Handler3 {H3_code} is rebound to a single merged Handler123 {H1_code; H2_code; H3_code}]

[Figure 8: Subsuming events]

The event graph for a video player application using a configurable transport protocol called CTP ... each raise translates into a sequence of indirect function calls. There are two ...
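The handler-merging transformation of Figure 7 can be sketched as follows; the helper name and handler bodies are illustrative, not the paper's code.

```python
# Sketch of handler merging: three handlers bound to Event A are replaced by
# one merged handler whose body runs their bodies in sequence, so the registry
# makes one indirect call per raise instead of three.
def h1(msg): msg.append("H1_code")
def h2(msg): msg.append("H2_code")
def h3(msg): msg.append("H3_code")

def merge_handlers(handlers):
    """Build a single handler that runs the given handlers in order."""
    def merged(*args):
        for h in handlers:
            h(*args)
    return merged

handler123 = merge_handlers([h1, h2, h3])
out = []
handler123(out)       # the registry's single indirect call for Event A
```

In the compiled setting the paper describes, the merged bodies can additionally be inlined into one procedure, which opens up further standard compiler optimizations; the dispatch loop above merely stands in for that.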
Experimental Results

Figure 10: Video player optimization results.

              Total Execution Time (sec)     Event Handler Time (sec)
Frame rate    Orig.    Opt.    (%)           Orig.    Opt.    (%)
10            43.1     41.9    97.2          2.3      0.9     39.1
15            30.9     30.3    98.0          1.6      0.6     37.5
20            24.5     22.1    90.2          1.5      0.5     33.3
25            23.9     21.3    89.1          1.5      0.5     33.3

Key: Orig: Original program; Opt: Optimized program.

Figure 11: Event processing times in the video player.

Event          Processing Time (usec): Orig.    Opt.    Speedup (%)
Adapt          55                               11      80.0
SegFromUser    346                              41      88.2
Seg2Net        137                              37      73.0

Figure 12: Impact of optimization ...

        Push time (usec)            Pop time (usec)
Size    Orig.    Opt.    (%)        Orig.    Opt.    (%)
64      274      241     88.0       397      378     95.2
128     287      263     91.6       460      448     97.4
256     304      273     89.8       484      457     94.4
512     336      299     89.0       494      470     95.1
1024    430      373     86.7       608      570     93.8
2048    572      552     96.5       1016     893     87.9

The impact of optimization on overall execution time becomes more pronounced as the frame rate increases. The reason is that when the frame rate is low, the CPU is idle a large part of the time. As a result, the unoptimized program can simply use a bit more of the idle time to keep up with the required frame rate. However, when the frame rate increases, both programs must do more work in a time unit and the idle time decreases. When the frame rate becomes high enough, the unoptimized program runs out of extra idle time and starts falling behind the optimized program. This indicates that our optimizations are especially effective for mobile systems such as handheld PDAs that tend to have less powerful processors than desktop systems.

The time for the push portion is reduced markedly in most cases, with improvements of up to 13.3%. The improvements in the pop portion are also noticeable although not as high as for the push portion, typically around 5% but going as high as 12%.

An examination of the effects of our optimizations on these two programs indicates two main sources of benefits: the reduction of argument marshaling overhead when invoking event handlers, and handler merging that leads to a reduction in the number of handler invocations. The elimination of marshaling overhead seems to have the largest effect on the overall performance improvements achieved. The main effect of handler merging is to reduce the number of function calls between handlers that are executed in sequence. Merging also creates opportunities for additional code improvements due to standard compiler optimizations.

Code in event handlers is usually a small fraction of the total program size. To measure the effects of our optimization on code size, we counted the number of instructions in the original and optimized programs using the command objdump -d | wc -l. Our optimizations produce a code size increase of 1.3% for the video player and 1.1% for SecComm.

Because event bindings are dynamic, the optimized code is guarded by a check:

    if event binding for event B changed
        call(original code for event B);
    else
        merged, inlined, and optimized code for event B;

Figure 13: Optimization of X events.

Type      Execution Time (usec): Orig.    Opt.    (%)
Scroll    158                             148     93.7
Popup     37                              31      83.8

These optimizations reduce the time for Popup by over 16%. They were applied here at the level of action handlers, although it would be possible to optimize one step further by opening up callbacks in the same way. The optimizations were performed using the Athena widget family based on Xt and Xlib provided with XFree86. The Athena toolkit is a minimal toolkit with limited configurability, and therefore provided limited scope for applying our optimizations. The event model in more recent (and popular) toolkits such as Gnome ...

SecComm is a configurable secure communication service that allows customization of security attributes for a communication connection, including privacy, authenticity, integrity, and non-repudiation. One of the features of SecComm is its support for implementing a security property using combinations of basic security micro-protocols. We optimized a configuration of SecComm with three micro-protocols, two of which encrypt the message body ...
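The binding-change guard shown in the pseudocode above can be made concrete as follows. This is a sketch under assumptions: the class name, the snapshot mechanism, and the fallback dispatch are invented for the example, standing in for the compiled guard the paper describes.

```python
# Sketch of guarded specialization: the merged/inlined code is only valid for
# the bindings seen at optimization time, so each raise first checks whether
# the bindings for the event changed and falls back to the original path.
class GuardedDispatch:
    def __init__(self, registry, event, optimized):
        self.registry = registry
        self.event = event
        self.optimized = optimized
        # Snapshot of the handler list the optimized code was specialized for.
        self.snapshot = tuple(registry.get(event, []))

    def raise_event(self, *args):
        current = tuple(self.registry.get(self.event, []))
        if current != self.snapshot:
            for h in current:          # bindings changed: original dispatch path
                h(*args)
        else:
            self.optimized(*args)      # fast path: specialized code

registry = {"B": []}
calls = []
registry["B"].append(lambda: calls.append("original_h"))
gd = GuardedDispatch(registry, "B", lambda: calls.append("optimized_B"))
gd.raise_event()                        # bindings unchanged: fast path
registry["B"].append(lambda: calls.append("new_h"))
gd.raise_event()                        # guard fails: original dispatch runs
```

The guard makes the specialization safe under fully dynamic binding: correctness never depends on the profile being right, only performance does.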