This document discusses developing autonomic computer vision systems through self-monitoring, self-regulation, self-repair, and self-description. It proposes a software model for tracking-based perceptual components with these autonomic properties. The components include modules for target prediction, detection, update, and recognition, controlled by a supervisor that monitors performance and adjusts parameters to maintain quality of service. The supervisor can also respond to queries to describe the component's state and capabilities.
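A minimal sketch of how such a supervised component might be organized; the class names, the single quality metric, and the threshold-adjustment rule are illustrative assumptions, not the paper's API:

```python
# Illustrative sketch: a tracking component whose supervisor monitors a
# quality metric, self-regulates a parameter, and can describe its state.
# All names and values are assumptions, not taken from the paper.

class TrackingComponent:
    def __init__(self):
        self.detection_threshold = 0.5   # parameter the supervisor may tune

    def process_frame(self, frame):
        # predict -> detect -> update -> recognize; stubbed for illustration
        return self.detect(frame)

    def detect(self, frame):
        return 0.4                       # stand-in for a real detector score

class Supervisor:
    """Monitors performance and adjusts parameters to hold quality of service."""
    def __init__(self, component, target_confidence=0.6):
        self.component = component
        self.target = target_confidence

    def monitor(self, confidence):
        if confidence < self.target:     # self-regulation on degraded tracking
            self.component.detection_threshold *= 0.9

    def describe(self):                  # self-description for external queries
        return {"detection_threshold": self.component.detection_threshold}

component = TrackingComponent()
supervisor = Supervisor(component)
supervisor.monitor(component.process_frame(frame=None))
print(supervisor.describe())             # threshold lowered to recover QoS
```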
A Runtime Evaluation Methodology and Framework for Autonomic Systems by IDES Editor
An autonomic system provides a self-adaptive capability that enables the system to dynamically adjust its behavior in response to environmental changes or system failures. The fundamental process of adaptive behavior in an autonomic system consists of monitoring system and/or environment information, analyzing the monitored information, planning an adaptation policy, and executing the selected policy. Evaluating system utility is a significant part of this process. We propose a novel approach to evaluating autonomic systems at runtime. Our method takes advantage of a goal model, which is widely used in the requirements elicitation phase to capture system requirements. We propose a state-based goal model that is dynamically activated as the system state changes. In addition, we define the types of constraints that can be used to evaluate the goal satisfaction level. We implemented a prototype autonomic computing software engine to verify the proposed method, simulated the behavior of the engine with a home surveillance robot scenario, and observed the validity of our approach.
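The monitor/analyze/plan/execute cycle described in this abstract can be reduced to a small control loop. A minimal sketch, assuming a single invented metric and policy (the paper's engine and goal model are not shown here):

```python
# Illustrative monitor -> analyze -> plan -> execute loop.
# The cpu metric, goal, and "shed_load" policy are invented for the example.

def monitor(system):
    return {"cpu": system["cpu"]}                 # gather system information

def analyze(readings, goal):
    return readings["cpu"] > goal["max_cpu"]      # is the goal violated?

def plan(violated):
    return "shed_load" if violated else None      # choose an adaptation policy

def execute(system, policy):
    if policy == "shed_load":
        system["cpu"] *= 0.5                      # apply the selected policy

system, goal = {"cpu": 0.9}, {"max_cpu": 0.7}
for _ in range(3):
    execute(system, plan(analyze(monitor(system), goal)))
print(system)  # cpu driven back under the goal threshold
```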
In a power plant with a Distributed Control System (DCS), process parameters are continuously stored in databases at discrete intervals. The data contained in these databases may not appear to hold valuable relational information, but in practice such relations exist. A large number of process parameter values change over time in a power plant, and these parameters are part of the rules framed by domain experts for the expert system. As the parameters change, there is a high possibility of forming new rules from the dynamics of the process itself. We present an efficient algorithm that generates all significant rules from the real data. Association-based algorithms were compared and the algorithm best suited to this process application was selected. The application of the learning system is studied in the power plant domain, and a SCADA interface was developed to acquire online plant data.
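As a concrete illustration of the kind of rule mining described here, the toy snippet below computes support and confidence for a candidate rule over discretized parameter snapshots; the parameter names and data are invented, and the paper's actual algorithm comparison is not reproduced:

```python
# Toy support/confidence computation over discretized plant snapshots.
# Parameter names and data are invented for illustration only.

snapshots = [
    {"drum_level_low": True,  "feed_pump_trip": True},
    {"drum_level_low": True,  "feed_pump_trip": True},
    {"drum_level_low": True,  "feed_pump_trip": False},
    {"drum_level_low": False, "feed_pump_trip": False},
]

def support(items):
    return sum(all(s[i] for i in items) for s in snapshots) / len(snapshots)

def confidence(antecedent, consequent):
    return support(antecedent + consequent) / support(antecedent)

# Candidate rule: drum_level_low -> feed_pump_trip
print(support(["drum_level_low"]))                         # 0.75
print(confidence(["drum_level_low"], ["feed_pump_trip"]))  # ~0.67
```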
Survey on replication techniques for distributed system by IJECEIAES
Distributed systems mainly provide access to large amounts of data and computational resources through a wide range of interfaces. Beyond their dynamic nature, which means that resources may enter and leave the environment at any time, many distributed applications run in environments where faults are increasingly likely to occur because of ever-growing scale and complexity. Given such diverse fault and failure conditions, fault tolerance has become a critical element of distributed computing if a system is to perform its function correctly even in the presence of faults. Replication techniques address fault tolerance in two main ways: masking failures and reconfiguring the system in response to them. This paper presents a brief survey of replication techniques, including Read One Write All (ROWA), Quorum Consensus (QC), the Tree Quorum (TQ) Protocol, the Grid Configuration (GC) Protocol, Two-Replica Distribution Techniques (TRDT), Neighbour Replica Triangular Grid (NRTG) and Neighbour Replication Distributed Techniques (NRDT). Each technique has its own redeeming features and shortcomings, which form the subject matter of this survey.
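To make one of the surveyed schemes concrete: Quorum Consensus requires read and write quorums to overlap. The check below encodes the standard constraints R + W > N and 2W > N; it is a textbook sketch, not code from the survey:

```python
# Standard quorum-consensus constraints: R + W > N guarantees read/write
# overlap, and 2W > N guarantees write/write overlap.

def valid_quorums(n_replicas, r, w):
    return r + w > n_replicas and 2 * w > n_replicas

print(valid_quorums(5, 3, 3))  # True: every read quorum meets the last write
print(valid_quorums(5, 2, 2))  # False: quorums can miss each other
```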
The document discusses using ontology to enhance management of operating system services. It proposes an ontology-based architecture to classify and organize various operating system services. The architecture divides services into the kernel and exokernel layers. Individual service ontologies are developed for search, backup, security, and networking and then merged into a universal ontology. This approach aims to improve service development, deployment and management by adding semantics and enabling knowledge sharing and reusability across operating systems.
The document describes IntelliSuite's integrated MEMS design flow, which allows users to model devices from the schematic level through physical layout and verification, as well as simulate multi-physics processes. IntelliSuite combines top-down and bottom-up design approaches for efficient and accurate modeling. Key capabilities include schematic capture, physical layout, process simulation, and system-level modeling extraction for fast behavioral simulation.
The document describes an algorithm for tolerating crash failures in distributed systems called the algorithm of mutual suspicion (AMS). AMS allows the backbone of a distributed application to tolerate failures of up to n-1 of its n components. Each node runs one agent of the backbone consisting of 3 tasks: D for the system database, I for monitoring "I'm alive" signals, and R for error recovery. The coordinator periodically sends "I'm alive" messages and assistants reply with acknowledgments. If a node does not receive the expected messages, it enters a suspicion period and may deduce a crashed component and initiate recovery actions.
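The heartbeat-and-timeout pattern underlying AMS can be sketched as below; the suspicion window and the single-peer monitor are illustrative simplifications of the coordinator/assistant exchange, not the paper's parameters:

```python
import time

# Illustrative heartbeat monitor in the spirit of AMS: if no "I'm alive"
# message arrives within the suspicion window, the peer is presumed crashed.

class PeerMonitor:
    def __init__(self, suspicion_window=3.0):
        self.window = suspicion_window
        self.last_heard = time.monotonic()

    def on_heartbeat(self):
        self.last_heard = time.monotonic()   # task I: record "I'm alive"

    def suspected_crashed(self):
        return time.monotonic() - self.last_heard > self.window

monitor = PeerMonitor(suspicion_window=0.1)
time.sleep(0.2)                               # simulate silence from the peer
if monitor.suspected_crashed():
    print("peer suspected crashed; initiate recovery (task R)")
```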
This document discusses assumptions in software systems and their failure. It introduces three syndromes caused by assumption failures: 1) Horning syndrome, where assumptions are removed during software development, 2) Hidden Intelligence syndrome, where assumptions are hardcoded and hidden, and 3) Boulding syndrome, where a software system's dependencies and properties are unknown. It proposes strategies to address each syndrome and envisions a holistic approach to software development that explicitly addresses assumptions and their failures.
Expert systems are computer programs that contain subject-specific knowledge from human experts. They use this knowledge along with inference engines to provide expert-level advice. An expert system's knowledge comes from knowledge engineers who gather information from subject matter experts and codify it for the system. Expert systems are comprised of a knowledge base containing factual and heuristic knowledge and a reasoning component that uses techniques like forward and backward chaining to draw conclusions. Some applications of expert systems include medical diagnosis, agricultural pest control advice, and educational tools.
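Forward chaining, one of the reasoning techniques mentioned above, repeatedly fires rules whose conditions are already satisfied until no new facts emerge. A minimal sketch with invented facts and rules:

```python
# Minimal forward-chaining inference: fire rules until the fact set is stable.
# Facts and rules are invented for illustration.

facts = {"fever", "rash"}
rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes suspect_measles and recommend_isolation
```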
An expert system is a computer-based system that uses knowledge from human experts to solve problems typically requiring an expert. Expert systems model the problem-solving abilities of human experts. They are well-suited for problems that do not require complex reasoning, are well-understood, use objectively described data, require fast and accurate answers, or where human expertise is difficult to obtain. Expert systems have three main modules: a knowledge acquisition module to obtain expertise from human experts, a consultation module to provide answers to user questions, and an explanation module to explain how answers are inferred.
In this presentation we introduce a family of gossiping algorithms whose members share the same structure but vary in performance as a function of a combinatorial parameter. We show that this parameter may be considered a "knob" controlling the amount of communication parallelism characterizing the algorithms. We then introduce procedures to operate the knob and choose parameters matching the number of communication channels currently provided by the available communication system(s). In so doing we provide a robust mechanism for tuning the production of communication requests to the current operational conditions of the consumers of such requests. This can be used to achieve high performance and programmatic avoidance of undesirable events such as message collisions.
Paper available at https://dl.dropboxusercontent.com/u/67040428/Articles/pdp12.pdf
Operation of an O.S
Structure of an operating system
Operating systems with monolithic structure
Layered design of an operating system
Virtual machine operating systems
Kernel based operating systems
The document describes a self-learning real-time expert system developed for fault diagnosis in power plants. It uses historic operational data and machine learning techniques to automatically generate new rules for the expert system's knowledge base. The key steps are: (1) preprocessing historical data, (2) applying an association algorithm (Tertius was found most suitable) to generate new rules, (3) validating and formatting new rules, and (4) updating the knowledge base. This allows the expert system to continuously improve its diagnostic ability by learning from plant operational data.
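The four steps form a simple learn-and-update pipeline. The skeleton below shows that shape only; every stage is a stub, since the paper's preprocessing and its Tertius invocation are not reproduced here:

```python
# Skeleton of the described learn-and-update cycle; each stage is a stub.

def preprocess(raw_rows):
    return [r for r in raw_rows if r is not None]       # step 1: clean the data

def mine_rules(rows):
    return [("drum_level_low", "feed_pump_trip")]       # step 2: stand-in for Tertius

def validate(candidate_rules, known_rules):
    return [r for r in candidate_rules if r not in known_rules]  # step 3: keep new rules

def update_knowledge_base(kb, new_rules):
    kb.extend(new_rules)                                # step 4: grow the KB

knowledge_base = []
rows = [{"t": 1}, None, {"t": 2}]
update_knowledge_base(
    knowledge_base, validate(mine_rules(preprocess(rows)), knowledge_base)
)
print(knowledge_base)
```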
1. The document describes an expert system and its components.
2. It defines an expert system as an intelligent computer program that uses knowledge and reasoning to solve problems that usually require human expertise.
3. The key components of an expert system are the knowledge base, inference engine, explanation facility, and knowledge acquisition facility.
The document discusses linguistic structures for incorporating fault tolerance into application software. It begins by explaining that as software complexity increases, software faults have become more prevalent and impactful, necessitating fault tolerance at the application level. It then establishes a set of desirable attributes for application-level fault tolerance structures and surveys current solutions, assessing each according to these attributes. The goal is to identify shortcomings and opportunities to develop improved fault tolerance structures.
The document describes a microcontroller-based emulator that can mimic the functionality of various logic gates and structured logic devices in real-time. The emulator uses a read-only memory (ROM) device to store the logic patterns of different devices. Address lines of the ROM are used as inputs to select the desired device and its corresponding logic pattern. This allows the emulator to function like the selected logic device through its input/output pins. The emulator provides a low-cost solution for students to perform digital design experiments and engineers to prototype systems without requiring the actual logic components.
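The ROM-lookup idea translates directly into software: an "address" formed from a device-select code plus the input bits indexes a table whose stored word is the output. The two devices below are illustrative; the real emulator's ROM layout is not specified here:

```python
# Software analogue of the ROM-based emulator: the "address" combines a
# device-select code with the input bits; the stored word is the output.

ROM = {}
for a in (0, 1):
    for b in (0, 1):
        ROM[("AND", a, b)] = a & b
        ROM[("XOR", a, b)] = a ^ b

def emulate(device, a, b):
    return ROM[(device, a, b)]

print(emulate("AND", 1, 1))  # 1
print(emulate("XOR", 1, 1))  # 0
```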
Characteristics and Quality Attributes of Embedded System by anand hd
The document discusses the characteristics and quality attributes of embedded systems. It describes key characteristics of embedded systems such as being application specific, reactive and real-time in response, operating in harsh environments, being distributed, and having concerns for small size, weight and power. It then outlines important quality attributes for embedded systems including operational attributes like response, throughput, reliability, maintainability, security and safety as well as non-operational attributes.
The document discusses systems and their components. It defines a system as having specific structures defined by components and processes. A system has inputs, processes, outputs, and storage. The main components are the processor, which transforms inputs into outputs, and control, which guides the system. Feedback provides control. Characteristics of systems include having multiple components that work toward a common objective. Systems can be abstract or physical, open or closed, deterministic or probabilistic, and involve manual or automated human intervention. SQL is used to access and manipulate database systems according to standards.
Nowadays several security tools are used to protect computer systems, computer networks, smart devices, and so on against attackers. An intrusion detection system is one of the tools used to detect attacks. Intrusion detection systems produce large numbers of alerts; security experts cannot investigate all the important ones, and many of the alerts are incorrect or false positives. Alert management systems are a set of approaches for solving this problem. In this paper a new alert management system is presented. It uses K-nearest neighbor as the core component that classifies the generated alerts. The suggested system delivers precise results against a huge volume of generated alerts, and because of its low classification time per alert it can also be used in online systems.
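A minimal version of the described classifier, using scikit-learn's KNeighborsClassifier on invented alert features (the paper's feature set and training data are not given, so everything below is an assumption):

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy alert vectors: [source_port_entropy, packets_per_second, payload_size].
# Labels: 1 = true alert, 0 = false positive. All values are invented.
X_train = [[0.9, 500, 1400], [0.8, 450, 1300], [0.1, 5, 60], [0.2, 8, 80]]
y_train = [1, 1, 0, 0]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

new_alert = [[0.85, 480, 1350]]
print(clf.predict(new_alert))  # [1]: classified as a true alert
```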
Classification of physiological signals for wheel loader operators using Mult... by Reno Filla
This summarizes a research paper that presents a classification approach called MMSE-CBR for classifying physiological parameters of wheel loader operators. It combines case-based reasoning (CBR) with multivariate multiscale entropy analysis (MMSE) to fuse data from multiple physiological sensors. The MMSE algorithm extracts features from signals measured by sensors such as ECG, finger temperature, skin conductance, and respiration rate. These features form cases in a case library, and the CBR approach classifies new cases by retrieving similar past cases. The approach was evaluated on data from 18 professional drivers and achieved 83.33% accuracy in classifying subjects as "stressed" or "healthy", comparable to an expert's classification.
Microsoft Word - Welcome to Universiti Teknologi PETRONAS by butest
The document presents a model for an intrusion prevention and self-healing system for network security inspired by the human immune system. The model uses multiple agents to detect, prevent, and heal harmful events on a network. It combines concepts from biological intrusion prevention and self-healing, applying mechanisms from the innate and adaptive immune systems. The system aims to autonomously secure a network through robust detection of anomalies and misuse, integrated with a self-healing mechanism to enhance network resilience.
Xiaomi - China brand mobile phone company by Rudy Chen
Xiaomi is a Chinese electronics company known for its smartphones and other devices. Founded in 2010, Xiaomi creates affordable technology products like smartphones, tablets, TVs, and accessories. It sells directly to consumers through online stores and is the top smartphone brand in China.
The Redmi Note is Xiaomi's largest smartphone offering with a 5.5-inch display and plastic build. It has a 13-megapixel camera, 2GB of RAM, 8GB of storage, and a 3,200mAh battery that provides all-day battery life. While the design could be improved and it runs an older version of Android, the Redmi Note offers strong performance for its expected price of Rs 9,999 in India.
Xiaomi has expanded internationally using several key strategies:
1) The "triathlon model" integrates hardware, software, and internet services to generate profit from services rather than just hardware.
2) Aggressive pricing allows Xiaomi to undercut competitors by pricing products close to manufacturing costs.
3) Social media and "hunger marketing" create viral buzz among youth to fuel sales and gather customer feedback.
Xiaomi is poised to enter the Indian smartphone market as it represents a major growth opportunity. India is projected to become the second largest smartphone market globally by 2017 despite current low penetration rates. Xiaomi plans to target the affordable and mid-range price segments between Rs. 5,000-25,000 that are seeing the strongest growth through an initial partnership with Flipkart for online sales and a controlled retail strategy in major cities. Localizing content, customizing software and services, and providing superior after-sales support will be critical for Xiaomi to succeed against entrenched competitors in India like Samsung and Micromax.
Xiaomi Inc.: Positioning and Strategy for Entry into Indian Smartphone Market by Caio Porciuncula
Xiaomi is a Chinese technology company that has grown rapidly since 2010 to become the third largest smartphone maker globally. The document outlines Xiaomi's mission, values, and strategic focus of producing high-quality premium products at low prices. It also summarizes Xiaomi's objectives of expanding further into developing markets like India, Brazil, and Russia, as well as its long term goals of entering Western markets and expanding into new product categories like smart home devices. Key to Xiaomi's success is its online sales model using flash sales and its practice of incorporating weekly software and occasional hardware updates based on user feedback.
This document discusses vision and mission statements and provides examples from various companies. It defines a vision statement as outlining an organization's long-term goals and desired future state. A mission statement defines an organization's fundamental purpose and reason for existing. The document then provides examples of vision and mission statements from companies like Dell, Pfizer, McDonald's, Unilever, Toyota, Apple, and others to illustrate their key components and how they differ between organizations.
Vision, mission, objectives, and goals provide strategic direction for an organization. A vision describes what an organization aspires to become, while a mission outlines its current purpose. Objectives specify quantifiable targets to achieve within a set timeframe. Goals are short-term milestones that support achieving long-term objectives. Together, they guide an organization and provide a framework for evaluating performance.
The document discusses the visions and missions of the top 100 companies on the Fortune Global 500 list. It provides examples of visions, which describe where companies want to go, and missions, which describe the purpose and goals of the companies. The visions generally focus on leadership, performance, and innovation, while the missions often discuss the products/services provided and benefits for customers, shareholders, and society.
This document summarizes an academic paper about autonomic computing and self-healing systems. It begins with an introduction to self-adapting systems and their development. It then discusses key characteristics of self-managing systems like self-configuration and self-healing. The document outlines tools used to design autonomic systems, including the MAPE-K loop and variability models. It also describes implementations of self-healing systems using redundancy, managers, and architecture-centric approaches. The document concludes by discussing challenges in self-healing system design around fault isolation, tool integration, and reliance on design-time models.
Discrete event systems comprise discrete state spaces and events by Nitish Nagar
This document discusses software for modeling and analyzing discrete event systems. It provides examples of software for control synthesis (TCT), verification (COSPAN), timed discrete event systems (KRONOS, UPPAAL, TTCT), and hybrid systems (HYTECH, SHIFT). It also lists examples and benchmarks for testing software, including examples of supervisory control.
Decision Making and Autonomic Computing by IOSR Journals
Abstract: Autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users. An autonomic system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. As widely reported in the literature, an autonomic computing framework can be seen as composed of autonomic components interacting with each other.
An autonomic computing system can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter for exploiting policies based on self- and environment awareness.
The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.
General Terms: Autonomic systems, Self-configuration, Self-healing, Self-optimization, Self-protection.
Keywords: Know itself, reconfigure, recover from extraordinary events, expert in self-protection.
The document describes the Blue Eyes system, which monitors operators' physiological parameters and visual attention to detect issues. The system includes a mobile data acquisition unit (DAU) worn by the operator and a central system. The DAU collects data from sensors via Bluetooth and sends it to the central system for analysis. The central system analyzes the data, records conclusions, and triggers alarms if abnormal readings are detected. The system is designed to help supervise operators and assure proper performance in industrial environments.
The document provides an overview of systems, including definitions and common types. It discusses systems as having components, boundaries, inputs, outputs and purposes. Natural systems include physical and living systems, while man-made systems include social, organizational and information systems. The roles in systems development projects are outlined, including users at different levels and with varying experience who have different needs. The systems analyst must understand the perspectives of operational, supervisory and executive level users.
An Algorithm Based Simulation Modeling For Control of Production Systems by IJMER
This document describes an algorithm-based simulation approach for modeling and controlling flexible production systems. The approach models both the physical production system and the control system to evaluate their integrated performance. Key features include:
1) The approach integrates control system design into the physical simulation to evaluate their combined impact.
2) The algorithm-based design is extensible and allows modeling of different control programs and production system designs.
3) Finite automata formalism provides a mathematical foundation for logical and quantitative analysis of the system.
4) The framework facilitates robust controller models that can resolve issues like deadlocks and accommodate failures.
5) Analysts can evaluate how different control programs and production system designs impact
TOWARD ORGANIC COMPUTING APPROACH FOR CYBERNETIC RESPONSIVE ENVIRONMENT by ijasa
The development of the Internet of Things (IoT) concept revives Responsive Environment (RE) technologies. Nowadays, the idea of a permanent connection between the physical and digital worlds is technologically possible. The capillary Internet refers to the extension of the Internet into daily appliances such that they become actors of the Internet like any human. The parallel development of Machine-to-Machine communications and Artificial Intelligence (AI) techniques starts a new era of cybernetics. This paper presents an approach to a Cybernetic Organism (Cyborg) for RE based on Organic Computing (OC). In this approach, each appliance is part of an autonomic system that controls a physical environment. The underlying idea is that such systems must have self-x properties in order to adapt their behavior to external disturbances with a high degree of autonomy.
International Journal of Computational Engineering Research (IJCER) by ijceronline
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Autonomic Computing (with some of Adaptive Systems) and Requirements Enginee... by Jehn
This presentation gives an overview of Autonomic Computing, then shows the state of the art in Requirements Engineering for Autonomic Computing, based on four papers.
Robotics and expert systems are discussed. Robotics involves the design and construction of robots using principles of electrical, mechanical and computer engineering. Robots contain sensors, control systems, manipulators and power supplies. Expert systems are computer programs that perform tasks normally requiring human expertise. They contain a user interface, knowledge base and inference engine. The knowledge base stores facts and rules provided by human experts. The inference engine uses this knowledge to answer user queries.
This document describes a methodology for simulating heterogeneous embedded systems that include hardware, software, and electromechanical parts. Interfaces were developed to allow a VHDL simulator to communicate with a physical systems simulator and application programs. The interfaces use the C language links of the simulators to establish communication channels and exchange commands/parameter values between the simulated parts in a chained master/slave synchronization scheme. This unified simulation environment allows designers to validate the behavior of the entire system early in the design process before implementing any of its parts.
The document discusses theories of interactivity from various disciplines such as communications theory, educational technology, and cognitive science. It explores how interactivity requires mutual influence and reciprocity between two or more entities. Truly interactive systems can generate customized responses based on user input rather than just following predetermined prompts. The document also examines different types of "interactive artworks" and how they attempt to achieve different levels of audience participation and emergent behaviors. It provides an example of Carol Flax's interactive book artwork that uses sensors to trigger related video and audio when pages are turned.
Software requirement analysis enhancements by prioritizing requirement attributes using rank-based agents
Ashok Kumar, Professor, Department of Computer Science and Applications, Kurukshetra University, Kurukshetra, India
Vinay Goyal, Assistant Professor, Department of MCA, Panipat Institute of Engineering & Technology, Panipat, India
Abstract- This paper proposes a new technique in the domain of agent-oriented software engineering. Agents work in autonomous environments and can respond to agent triggers. Agents can be very useful in the requirement analysis phase of the software development process, where they can react to requirement triggers and produce aligned notations to identify the best possible design solution from existing designs. The agent helps in the design generation process, which includes the use of artificial intelligence. The results produced clearly show improvements over conventional reusability principles and ideas.
1. INTRODUCTION
Agent-oriented software engineering is a new technique that is growing very rapidly. Software development companies have invested considerable effort in this domain, and the results published by many of them are very exciting [1]. The autonomous and reactive nature of agents makes it possible for designers to think in terms of real-life problem-solving scenarios, where the sociological [2] characteristics of agents automatically activate timely checks for any problem in the domain and solve it using agents.
Agents are very helpful throughout the software development life cycle. Experiments carried out in the past have shown [2][9][10] improvements in the SDLC, and the conclusion is that agents can be very helpful in minimizing cost and effort if tuned properly. Fine-tuning of agents and an SDLC process-state plug-in for two-way communication result in an agent-based software development process in which intelligent agents take decisions for better time and resource utilization. Agents are capable of storing historic data, which helps in decision-making using a heuristic-based approach.
This paper discusses the details of one such experiment, conducted to improve the requirement analysis process with the help of proactive agents. Agents automatically sense the requirement environment and propose their own checklist of important requirements. This is a sort of intelligent assistance with domain heuristics, which helps cover all possible requirement entities of the problem domain.
2. RELATED WORK
Michael Wooldridge, Nicholas R. Jennings, and David Kinny describe the analysis process using an agent-oriented approach [1]. They consider the GAIA notations. The analysis stages of Gaia are:
1) Identify the agents' roles in the system, which typically correspond to identify ro ...
This document describes the BlueEyes system, which monitors operators' physiological parameters and brain activity to detect issues. It collects data from sensors using a wireless Data Acquisition Unit (DAU) and sends it to a central system for analysis. The DAU gets eye movement, pulse, oxygenation and other biometric data. It sends alerts to operators and supervisors if abnormal readings are detected. The system is designed to help supervise operators in complex, safety-critical environments like industrial plants.
Building a usage profile for anomaly detection on computer networks by Nathanael Asaam
This paper proposes building usage profiles of computer systems using behavior models to detect anomalies. It uses regression to determine usage profiles based on system variables from audit trails. It also applies hidden Markov models to study system states and state transitions to predict unusual behavior. With the usage profile and Markov chain model, anomalies can be detected. The research aims to develop anomaly detection and risk analysis systems through this approach. An experiment was conducted to represent a system's usage with a mathematical model using micro usage models on desktops. The techniques allow for correctness, promptness and ease of use in intrusion detection.
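The Markov-chain part of such an approach can be sketched by scoring an observed state sequence against transition probabilities learned from a normal trace; the states, the trace, and the scoring rule below are all invented for illustration:

```python
from collections import Counter

# Learn transition probabilities from a normal-usage trace, then flag
# sequences containing transitions that were rarely or never seen.

normal_trace = ["login", "read", "read", "write", "read", "logout"]
pairs = Counter(zip(normal_trace, normal_trace[1:]))
outgoing = Counter(src for src, _ in pairs.elements())

def transition_prob(a, b):
    return pairs[(a, b)] / outgoing[a] if outgoing[a] else 0.0

def anomaly_score(trace):
    probs = [transition_prob(a, b) for a, b in zip(trace, trace[1:])]
    return 1.0 - min(probs, default=1.0)  # high when any transition is unusual

print(anomaly_score(["login", "read", "logout"]))        # ~0.67: seen transitions
print(anomaly_score(["login", "delete_all", "logout"]))  # 1.0: unseen transition
```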
The document describes the design of a Drowsy Driver Detection System. The system uses a camera to monitor the driver's face and eyes in order to detect fatigue. It analyzes the video input to locate the eyes and determine if they are open or closed. If the eyes are closed for 5 consecutive frames, the system concludes the driver is falling asleep and issues a warning signal. The system is designed to work in both day and night conditions using a machine vision based approach with the goal of developing a non-intrusive real-time fatigue monitoring system.
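The 5-consecutive-frames rule described above reduces to a simple run-length counter once an eye-state detector is available; the detector itself is stubbed out in this sketch:

```python
# Counter implementing the described rule: warn when the eyes have been
# closed for 5 consecutive frames. Real eye-state detection is stubbed out.

CLOSED_FRAME_LIMIT = 5

def drowsiness_alarm(eye_states):
    """eye_states: iterable of booleans, True = eyes closed in that frame."""
    closed_run = 0
    for closed in eye_states:
        closed_run = closed_run + 1 if closed else 0
        if closed_run >= CLOSED_FRAME_LIMIT:
            return True          # issue the warning signal
    return False

frames = [False, True, True, True, True, True, False]
print(drowsiness_alarm(frames))  # True: five closed frames in a row
```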
IRJET - Survey on EEG Based Brainwave Controlled Home Automation by IRJET Journal
This document summarizes research on using electroencephalography (EEG) brainwave signals to control home automation devices. It discusses several previous studies that used EEG signals to control robots, anticipate finger movements, control environmental devices for paralyzed patients, and classify gestures for home automation. The document then outlines a study that analyzed brainwave signals to detect patterns related to thoughts and emotions. These signals were sensed by a brainwave sensor, transmitted via Bluetooth, processed using Matlab, and used to send commands to control home devices. The goal was to develop a brain-computer interface for controlling home automation systems using EEG brainwave signals.
TrustArc Webinar - 2024 Global Privacy Survey by TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Communications Mining Series - Zero to Hero - Session 1 by DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 5 by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
20 Comprehensive Checklist of Designing and Developing a Website by Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Pushing the limits of ePRTC: 100ns holdover for 100 days by Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
UiPath Test Automation using UiPath Test Suite series, part 6 by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... by Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
What do a Lego brick and the XZ backdoor have in common? by Speck&Tech
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She was an active member of the Fedora and openSUSE projects and co-founded the LibreItalia association, where she was involved in several LibreOffice-related events, migrations, and training activities. She previously worked on LibreOffice migrations and training courses for several public administrations and private organizations. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (the origin of her nickname, deneb_alpha).
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... by SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
How to Get CNIC Information System with Paksim Ga.pptx
10.1.1.97.5586
Autonomic Computer Vision Systems
James L. Crowley (1), Daniela Hall (2), Remi Emonet (1)
(1) INP Grenoble, Project-Group PRIMA, INRIA Rhône-Alpes, France
(2) Société ActiCM, Moirans, France
{James L. Crowley, Remi Emonet}@inrialpes.fr, Daniela.Hall@free.fr
Abstract. For most real applications of computer vision, variations in operating
conditions result in poor reliability. As a result, real world applications tend to
require lengthy set-up and frequent intervention by qualified specialists. In this
paper we describe how autonomic computing can be used to reduce the cost of
installation and enhance reliability for practical computer vision systems. We
begin by reviewing the origins of autonomic computing. We then describe the
design of a tracking-based software component for computer vision. We use a
software component model to describe techniques for regulation of internal
parameters, error detection and recovery, self-configuration and self-repair for
vision systems.
Keywords: Robust Computer Vision Systems, Autonomic Computing,
Perceptual Components, Layered Software Architecture
1 Towards Robust Computer Vision
Machine perception is notoriously unreliable. Even in controlled laboratory
conditions, programs for computer vision generally require supervision by the
programmer or another highly trained engineer. To meet constraints on reliability,
vision system developers commonly design systems with parameters that can be
adapted manually. In the best case, such parameters are controllable from an on-line
interactive interface. All too often such parameters are simply embedded within the
source code as magic numbers. Such systems perform well in controlled or static
environments, but can require careful set up and "tuning" by experts when installed in
new operating conditions. The need for installation and frequent maintenance by
highly trained experts can severely limit the market size for such systems. Robust
operation in changing, real world environments requires fundamental progress in the
way computer vision systems are designed and deployed.
We believe that autonomic computing offers a theoretical foundation for practical
computer vision systems. Furthermore, it appears that machine perception is an ideal
domain for the study of autonomic computing techniques, because of requirements for
robust real time response under different operating conditions, and the availability of
feedback on quality of processing. In this paper, we describe how autonomic
computing can be used to build computer vision systems.
1.1 Autonomic Computing
Autonomic computing has emerged as an effort that takes inspiration from
biological systems to render computing systems robust [1]. According to Kephart
and Chess [2], the term "Autonomic Computing" was introduced by IBM vice
president Paul Horn in a keynote address to the National Academy of Engineers at
Harvard University in March 2001. Horn presented autonomic computing systems as
systems that can manage themselves given high-level objectives from administrators.
The term autonomic computing was adopted as a metaphor inspired by natural self-
governing systems, and in particular, from the autonomic nervous system found in
mammals.
The autonomic nervous system (ANS) is a part of the nervous system that is not
consciously controlled and serves to regulate the homeostasis of organs and
physiological functions. The ANS is commonly divided into three subsystems: the
sympathetic nervous system (SNS), the parasympathetic nervous system (PNS), and the
enteric nervous system (ENS). The sympathetic nervous system acts primarily on the
cardiovascular system and activates the sympatho-adrenal response of the body, also
known as the fight or flight response. The parasympathetic system (PNS), also
known as the "rest and digest" system, complements the sympathetic system by
slowing the heart, and returning blood circulation in the lungs, muscles and gastro-
intestinal system to normal conditions after reactions by the sympathetic nervous
system. The enteric nervous system (ENS) autonomously regulates digestion, and is
increasingly referred to as a "second brain".
Inspiration from biological models has led to efforts to design computing systems
with a number of autonomic properties. These include:
• Self-monitoring: The ability of a component or system to observe its internal state,
including such quality-of-service metrics as reliability, precision, rapidity, or
throughput.
• Self-regulation: The ability of a component or system to regulate its internal
parameters so as to assure a quality-of-service metric such as reliability, precision,
rapidity, or throughput.
• Self-repair: The ability of a component or system to reconfigure itself so as to
respond to changes in the operating environment or external requirements [3].
• Self-description: The ability of a component or system to provide a description of
its internal state.
Designing systems that have these properties can result in additional computing
overhead, but can also return benefits in system reliability and usability.
In order to fully exploit an autonomic approach, computer vision systems must be
embedded as part of a larger system for user services designed according to
autonomic principles. Most important is the ability to automatically launch, configure
and reconfigure components and adjust parameters for component systems. Modern
tools for software ontologies can be used to provide a component registry to record
available sensors and components according to their capabilities, and to provide
diagnostic and configuration methods to the larger system. An ontology server can
provide a registry for the data types and events consumed and produced by
components, thus making it possible for a system to detect and repair errors, either by
modifying data or by reconfiguring the interconnection of modules or components.
With such an approach, systems can respond to failure or degradation by shutting
down the failed component and launching an alternate.
A number of authors have experimented with such techniques for constructing
robust computer vision systems. Murino [4] addresses the problem of automatic
parameter regulation for vision systems within a multi-layered component
architecture. Each layer has its own set of parameters that are tuned such that the
evidence (coming from the lower level) and the expectation (coming from the higher
level) are consistent. Scotti [5] proposes an approach based on self-organizing maps
(SOM). A SOM is learned by registering good parameter settings. During runtime, the
automatic parameter selection chooses the closest setting in SOM space that
performed best during training.
In [6], Min proposes an approach for comparing the performance of different
segmentation algorithms by searching for the optimal parameters for each algorithm. He
proposes an interesting multi-loci hill-climbing scheme on a coarsely sampled
parameter space. The segmentation system performance is evaluated with respect to a
given ground truth. This approach is designed for the comparison of algorithms and
requires testing a large number of different parameter settings. For this reason,
the approach is less suitable for on-line parameter regulation.
Robertson and Brady [7] propose an architecture for self-adaptive systems. They
consider an image analysis system as a closed-loop control system that integrates
knowledge in order to be self-evaluating. Measuring and comparing the system output
to the desired output and applying a corrective force to the system leads to increased
performance. The difficult point is to generate a model of the desired output. They
demonstrate their approach on the segmentation of aerial images using a bank of
different filter operators. The system automatically selects the best filter for the
current image conditions.
The system and approach described below have been worked out in a series of
European projects over the last 6 years. Experimental validation for techniques for
parameter initialization can be found in Hall et al. [8]. Techniques and validation for
parameter regulation are reported in [9]. In this paper we build on and extend this
earlier work.
1.2 Some Terminology
Unfortunately, the vocabulary for component-oriented software engineering and
autonomic systems is still emerging, and the literature is replete with inconsistent
uses. Thus it is important for us to clarify some of the terms used in this paper.
Auto vs. Self: In the software engineering literature, one can often find an almost
interchangeable use of the terms "auto" and "self". The situation is simpler in French,
where only the term "auto" exists, and the scientific culture seems to shy from
anthropomorphic notions. In this paper, we have adopted the following usage: "Self"
is used to refer to autonomic abilities that are provided using an explicit (declarative
or symbolic) description of a system. Thus a component can be said to be "self-
descriptive" when it contains a declarative description of its function and principles of
operation, or of its internal components and their interconnections. The term "Auto"
will be used for techniques that do not employ an explicit declarative description,
but operate simply on a measured property. Thus an auto-regulatory system may be implemented
as a hardwired feedback process that regulates parameters without explicit description
of their meaning or use while a self-repairing system will use a declarative model to
reconfigure its internal function.
Homeostasis and Autonomic Control: Homeostasis or "autonomic regulation of
internal state" is a fundamental property for robust operation in an uncontrolled
environment. A component is auto-regulated when processing is monitored and
controlled so as to maintain a certain quality of service. For example, processing time
and precision are two important state variables for a tracking process. These two may
be traded off against each other. The process supervisor maintains homeostasis by
adapting module parameters using the auto-critical reports.
Modules, Components and Services: Process architectures have been explored
for computer vision systems since the 1990's [10]. Such architectures are a form of
data flow models for software architectures [11]. We apply a component model [12]
to the definition of software at three distinct layers, as shown in figure 1. These three
layers are the service layer, the component layer, and the module layer. Programming
style tends to vary for these three layers. A software agent architecture, using tools
such as the JAVA JADE environment is often appropriate for the service layer. The
software components are generally programmed as autonomous computer programs
under the control of a component supervisor. Modules can be programmed as a class
with methods using an object-oriented style or as a procedure or subroutine in a more
classical procedural language. Each layer provides an appropriate set of
communication protocols and configuration primitives, as well as interface protocols
between layers.
Fig. 1. Three Layers in a component-based software architecture
Modules: Modules may be formally defined as synchronous transformations
applied to a certain class of data or events, as illustrated in Figure 2. Modules
generally have no state. They are executed by a call to a method (or a function or a
subroutine depending on the programming language) accompanied by a vector of
parameters. The parameters specify the data to be processed and describe how the
transform is to be applied. The output from a module is generally written to a serial
stream or posted as an event to an event dispatcher.
A module may be considered as auto-descriptive when it returns a report that
describes the results of processing. Examples of information contained in such a
report may include elapsed execution time, confidence in the result, or any exceptions
that were encountered.
Fig. 2. Modules apply a transformation to data and return a report
An example of such an auto-descriptive module is a module that transforms RGB
color pixels within a region of interest of an image into a scalar value that represents
the probability that the pixel belongs to a target. Such a transformation can be defined as
a lookup table representing a ratio of color histograms [13]. The command to run the
module is accompanied by a parameter vector that includes a pointer to the input
image buffer, a pointer for an output image buffer, a pointer to a ROI (Region of
interest) data structure, a pointer to the lookup table, and a step size at which the
transform is to be applied within the ROI. Computing time for this process may be
reduced by restricting processing to one pixel out of S (S represents a “step size”)
[14]. Most computer vision algorithms can be implemented as an assembly of such
modules.
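
As a concrete illustration, the following is a minimal Python sketch of such an auto-descriptive module (the authors' modules are implemented in C++; the 32-bin quantization, report fields, and function names are assumptions made for this example):

import time
import numpy as np

def make_ratio_lut(target_hist, global_hist, eps=1e-6):
    # Lookup table as a ratio of color histograms [13]:
    # p(target | color) is approximated by h_target(color) / h_global(color).
    return np.clip(target_hist / (global_hist + eps), 0.0, 1.0)

def probability_module(image, out, roi, lut, step):
    # image: H x W x 3 uint8 input; out: H x W float output buffer.
    # roi: (x0, y0, x1, y1); step: process one pixel out of S.
    x0, y0, x1, y1 = roi
    t0 = time.perf_counter()
    for y in range(y0, y1, step):
        for x in range(x0, x1, step):
            r, g, b = image[y, x] >> 3          # quantize to 32 bins per channel
            out[y, x] = lut[r, g, b]
    # Auto-descriptive report: execution time and a crude confidence value.
    region = out[y0:y1:step, x0:x1:step]
    return {"elapsed": time.perf_counter() - t0,
            "confidence": float(region.mean())}

The returned report lets a supervisor trade precision against time, for instance by raising the step size on the next cycle when the elapsed time is too large.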
Perceptual Components: Modules may be assembled into software components.
A component is an autonomous assembly of modules executed in a cyclic manner
controlled by a supervisor as shown in figure 3. The component supervisor interprets
commands and parameters, supervises the execution of the modules, and responds
to queries with a description of the current state and capabilities
of the component.
adapt the execution schedule for the next cycle so as to maintain a target for quality of
service, such as execution time or number of targets tracked. In our group, we embed
C++ modules in a modified "scheme" environment [15] to implement supervised
perceptual components.
Fig. 3. A perceptual component composed of a set of modules under the cyclic
control of a component supervisor.
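
The following hypothetical Python sketch illustrates this cyclic control structure; the module interface (a callable returning a report dictionary) and the describe() query are assumptions made for illustration:

class ComponentSupervisor:
    # Hypothetical cyclic controller: invokes modules synchronously, collects
    # their auto-critical reports, and answers external queries.
    def __init__(self, modules):
        self.modules = modules       # ordered list of callables: frame -> report
        self.cycle = 0
        self.last_reports = []

    def run_cycle(self, frame):
        self.last_reports = [module(frame) for module in self.modules]
        self.cycle += 1
        return self.last_reports

    def describe(self):
        # Self-description: current state and capabilities, suitable for
        # publication in a registry or as a reply to an external query.
        return {"cycle": self.cycle,
                "modules": [getattr(m, "__name__", type(m).__name__)
                            for m in self.modules],
                "last_reports": self.last_reports}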
User services can be designed as software agents that observe human activity and
respond to events. An example of such a service would be an activity logging service
in an office environment, that maintains a journal of classes of activities such as
talking on the phone, typing at a computer or meeting with visitors. In the CHIL
project [16] we have implemented a number of such services based on dynamic
assembly of perceptual components. Such services can be driven by a situation model
[17] that integrates information from different perceptual components. In this paper
we will concentrate on the software component design model for computer vision to
provide such services.
2 Autonomic Perceptual Components
As described above, perceptual components are software components that observe
a scene or a recorded data stream in order to measure properties, to detect and
recognize entities or activities, or to detect events. Perceptual components based on
tracking constitute an important class of components with many applications. Such
components integrate information over time, typically through calculations based on
statistical estimation. Tracking components may be designed for nearly any detection
or recognition method, and provide important benefits in terms of robustness of
detection and focus of processing resources. In this section, we propose a general
software model for tracking-based perceptual components.
2.1 Tracking-Based Perceptual Components
Tracking systems constitute an important class of perceptual components. The
architecture for a typical tracking-based perceptual component is shown in figure 4.
Tracking is a cyclic process of recursive estimation, classically defined as a Bayesian
estimation process composed of three phases: Predict, Detect, and Update. A well-
known framework for such estimation is the Kalman filter [18]. The prediction phase
uses the previously estimated attributes for observed targets (or entities) to predict
current values in order to control observation. The detection phase applies the
prediction to the current data to locate and observe the current values for properties,
as well as to detect new targets. The update phase tests the results of detection to
eliminate erroneous or irrelevant detections (distracters), recalculate the latest
estimates for parameters for targets, and amend the list of observed targets to account
for new and lost targets.
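
For readers unfamiliar with this recursive estimation loop, here is a minimal Python sketch of the predict and update phases for a single target under a constant-velocity Kalman filter [18]; the noise covariances are illustrative, and the detect phase (searching the predicted ROI) is omitted:

import numpy as np

# State [x, y, vx, vy] with a constant-velocity model; values are illustrative.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 1.0 * np.eye(2)                         # measurement noise (assumed)

def predict(x, P):
    # Predict phase: propagate the state estimate and its covariance.
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    # Update phase: correct the prediction with the detected position z.
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P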
(Diagram: a process supervisor controls target prediction, detection regions, banks of detection modules, target update, and recognition processes; a video demon supplies images, and recognized entities are output.)
Fig. 4. A tracking-based perceptual component.
We add a recognition phase, an auto-regulation phase, and a communication phase
to the classical tracking phases in our tracking-based perceptual component. In the
recognition phase, recognition algorithms are applied to the current list of targets to
verify or determine labels for targets, and to recognize events or activities. The auto-
regulation phase evaluates a quality-of-service metric, such as total cycle time or
confidence, and adapts the list of targets as well as the target parameters to maintain a
desired quality. During the communication phase, the supervisor responds to requests
from other components. These requests may ask for descriptions of the component
state or capabilities, or may provide specification of new recognition methods.
2.2 Component Supervisor
The component supervisor acts as a scheduler, invoking execution of modules in a
synchronous manner. The execution report returned by each module allows the
supervisor to monitor performance and to adjust module parameters for the next
cycle. The results of processing are fed to a statistical classifier to detect incorrect
operation. When such errors are detected a second classifier can be used to select a
procedure to correct the error, as described below. The supervisor is able to respond
to external queries with a description of the current state and capabilities. We
formalize these abilities as the autonomic properties of self-monitoring, auto-
regulation, self-repair, and self-description.
A self-monitoring process maintains a model of its own behavior in order to
estimate the confidence for its outputs. Self-monitoring allows a process to detect and
adapt to changing observational circumstances by reconfiguring its component
modules and operating parameters. Techniques for acquiring an operating model, for
monitoring operating conditions, and for repairing operational problems are described
in section 3.
A process is auto-regulated when processing is monitored and controlled so as to
maintain a certain quality of service. For example, processing time and precision are
two important state variables for a tracking process. These two parameters may be
traded off against each other. The process controllers may be instructed to give
priority to either the processing rate or precision. The choice of priority is determined
by the federation tool or by the federation supervisor. Techniques for monitoring
output quality and for regulating internal parameters are also described in section 3.
Each target and each detection region contains a specification for the module to be
applied, the region over which to apply the module, and the step size at which to apply
processing. Recognition methods are loaded as snippets of code that can generate
events or write data to streams. These methods may be downloaded to a component as
part of the configuration process to give a tracking process a specific functionality.
Quality of service metrics such as cycle time and number of targets can be
maintained by dropping targets based on a priority assignment or by reducing
resolution for processing of some targets (for example based on size).
A self-descriptive controller can provide a symbolic description of its parameters,
data structures, functions and current internal state. Such descriptions are useful for
both manual and automatic composition of federations of components. During
initialization, components may publish a description of their basic functionality and
data types in an ontology server. During execution, components can respond to
requests for information about current state, with information such as number and
confidence of currently observed targets, current response time, or other quality of
service information.
2.3 Assembling Components to Provide Services
A user service is created by assembling a collection of software components.
Available components may be discovered by interrogating the ontology server. An
open research challenge is to provide an ontological system for indexing components
based on function in a manner that is sufficiently general to capture future
functionalities as they emerge. In addition, the ontology server is used to establish
compatible communication of data. The problem of aligning ontologies of data
structures is manageable when components are co-designed by a single group. The
problem becomes very difficult when components are developed independently with
no prior effort to agree on specifications. This problem resembles the web-ontology
alignment problem that currently receives attention in software engineering.
We have constructed a middle-ware environment [19] that allows us to
dynamically launch and connect components on different machines. This
environment, called O3MiSCID, provides an XML based interface that allows
components to declare input command messages, output data structures, as well as
current operational state.
Fig. 5. An example of a system for tracking of blobs in 3D. The 3D Bayesian blob
tracker provides a ROI and detection method for a number of 2D entity detection
components. The result is used to update a list of 3D blobs.
As a simple example of a service provided by an assembly of perceptual
components, consider a system that integrates targets from multiple cameras to
provide 3-D target tracking, as shown in figure 5. A set of tracked entities is provided
by a Bayesian 3D tracking process that tracks targets in 3D scene coordinates [20].
This process specifies the predicted 2-D Region of Interest (ROI) and detection
method for a set of pixel-level detection components. These components use color,
motion, or background subtraction to detect and track blobs in an image
stream from a camera. The O3MiSCID middleware makes it possible to dynamically
add or drop cameras to the process during tracking.
3 Methods for Autonomic Perceptual Components
In this section we briefly describe methods for self-monitoring, auto-regulation,
and self-repair of perceptual components.
3.1 Self-monitoring and Auto-regulation
As described above, a supervisor can sum the execution times from the auto-critical
reports of its modules to determine the most recent cycle time. Excessive execution time can
be reduced by such methods as removing targets from the tracked target list, reducing
the frequency of calls to the detection regions used to detect new targets, or directing
some detection modules to reduce resolution by processing 1 out of S pixels.
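
A minimal sketch of such a regulation rule, assuming the module report fields of the earlier example and a hypothetical per-target priority field:

def regulate(targets, reports, step, budget):
    # Sum per-module execution times from the auto-critical reports.
    cycle_time = sum(r["elapsed"] for r in reports)
    if cycle_time > budget:
        if step < 8:
            step *= 2                        # coarser sampling: 1 pixel out of S
        elif targets:
            targets.sort(key=lambda t: t["priority"])
            targets.pop(0)                   # drop the lowest-priority target
    elif cycle_time < 0.5 * budget and step > 1:
        step //= 2                           # regain resolution when there is slack
    return targets, step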
In addition, a component supervisor can be made to detect errors in the current
targets using statistical pattern recognition. Our approach is to use a binary classifier
whose parameters have been trained from data that is known to be valid. When the
parameters of tracked entities are classified as erroneous, a second, multi-class
classifier can be applied to select a repair procedure.
The goal of the auto-critical evaluation is to monitor the performance of the system
in order to detect degradation. This requires the definition of a
measure that estimates the goodness of component output with respect to a reference
model that is constructed in a learning phase. Such a reference model captures normal
component output, and provides a likelihood that the current outputs are normal.
A variety of representation forms could provide such a reference model.
These include histograms, graphs and Gaussian Mixture Models (GMMs).
Histograms can often provide a good solution to concrete problems despite their
simplicity. Probability density approximations such as histograms or GMMs have the
advantage that a goodness score can be defined easily based on statistical estimation.
All probabilistic reference models have in common that they estimate the true
probability density function (pdf) of measurements. For example a pdf represented by
a GMM can be obtained by applying a standard learning approach such as clustering
to the training data and representing each cluster by a Gaussian. An alternative
approach, proposed by Makris and Ellis, [21] is to learn entry and exit points from
examples and represent them as a Gaussian mixture. Trajectories are represented by a
topological graph.
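
As an illustration, a reference model of this kind can be sketched with a GMM in a few lines of Python (scikit-learn is used here for brevity; the feature dimension and training data are placeholders):

import numpy as np
from sklearn.mixture import GaussianMixture

# Training phase: fit the reference model on outputs logged during known-good
# operation (random placeholder data here).
normal_outputs = np.random.rand(500, 4)
reference = GaussianMixture(n_components=3, random_state=0).fit(normal_outputs)

def goodness(sample):
    # Log-likelihood of the current output under the reference model;
    # low scores indicate abnormal or degraded operation.
    return float(reference.score_samples(np.asarray(sample).reshape(1, -1))[0])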
The scene reference model together with a quality metric forms the knowledge
base of the self-adaptive system. It allows the system to judge the quality of the
system output and to select parameters that are optimal with respect to a quality
metric. The success of the self-adaptive technique depends on the representativeness
of this scene reference model and its metric. As a consequence, model generation is
an important step within this approach.
For commercial applications, incremental techniques for learning the scene
reference model are especially desirable, because only a limited amount of ground-truth
data may be available for initialization in each environment. Such techniques
have the great advantage that they can be refined as more data becomes available.
3.2 Error Recovery
According to Kephart and Chess [2], error recovery (or self-healing in their
terminology) is composed of three phases: error detection, error diagnosis, and error
repair. We have explored techniques for self-healing perceptual components based on
the addition of modules for "Error Detection", "Error Classification" and "Error
Repair" for our tracking-based perceptual component.
The "Error Detection" module monitors the output of a perceptual component
maintaining a target history and a binary classifier. A variety of different binary
classification methods may be used for error detection. In our experiments, we have
used a support vector machine, followed by a running average of the distance from
the decision boundary for the last N cycles. As long as the running average remains
within the normal range, targets parameters are simply added to the target history.
When a target is classified as abnormal recent sequence is extracted from the target
history for further processing.
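
A hedged sketch of this detection scheme in Python (using scikit-learn rather than LIBSVM; the feature layout, training data, and window length N=20 are placeholders):

import numpy as np
from collections import deque
from sklearn.svm import SVC

# Placeholder training set: feature vectors labeled valid (+1) or erroneous (-1).
X = np.random.rand(300, 4)
y = np.where(np.random.rand(300) > 0.3, 1, -1)
detector = SVC(kernel="rbf").fit(X, y)

margins = deque(maxlen=20)   # distances from the boundary for the last N cycles

def is_abnormal(features):
    margins.append(float(detector.decision_function(
        np.asarray(features).reshape(1, -1))[0]))
    return float(np.mean(margins)) < 0.0   # below the boundary on average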
The recent target history for an abnormal target is passed to the error classification
component for diagnosis. Error classification is a two-stage multi-class recognition
process that assigns the target history to one of a known set of error classes, or a
special "unkown" class. The first stage classifier discriminates known from unknown
error classes. The second stage assigns the target history to one of the known error
classes. Both stages are implemented as support vector machines.
Error classification is based on a set of features extracted from the recent target
history. Features that we have used include:
• the mean target area
• the mean target elongation
• the mean target motion energy
• the mean target area variation energy.
Both error classifiers are implemented as support vector machine (SVM) classifiers
using the Radial Basis Function (RBF) kernel.
The first SVM classifier is trained by considering all training data from all known
error classes as a single class. The implementation uses LIBSVM [22] to learn this
one-class SVM. The second classifier is a multi-class SVM learned from the samples
of the known error classes. An RBF kernel is also used for this classifier.
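
The two-stage scheme might be sketched as follows (again with scikit-learn in place of LIBSVM [22]; the training data, class labels, and nu parameter are placeholders):

import numpy as np
from sklearn.svm import SVC, OneClassSVM

# Placeholder training data: one feature vector per target history, with labels
# drawn from the known error classes.
X_err = np.random.rand(200, 4)
y_err = np.random.randint(0, 3, 200)

# Stage 1: one-class SVM over all known error classes pooled together,
# separating "known" from "unknown" errors.
stage1 = OneClassSVM(kernel="rbf", nu=0.1).fit(X_err)
# Stage 2: multi-class SVM over the known error classes.
stage2 = SVC(kernel="rbf").fit(X_err, y_err)

def classify_error(features):
    f = np.asarray(features).reshape(1, -1)
    if stage1.predict(f)[0] == -1:          # outside the known-error region
        return "unknown"
    return int(stage2.predict(f)[0])        # index of the diagnosed error class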
A database of repair strategies associates each error class with a particular strategy.
In a fully automatic system, the strategies would be discovered automatically from a
large set of examples and a large set of possible commands for error repair. An
appropriate method would be the acquisition of successful error repair procedures by
trial and error. Unfortunately, such a fully automatic system would require a very long
time to generate a successful set of repair procedures.
As with other systems that use expert knowledge to control a vision system [23],
[24], we use hand-coded repair procedures. Such repair procedures capture the expert
knowledge of the designer. We have used this approach to design simple but efficient
repair strategies for perceptual components.
New repair strategies can be added during the system lifetime as new classes of
errors are detected. Periodically, the "unknown" classes are reviewed by a human who
may assign the history to a known error class and thus update the error classifier, or assign
them to a new error class. When a new error class is detected, the classification
process is updated, and a repair procedure is selected or encoded by the human. Error
repair may involve deleting spurious targets, changing the target parameters or the detection
module used for a target, changing the parameters for initial target detection, or changing
the set of modules used by the supervisor during each cycle.
There is not necessarily a one-to-one relation between error classes and repair
strategies: a given strategy can be used to repair errors from various classes. In the case
of the tracking system, we use some simple repair procedures such as killing targets
that are identified as false positives by the classifier, refreshing the background in
problematic regions or even doing nothing (e.g. in the case of a "not an error" class or
when the erroneous target has already disappeared).
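
A minimal sketch of such a strategy table; the class names and the component interface are hypothetical:

# Hypothetical repair-strategy table: each diagnosed error class maps to a
# procedure, and several classes may share one strategy.
def kill_target(component, target):
    component.targets.remove(target)          # remove a confirmed false positive

def refresh_background(component, target):
    component.refresh_background(target.roi)  # assumed component method

def no_op(component, target):
    pass                                      # e.g. for a "not an error" class

REPAIR_STRATEGIES = {
    "false_positive": kill_target,
    "stale_background": refresh_background,
    "not_an_error": no_op,
}

def repair(component, target, error_class):
    REPAIR_STRATEGIES.get(error_class, no_op)(component, target)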
4 Conclusions
Autonomic computing makes possible the design of systems that adapt processing to
changes in operating conditions. In this paper we have described how autonomic
methods can be included within a component-oriented design. Software components
for computer vision can be provided with automatic parameter initialization and
regulation, self-monitoring, and error detection and repair, to provide systems in
which services may be assembled dynamically from a collection of independent
components.
References
1. P. Horn, “Autonomic Computing: IBM's Perspective on the State of Information
Technology,” http://researchweb.watson.ibm.com/autonomic, Oct. 15, 2001.
2. J. O. Kephart, and D. M. Chess, "The Vision of Autonomic Computing", IEEE Computer,
Vol. 36, No. 1, pp 41-50, Jan. 2003.
3. V. Poladian et al., “Dynamic Configuration of Resource-Aware Services,” Proc. 24th Int.
Conf. Software Engineering, pp. 604-613, May 2004.
4. Murino, V., Foresti, G., and Regazzoni, C. (1996). A distributed probabilistic system for
adaptive regulation of image processing parameters. IEEE Trans. on Systems, Man, and
Cybernetics - Part B: Cybernetics, 26(1):1–20.
5. Scotti, G., Marcenaro, L., and Regazzoni, C. (2003). A s.o.m. based algorithm for video
surveillance system parameter optimal selection. In Advanced Video and Signal Based
Surveillance.
6. Min, J., Powell, M., and Bowyer, K. (2004). Automated performance evaluation of range
image segmentation algorithms. IEEE Trans. on Systems, Man, and Cybernetics - Part B:
Cybernetics, 34(1):263–271
7. P. Robertson and J. M. Brady, "Adaptive image analysis for aerial surveillance", IEEE
Intelligent Systems, 14(3), pp30–36, May/June 1999.
8. D. Hall, R. Emonet, J. L. Crowley, "An automatic approach for parameter selection in self-
adaptive tracking", International Conference on Computer Vision Theory and Applications
(VISAPP), Springer Verlag, Setúbal, Portugal, Feb., 2006
9. D. Hall, "Automatic parameter regulation of perceptual systems", in Image and Vision
Computing, Volume 24, Issue 8, pp 870-881, Aug. 2006.
10. J. L. Crowley, "Integration and Control of Reactive Visual Processes", Robotics and
Autonomous Systems, Vol. 15, No. 1, December 1995.
11. A. Finkelstein, J. Kramer and B. Nuseibeh, "Software Process Modeling and Technology",
Research Studies Press, John Wiley and Sons Inc, 1994.
12. M. Shaw and D. Garlan, Software Architecture: Perspectives on an Emerging Discipline,
Prentice Hall, 1996.
13. K. Schwerdt and J. L. Crowley, "Robust Face Tracking using Color", 4th IEEE
International Conference on Automatic Face and Gesture Recognition", Grenoble, France,
March 2000.
14. J. Piater and J. Crowley, "Event-based Activity Analysis in Live Video using a Generic
Object Tracker", Performance Evaluation for Tracking and Surveillance, PETS-2002,
Copenhagen, June 2002.
15. A. Lux, "The Imalab Method for Vision Systems", International Conference on Vision
Systems, ICVS-03, Graz, April 2003.
16. M. Danninger, T. Kluge, R. Stiefelhagen, "MyConnector: analysis of context cues to predict
human availability for communication", International Conference on Multimodal
Interaction, ICMI 2006: pp12-19, Trento, 2006.
17. J. L. Crowley, O. Brdiczka, and P. Reignier, "Learning Situation Models for Understanding
Activity", In The 5th International Conference on Development and Learning 2006
(ICDL06), Bloomington, Il., USA, June 2006.
18. R. Kalman, "A new approach to Linear Filtering and Prediction Problems", Transactions of
the ASME, Series D. J. Basic Eng., Vol 82, 1960.
19. R. Emonet, D. Vaufreydaz, P. Reignier, J. Letessier, "O3MiSCID: an Object Oriented
Opensource Middleware for Service Connection, Introspection and Discovery", 1st IEEE
International Workshop on Services Integration in Pervasive Environments, June 2006.
20. A. Ferrer-Biosca and A. Lux, "A Visual Service for Distributed Environments: a Bayesian
3D Person Tracker", PRIMA internal Report, 2007.
21. Makris, D. and Ellis, T. (2003). Automatic learning of an activity-based semantic scene
model. In Advanced Video and Signal Based Surveillance, pages 183–188.
22. C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines", available online.
23. B. Géoris, F. Brémond, M. Thonnat, and B. Macq, "Use of an evaluation and diagnosis
method to improve tracking performances", In International Conference on Visualization,
Imaging and Image Processing, September 2003.
24. C. Shekhar, S. Moisan, R. Vincent, P. Burlina, and R. Chellappa, "Knowledge-based
control of vision systems", Image and Vision Computing, 17(9), pp 667–683, 1999.