The document presents an overview of probability collectives (PC), a distributed optimization approach for complex multi-agent systems. PC treats each variable as an agent that iteratively samples strategies and updates its probability distribution to minimize a global objective function in a cooperative manner. PC draws on concepts from game theory, statistical physics, and optimization; it handles continuous, discrete, and mixed-variable problems in a scalable way and is robust to agent failures. Constraint-handling techniques are developed to apply PC to constrained optimization problems.
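The loop described above can be illustrated with a minimal pure-Python sketch of probability collectives: two agents each hold a probability distribution over a small strategy set, estimate the expected global cost by Monte Carlo sampling of the other agent's play, and apply a Boltzmann re-weighting with an annealed temperature. The objective, strategy set, and schedule are illustrative assumptions, not taken from the document.

```python
import math
import random

random.seed(0)

STRATEGIES = [-3, -2, -1, 0, 1, 2, 3]

def objective(x, y):
    # Global cost the two agents cooperatively minimize; optimum at (1, -2).
    return (x - 1) ** 2 + (y + 2) ** 2

# Each agent (variable) keeps its own probability distribution over strategies.
probs = [[1.0 / len(STRATEGIES)] * len(STRATEGIES) for _ in range(2)]

def expected_cost(agent, idx, samples=200):
    # Expected global cost when `agent` plays strategy `idx` while the other
    # agent samples from its current distribution (Monte Carlo estimate).
    total = 0.0
    for _ in range(samples):
        other = random.choices(STRATEGIES, weights=probs[1 - agent])[0]
        x, y = (STRATEGIES[idx], other) if agent == 0 else (other, STRATEGIES[idx])
        total += objective(x, y)
    return total / samples

T = 5.0  # temperature: high -> exploration, low -> exploitation
for _ in range(40):
    for agent in range(2):
        costs = [expected_cost(agent, i) for i in range(len(STRATEGIES))]
        weights = [math.exp(-c / T) for c in costs]  # Boltzmann re-weighting
        z = sum(weights)
        probs[agent] = [w / z for w in weights]
    T = max(0.2, T * 0.9)  # anneal the temperature

best = [STRATEGIES[max(range(len(STRATEGIES)), key=p.__getitem__)] for p in probs]
print(best)  # should settle near the optimum (1, -2)
```

Because each agent updates only its own distribution from sampled costs, the scheme stays decentralized, which is the property that makes PC robust to agent failures.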
Defuzzification is the process of producing a quantifiable result in crisp logic from fuzzy sets and their corresponding membership degrees; it maps a fuzzy set to a crisp value and is typically needed in fuzzy control systems.
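The most common defuzzification rule, the centroid (center-of-gravity) method, can be sketched in a few lines; the triangular fuzzy set below is an illustrative example.

```python
# Centroid (center-of-gravity) defuzzification of a sampled membership function.
def defuzzify_centroid(xs, mu):
    """Map a fuzzy set, given as samples (x, membership), to one crisp value."""
    num = sum(x * m for x, m in zip(xs, mu))
    den = sum(mu)
    return num / den

xs = [0, 1, 2, 3, 4]
mu = [0.0, 0.5, 1.0, 0.5, 0.0]  # triangular fuzzy set peaked at 2
print(defuzzify_centroid(xs, mu))  # 2.0
```

For a symmetric set the centroid coincides with the peak; skewed membership functions shift the crisp output toward the heavier side.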
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Fuzzified PSO for multiobjective economic load dispatch problem (eSAT Journals)
Abstract: Power system engineers strive to run the system with effective utilization of the real and reactive power generated by the plants. Reactive power is used to improve the voltage profile as well as to reduce system losses. Membership functions are written for fuel cost, losses, stability index and emission release. An attempt is made in this paper to optimize each objective individually, including minimization of real power loss over the transmission lines, using a fuzzy logic approach. The basic assumption is that the Decision Maker (DM) has imprecise or fuzzy goals for satisfying each objective, so the multi-objective problem is formulated as a fuzzy satisfaction maximization problem, which is essentially a min-max problem. This fuzzy decision satisfaction maximization technique is an efficient way to obtain trade-off solutions in multi-objective problems. The developed algorithm for optimizing each objective is tested on the IEEE 30-bus system, and simulation results for the IEEE 30-bus network are presented to show the effectiveness of the proposed method. Keywords: real power, reactive power, losses, membership functions, fuzzy logic, trade-off solution
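The max-min satisfaction formulation the abstract describes can be sketched as follows: each objective gets a linear membership (satisfaction) function, and the chosen solution maximizes the minimum satisfaction across objectives. The candidate dispatch values and bounds below are illustrative, not from the IEEE 30-bus study.

```python
def linear_membership(value, best, worst):
    """Satisfaction degree: 1 at the best value, 0 at the worst, linear between."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

# Candidate solutions with (fuel cost $/h, losses MW, emission t/h) -- toy data.
candidates = {
    "A": (802.0, 9.5, 0.24),
    "B": (815.0, 8.1, 0.21),
    "C": (830.0, 7.4, 0.26),
}
bounds = [(800.0, 850.0), (7.0, 10.0), (0.20, 0.30)]  # (best, worst) per objective

def overall_satisfaction(objs):
    # Fuzzy min-max rule: the trade-off degree is the minimum membership.
    return min(linear_membership(v, lo, hi) for v, (lo, hi) in zip(objs, bounds))

best = max(candidates, key=lambda k: overall_satisfaction(candidates[k]))
print(best, round(overall_satisfaction(candidates[best]), 3))
```

Note how the min operator penalizes any solution that excels on one objective while failing another, which is exactly why the max-min rule yields balanced trade-off solutions.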
Non-Blind Deblurring Using Partial Differential Equation Method (Editor IJCATR)
In this paper, a new idea for a two-dimensional image deblurring algorithm is introduced that uses basic concepts of PDEs. The main approaches to estimating the degradation function (the PSF, known a priori in non-blind deblurring) for use in restoration are observation, experimentation and mathematical modeling. Here, PDE-based mathematical modeling is proposed to model the degradation and recovery process. Restoration methods such as Wiener filtering, inverse filtering [1], constrained least squares, and Lucy-Richardson iteration remove motion blur either via the Fourier transform in the frequency domain or via optimization techniques. The main difficulty with these methods is estimating the deviation of the restored image from the original at individual points, a consequence of their frequency-domain processing. Another method, travelling-wave deblurring, is an approach that works in the spatial domain. A PDE-type observation model describes well several physical mechanisms, such as relative motion between the camera and the subject (motion blur), bad focusing (defocus blur), and a number of other mechanisms that are well modeled by a convolution. Finally, the PDE method is compared with existing restoration techniques such as Wiener and median filters [2], with results compared on the basis of PSNR calculated for various noises.
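The PSNR metric used for the comparison above is straightforward to compute; the tiny 8-bit "images" below are illustrative.

```python
import math

# Peak signal-to-noise ratio between a reference image and a restored image,
# here flattened to 1-D lists of 8-bit pixel values.
def psnr(reference, restored, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [10, 20, 30, 40]
out = [12, 18, 30, 41]
print(round(psnr(ref, out), 2))  # 44.61
```

Higher PSNR means the restored image is closer to the reference; values above roughly 30 dB are usually considered acceptable restorations for 8-bit images.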
Hand gesture recognition using discrete wavelet transform and hidden Markov m... (TELKOMNIKA JOURNAL)
Gesture recognition based on computer vision is an important part of human-computer interaction, but it falls short on several points: image brightness, recognition time, and accuracy. The goal of this research was therefore to create a well-performing hand gesture recognition system using the discrete wavelet transform and hidden Markov models. The first stage is pre-processing: the image is resized to 128x128 pixels and the skin color is segmented. The second stage is feature extraction using the discrete wavelet transform, which yields a feature vector for the image. The last stage is gesture classification using hidden Markov models, which compute the highest probability for the feature matrix obtained from the feature extraction stage. The system achieved 72% accuracy using 150 training and 100 test images covering five gestures. The new findings of this experiment concern the effects of acquisition and pre-processing: accuracy rose by 14% compared to the 58% obtained on Sebastien's dataset, an improvement supported by the brightness and contrast values.
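The feature-extraction stage can be sketched with one Haar DWT level: the approximation (LL) band reduces, up to a scale factor, to averaging 2x2 pixel blocks, shrinking the image while keeping its coarse structure for the feature vector. This is a sketch; real systems would typically use PyWavelets' `dwt2`.

```python
# Approximation (LL) sub-band of one Haar DWT level: average each 2x2 block.
# Assumes even image dimensions, as with the 128x128 input described above.
def haar_ll(image):
    h, w = len(image), len(image[0])
    return [
        [
            (image[r][c] + image[r][c + 1]
             + image[r + 1][c] + image[r + 1][c + 1]) / 4
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

img = [
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [90, 90, 30, 30],
    [90, 90, 30, 30],
]
print(haar_ll(img))  # [[10.0, 50.0], [90.0, 30.0]]
```

Applying this repeatedly halves each dimension per level, so a 128x128 frame collapses into a compact feature matrix suitable as HMM input.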
An Inclusive Analysis on Various Image Enhancement Techniques (IJMER)
Digital image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. It provides a multitude of choices for improving the visual quality of images or producing a better transform representation for later automated image processing. Enhancement techniques differ from one field to another; the existing techniques can be classified into two categories, spatial-domain and frequency-domain enhancement. Many images, such as satellite, medical, and aerial images and even everyday photographs, suffer from poor contrast and noise. Enhancement improves the quality (clarity) of such images for human viewing by removing blur and noise, increasing contrast, and revealing image details.
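One classic spatial-domain technique from this family is histogram equalization, sketched below for a low-contrast 8-bit signal (illustrative pixel values; assumes more than one occupied grey level).

```python
# Histogram equalization: spread a narrow range of grey levels across the
# full 8-bit range by remapping each pixel through the cumulative histogram.
def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied level
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min)     # assumes n > cdf_min
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

low_contrast = [100, 100, 101, 102, 102, 103]  # values crowded into 100-103
print(equalize(low_contrast))  # [0, 0, 64, 191, 191, 255]
```

The output spans the full 0-255 range, which is exactly the contrast stretch that makes poorly exposed satellite or medical images readable.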
8 ijaems jan-2016-20-multi-attribute group decision making of internet public... (INFOGAIN PUBLICATION)
In this paper, an emergency group decision method is presented to cope with internet public opinion emergencies using interval intuitionistic fuzzy linguistic values. First, the initial weight of each emergency expert is adjusted by the deviation between that expert's decision matrix and the group average decision matrix of interval intuitionistic fuzzy numbers. The weighted collective decision matrix of all the emergencies is then computed from the optimal expert weights. Applying the interval intuitionistic fuzzy weighted arithmetic average operator yields a comprehensive alarm value for each internet public opinion emergency. By ranking the score and accuracy values of each emergency, the most critical internet public emergency can easily be determined, helping the government take the appropriate emergency actions. Finally, a numerical example illustrates the effectiveness of the proposed emergency group decision method.
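The final ranking step can be sketched with the standard score and accuracy functions for interval-valued intuitionistic fuzzy numbers; the emergency values below are purely illustrative.

```python
# Score and accuracy of an interval-valued intuitionistic fuzzy number
# ([mu_l, mu_u], [nu_l, nu_u]): standard definitions used for ranking.
def score(mu, nu):
    return (mu[0] + mu[1] - nu[0] - nu[1]) / 2  # in [-1, 1]

def accuracy(mu, nu):
    return (mu[0] + mu[1] + nu[0] + nu[1]) / 2  # in [0, 1]

# Comprehensive alarm values of three hypothetical emergencies.
emergencies = {
    "E1": ((0.5, 0.6), (0.2, 0.3)),
    "E2": ((0.4, 0.7), (0.1, 0.2)),
    "E3": ((0.3, 0.5), (0.3, 0.4)),
}
# Higher score first; accuracy breaks ties.
ranked = sorted(emergencies,
                key=lambda k: (score(*emergencies[k]), accuracy(*emergencies[k])),
                reverse=True)
print(ranked)  # most critical emergency first
```

Ranking by score with accuracy as the tie-breaker is the conventional order relation for these numbers, so the head of the list is the emergency to act on first.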
It is rather surprising that in software engineering, standard measurement units have yet to be widely accepted and used; every other engineering discipline has its own. By and large, effort is the most commonly used parameter for measuring software initiatives. The problem, of course, is that effort is not an independent variable: it depends on who is doing the work and how it is done. This presentation looks at an approach that has been used to convert the large amount of effort data usually collected in an organization into something that can meaningfully be used for estimation and comparison purposes.
A detailed note on the Fourier Transform of the Unit Step Signal. This text explains the various approaches used in the evaluation of the Fourier transform of the unit step signal.
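For reference, the result that the note evaluates: writing the unit step as a constant plus a signum term, $u(t) = \tfrac{1}{2} + \tfrac{1}{2}\operatorname{sgn}(t)$, and using the known transforms $\mathcal{F}\{1\} = 2\pi\,\delta(\omega)$ and $\mathcal{F}\{\operatorname{sgn}(t)\} = \tfrac{2}{j\omega}$ gives

```latex
U(\omega) = \mathcal{F}\{u(t)\}
          = \mathcal{F}\Big\{\tfrac{1}{2}\Big\} + \tfrac{1}{2}\,\mathcal{F}\{\operatorname{sgn}(t)\}
          = \pi\,\delta(\omega) + \frac{1}{j\omega}.
```

The delta term carries the DC content of the step, which is why naive evaluation of the integral alone misses it.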
Can the performance of a computer system be increased through overclocking such that the percentage gain of work performed is greater than the percentage increase of electricity consumed?
These slides present optimization using evolutionary computing techniques. Particle swarm optimization and genetic algorithms are discussed in detail, and multi-objective optimization is also covered in depth.
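The PSO update rule covered in such material can be sketched minimally: each particle's velocity blends inertia, a pull toward its personal best, and a pull toward the global best. The sphere objective and parameter values below are standard illustrative choices, not taken from the slides.

```python
import random

random.seed(1)

def f(p):
    # Toy objective: the 2-D sphere function, minimized at the origin.
    return p[0] ** 2 + p[1] ** 2

W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
swarm = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]          # personal bests
gbest = min(pbest, key=f)[:]           # global best

for _ in range(100):
    for i, p in enumerate(swarm):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - p[d])
                         + C2 * r2 * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if f(p) < f(pbest[i]):
            pbest[i] = p[:]
            if f(p) < f(gbest):
                gbest = p[:]

print(round(f(gbest), 6))  # close to 0
```

The two random factors r1 and r2 are what keep the swarm stochastic; with w < 1 the velocities contract and the swarm converges on the best region found.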
CONSTRUCTING A FUZZY NETWORK INTRUSION CLASSIFIER BASED ON DIFFERENTIAL EVOLU... (IJCNCJournal)
This paper presents a method for constructing intrusion detection systems based on efficient fuzzy rule-based classifiers. The design of a fuzzy rule-based classifier from a given input-output data set can be posed as a feature selection and parameter optimization problem. Differential evolution is used for parameter optimization of the fuzzy classifiers, while a binary harmony search algorithm selects the relevant features. The performance of the designed classifiers is evaluated on the KDD Cup 1999 intrusion detection dataset, and the optimal classifier is selected using the Akaike information criterion. The optimal intrusion detection system has a 1.21% type I error and a 0.39% type II error. A comparative study with other methods confirmed the adequacy of the proposed method.
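The differential evolution search used for parameter optimization can be sketched as a generic DE/rand/1/bin loop; the paper's actual objective (the fuzzy classifier's error rate) is replaced here by a toy sphere function, so only the mutation/crossover/selection machinery is shown.

```python
import random

random.seed(2)

def objective(x):
    # Stand-in for the classifier error rate: 3-D sphere, minimized at 0.
    return sum(v * v for v in x)

DIM, NP, F, CR = 3, 15, 0.8, 0.9   # dimension, population, scale, crossover
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for _ in range(150):
    for i in range(NP):
        # DE/rand/1: mutant = a + F*(b - c) from three distinct others.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(DIM)]
        # Binomial crossover; jrand guarantees at least one mutated gene.
        jrand = random.randrange(DIM)
        trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(DIM)]
        # Greedy selection: keep the trial if it is no worse.
        if objective(trial) <= objective(pop[i]):
            pop[i] = trial

best = min(pop, key=objective)
print(round(objective(best), 6))
```

In the paper's setting each individual would encode the fuzzy rule parameters, and the same loop would drive them toward low classification error.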
Genetic Algorithm for solving Dynamic Supply Chain Problem (AI Publications)
The solution of a dynamic supply chain problem is studied using both genetic algorithms and multistage dynamic programming. This is made possible by employing the Euler approximation method to approximate the derivatives of the variables. The problem is reformulated as an unconstrained optimization problem, which is solved by a genetic algorithm and by multistage dynamic programming, and the solutions obtained by the two methods are evaluated against each other.
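The Euler step that turns the continuous dynamics into discrete stages can be sketched on a simple inventory balance dI/dt = p(t) - d(t); the production and demand rates below are illustrative, not from the paper.

```python
# Forward Euler discretization of dI/dt = p(t) - d(t):
# I[k+1] = I[k] + dt * (p[k] - d[k]), giving one decision stage per step.
def euler_inventory(i0, production, demand, dt=1.0):
    inv = [i0]
    for p, d in zip(production, demand):
        inv.append(inv[-1] + dt * (p - d))
    return inv

print(euler_inventory(100.0, [20, 20, 25], [15, 30, 20]))
# [100.0, 105.0, 95.0, 100.0]
```

Once the trajectory is a list of stage values like this, the control variables at each stage become ordinary decision variables for the GA or for dynamic programming.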
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat... (Waqas Tariq)
Software cost estimation deals with the financial and strategic planning of software projects; controlling the expensive investment of software development effectively is of paramount importance. The limitation of algorithmic effort prediction models is their inability to cope with the uncertainty and imprecision surrounding software projects at the early development stage. More recently, attention has turned to a variety of machine learning methods, and soft computing in particular, to predict software development effort. Fuzzy logic is one such technique that can cope with uncertainty. In the present paper, a Particle Swarm Optimization Algorithm (PSOA) is presented to fine-tune the fuzzy estimates for the development of software projects. The efficacy of the developed models is tested on 10 NASA software projects, 18 NASA projects and the COCOMO 81 database against various criteria for assessing software cost estimation models. A comparison of all the models shows that the developed models provide better estimation.
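One widely used assessment criterion for such models is MMRE (mean magnitude of relative error), sketched below on illustrative effort figures; the abstract does not name its criteria, so this is an assumed example of the kind of metric involved.

```python
# MMRE: average of |actual - predicted| / actual across projects.
# Lower is better; MMRE <= 0.25 is a commonly quoted acceptability threshold.
def mmre(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 50.0, 200.0]     # actual effort (person-months), toy data
predicted = [90.0, 55.0, 220.0]   # model predictions
print(round(mmre(actual, predicted), 4))  # 0.1
```

Comparing two estimators then reduces to comparing their MMRE (and similar criteria) on the same project set.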
ODSC 2019: Sessionisation via stochastic periods for root event identification (Kuldeep Jiwani)
In today's world, the majority of information is generated by self-sustaining systems such as bots, crawlers, servers, and various online services. This information flows along the axis of time and is generated by these actors under some complex logic: for example, a stream of buy/sell order requests from an order gateway in finance, a stream of web requests from a monitoring or crawling service, or a hacker's bot sitting on the internet attacking various computers. We may not be able to know the motive or intention behind these data sources, but via unsupervised techniques we can try to infer patterns or correlate events based on their repeated occurrence in time. Associating a chain of events in time order supports root event analysis. In certain cases, time-ordered correlation and root event identification are good enough to automatically identify the signatures of malicious actors and take corrective actions to stop cyber attacks, malicious social campaigns, and so on.
Sessionisation is one such unsupervised technique: it tries to find the signal in a stream of timestamped events. In an ideal world this would reduce to finding the periods of a mixture of sinusoidal waves, but in the real world it is a much more complex activity, as even the systematic events generated by machines over the internet behave erratically. The notion of a period therefore changes: it can no longer be a single number, but must be treated as a random variable with an expected value and an associated variance. Hence we need to model "stochastic periods" and learn their probability distributions in an unsupervised manner.
The main focus of this talk is to showcase applied data science techniques for discovering stochastic periods. There are many ways to obtain periods from data, so the journey begins with a walkthrough of existing techniques such as the FFT (Fast Fourier Transform), followed by Gaussian mixture models. After highlighting the shortcomings of these techniques, we succinctly explain one of the most general non-parametric Bayesian approaches to the problem. Without going too deep into the math, we then return to applied data science and discuss a much simpler technique that can solve the same problem when certain assumptions are satisfied.
In this talk we will demonstrate some time-based patterns we discovered while working on a security analytics use case that uses sessionisation. The patterns are illustrated with a publicly available open-source malware attack dataset.
Key concepts explained in the talk: sessionisation, Bayesian machine learning techniques, Gaussian mixture models, kernel density estimation, FFT, stochastic periods, probabilistic modelling, Bayesian non-parametric methods
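The FFT starting point the talk describes can be sketched as follows: bin the event stream into per-second counts, take the spectrum of the centered counts, and read the dominant period off the strongest bin. This uses a pure-Python DFT for self-containedness; `numpy.fft.rfft` is the practical choice, and the synthetic 8-second "bot" signal is an illustrative assumption.

```python
import math

# Dominant period (in bins) of a binned event stream via a brute-force DFT.
def dominant_period(counts):
    n = len(counts)
    mean = sum(counts) / n
    centered = [c - mean for c in counts]  # drop the DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(c * math.cos(-2 * math.pi * k * t / n)
                 for t, c in enumerate(centered))
        im = sum(c * math.sin(-2 * math.pi * k * t / n)
                 for t, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n / best_k

# Synthetic actor whose request rate oscillates with an 8-second period,
# binned at 1-second resolution over 64 seconds.
counts = [2 + 2 * math.sin(2 * math.pi * t / 8) for t in range(64)]
print(dominant_period(counts))  # 8.0
```

This deterministic spectrum is exactly what breaks down on erratic real-world streams, motivating the talk's move from a single period number to a stochastic-period distribution.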
Event link: http://www.meetup.com/NYC-Open-Data/events/161342472/
A free R workshop given by SupStat Inc at New York R user group and NYC Open Data Meetup group
Determining costs of construction errors, based on fuzzy logic systems ipcmc2... (Mohammad Lemar ZALMAİ)
In construction projects, construction errors negatively affect production, influencing the overall project in both time and budget. Generally, construction companies cannot estimate these errors during the bidding process; important issues are therefore left out of the contract budget, and in the contracting period the project participants assume the project will be executed as scheduled and designed. During the project, the costs of various construction processes then turn out higher than the estimated values because of construction errors.
Errors recognized during the construction process cause time and financial losses, while errors noticed after the project's completion cause repair and correction costs. Moreover, the company may gain a bad reputation in the sector.
The key aim of this study is to analyze project costs by considering construction errors and the re-construction costs due to labor errors, using a fuzzy interpretation mechanism. The methodology is applied to a residential construction project. With it, forthcoming extra costs related to construction errors can be estimated, and precautions can be taken against later legal conflicts between the parties.
An Improved Adaptive Multi-Objective Particle Swarm Optimization for Disassem... (IJRESJOURNAL)
With the development of productivity and fast economic growth, problems of environmental pollution, resource utilization and low product recovery rates have emerged, so more and more attention is being paid to the recycling and reuse of products. However, since the complexity of the disassembly line balancing problem (DLBP) increases with the number of parts in a product, finding the optimal balance is computationally intensive. To improve the ability of particle swarm optimization (PSO) to solve the DLBP, this paper proposes an improved adaptive multi-objective particle swarm optimization (IAMOPSO) algorithm. First, an evolution factor parameter is introduced to judge the state of evolution using fuzzy classification, and feedback from the evolutionary environment is used to adjust the inertia weight and acceleration coefficients dynamically. Finally, a dimensional learning strategy based on information entropy is used, in which each learning object is uncertain. Test results on a series of instances of different sizes verify the effectiveness of the proposed algorithm.
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class... (Waqas Tariq)
The selection of inputs is one of the most important components of classification algorithms for data mining and pattern recognition, since even the best classifier will perform badly if its inputs are not well chosen. Big data and computational complexity are the main causes of poor performance and low accuracy in classical classifiers; in other words, the complexity of a classifier method is inversely proportional to its classification efficiency. For this purpose, two hybrid classifiers have been developed by cascading type-1 and type-2 fuzzy c-means clustering with a classifier: a large number of data points are reduced by fuzzy c-means clustering before being fed to the classification algorithm as inputs. The aim of this study is to investigate the effect of fuzzy clustering on well-known and useful classifiers such as artificial neural networks (ANN) and support vector machines (SVM), and the positive effects of the proposed algorithms are investigated on different data sets.
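The data-reduction step proposed above can be sketched with one-dimensional fuzzy c-means: many points are summarized by a few cluster centers before anything reaches the classifier. This is a sketch with a deterministic two-center initialization; a full implementation is available in scikit-fuzzy.

```python
# Fuzzy c-means in 1-D with two clusters, fuzzifier m = 2.
def fcm(points, m=2.0, iters=50):
    centers = [min(points), max(points)]  # deterministic init for two clusters
    for _ in range(iters):
        u = []
        for x in points:
            d = [abs(x - v) or 1e-12 for v in centers]  # guard zero distance
            # Standard FCM membership: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
            u.append([1.0 / sum((d[i] / dj) ** (2 / (m - 1)) for dj in d)
                      for i in range(len(centers))])
        # Membership-weighted center update.
        centers = [
            sum(u[k][i] ** m * x for k, x in enumerate(points))
            / sum(u[k][i] ** m for k in range(len(points)))
            for i in range(len(centers))
        ]
    return sorted(centers)

data = [0.8, 1.0, 1.2, 4.7, 5.0, 5.3]  # two obvious groups, toy data
centers = fcm(data)
print([round(v, 2) for v in centers])  # centers near 1.0 and 5.0
```

Feeding the classifier the handful of centers (or the membership-weighted summaries) instead of every raw point is what cuts the training cost in the hybrid scheme.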
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Probability Collectives
1. 1
Probability Collectives: A Distributed
Optimization for Multi-Agent Systems
Anand J. Kulkarni, Tai Kang
Optimization and Agent Technology Research (OAT Research) Lab
www.oatresearch.org
2. 2
Outline
Introduction
Motivation and Objectives
Probability Collectives (PC)
Unconstrained PC Formulation
Validation of the Unconstrained PC
Constraint Handling Techniques
Heuristic Approach
Penalty Function Approach
Feasibility-based Rule I
Feasibility-based Rule II
Conclusions
Future Recommendations
3. 3
Introduction- What are Complex Systems?
Complex systems: a broad term encompassing a research approach to
problems in diverse areas such as social structures, earthquake
prediction, climate change and weather forecasting, counter-terrorism,
financial systems, project rescheduling, molecular biology,
cybernetics, etc.
Complex systems generally have many (interconnected) components that
not only interact but also compete with one another to deliver the best they
can toward the desired system objective.
Any move by a component affects the moves of the other components, and so
on. It is therefore difficult to understand the behavior of the entire system simply
by knowing the individual components and their behavior.
Complex Systems in Engineering:
1) Internet Search
2) Manufacturing and Scheduling
3) Supply Chain
4) Sensor Networks
5) Aerospace Systems
6) Telecommunication Infrastructure
4. 4
Introduction- Solving Complex Systems- Centralized
System
A single/central agent is supposed to have all the capabilities, such as
problem solving, in order to alleviate the user's cognitive load.
The agent is provided with general knowledge, storage space, etc. to deal
with a wide variety of tasks/computations.
Limitations:
1. Communication Overload
2. Computational Overload
3. Large Storage Space
4. Processing Bottleneck
5. Adds Latency (delay)
6. Limited Scalability
7. Reduced Robustness
[Figure: a centralized system, with a single central agent connected to all tasks/sensors.]
5. 5
Introduction- Solving Complex Systems- Distributed System
In a decentralized and distributed system, the total work is decomposed into
different expert modules. Each expert module is an autonomous
agent, i.e. it has local control and decision-making. All agents achieve their
individual goals while contributing towards the system objective.
Local cooperation avoids duplication of work.
Advantages
1. Reduced Risk of Bottleneck
2. Reduced Risk of Latency
3. Robustness
4. Highly Scalable
5. Easy to Maintain & Debug
Challenges
1. Coordination
2. Handling Constraints
6. Probability Collectives (PC): Motivation and Objectives
• GA, PSO, ACO, the Wasp Colony System, Swarm-bot, etc. have been
used for solving complex problems.
• As the complexity of the problem domain grew, such problems
became quite tedious to solve using the above algorithms.
• Probability Collectives is an emerging AI tool in the framework of
COllective INtelligence (COIN) for modeling and controlling
distributed MAS. It was proposed by Dr. David Wolpert in 1999 in a
technical report presented to NASA and further elaborated by S.R.
Bieniawski in 2005.
• It is a natural tool for dealing with increasing complexity, as it
decomposes the problem into sub-problems.
7. State-of-the-Art - Probability Collectives (PC)
• Joint Routing and Resource Allocation in Wireless Sensor Networks
--- Choosing the optimal number of nodes in a cluster and the cluster head
(Ryder et al. 2005, Mohammed et al. 2007)
• Solving the Benchmark Problems
– Multimodality, non-separability, non-linearity, etc. (Huang et al. 2005)
– Robustness, rate of descent, trapping in false minima, etc.
• University Course Scheduling (Autry et al. 2008)
7
8. State-of-the-Art - Probability Collectives (PC)
8
Mechanical Design
10 bar truss problem
(Bieniawski et al. 2004)
Conflict Resolution
Airplanes Collision Avoidance
(Sislak et al. 2011)
Airplane fleet assignment
(Wolpert et al. 2004)
9. Objectives: Probability Collectives (PC)
Develop a more generic and powerful PC approach by
incorporating the constraint handling techniques necessary for solving
constrained optimization problems, and further test these techniques
by solving a variety of challenging constrained problems.
Solve the path planning of Multiple Unmanned Aerial Vehicles
(MUAVs) by modeling it as a MTSP and solving it with the PC approach.
Modify the PC approach to make it more efficient and faster, retaining
- its inherent and desirable characteristics
- the key benefits of being a distributed, decentralized and
cooperative approach
10. Characteristics of PC
PC works through the COllective INtelligence (COIN) framework,
exploiting the advantages of a decentralized, distributed and cooperative
approach.
• Deep connections to Game Theory, Statistical Physics and
Optimization
• Successfully exploits the important concept of "Nash Equilibrium"
• PC can be applied to continuous, discrete or mixed variables
• Works on probability distributions directly, incorporating uncertainty
11. Characteristics of PC
• The Homotopy function for each agent (variable) helps the
algorithm jump out of local minima and reach the global minimum.
• It can successfully avoid the tragedy of commons, skipping
local minima to reach the true global minimum.
• It can efficiently handle problems with a large number of variables,
i.e. it is scalable.
• It is robust and can accommodate the agent failure case.
12. Formulation of Unconstrained PC
• Consider a general unconstrained problem (in the minimization sense)
comprising N variables:
G(\mathbf{X}) = f(X_1, X_2, \ldots, X_i, \ldots, X_{N-1}, X_N)
• Variables ⇒ Agents/Players of a game being played iteratively.
• Initially, every agent i is given a sampling interval/space
\Psi_i \in [\Psi_i^{lower}, \Psi_i^{upper}].
• Every agent i randomly samples m_i strategies from within the
corresponding sampling interval \Psi_i, forming its strategy set
\mathbf{X}_i = \{X_i^{[1]}, X_i^{[2]}, \ldots, X_i^{[r]}, \ldots, X_i^{[m_i]}\}, \quad i = 1, 2, \ldots, N
with m_1 = m_2 = \cdots = m_i = \cdots = m_{N-1} = m_N = m.
13. Formulation of Unconstrained PC
The N strategy sets are
\mathbf{X}_1 = \{X_1^{[1]}, X_1^{[2]}, \ldots, X_1^{[m_1]}\}, \; \ldots, \; \mathbf{X}_i = \{X_i^{[1]}, X_i^{[2]}, \ldots, X_i^{[m_i]}\}, \; \ldots, \; \mathbf{X}_N = \{X_N^{[1]}, X_N^{[2]}, \ldots, X_N^{[m_N]}\}
Agent i selects its first strategy and randomly samples (guesses) one strategy
from every other agent's set as well, forming the combined strategy set
\mathbf{Y}_i^{[1]} = \{X_1^{[?]}, X_2^{[?]}, \ldots, X_i^{[1]}, \ldots, X_{N-1}^{[?]}, X_N^{[?]}\}
and computes the system objective G(\mathbf{Y}_i^{[1]}). Repeating this for every
strategy r of agent i,
\mathbf{Y}_i^{[r]} = \{X_1^{[?]}, X_2^{[?]}, \ldots, X_i^{[r]}, \ldots, X_N^{[?]}\} \Rightarrow G(\mathbf{Y}_i^{[r]}), \quad r = 1, 2, \ldots, m_i
yields the collection of system objectives \sum_{r=1}^{m_i} G(\mathbf{Y}_i^{[r]}).
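The sampling and combined-strategy steps above can be sketched in a few lines of Python (the interval bounds, sample count m, and the sphere objective used as G below are hypothetical placeholders, not from the source):

```python
import random

def sample_strategies(bounds, m):
    """Each agent i draws m strategy values uniformly from its interval Psi_i."""
    return [[random.uniform(lo, hi) for _ in range(m)] for (lo, hi) in bounds]

def combined_strategy_set(strategy_sets, i, r):
    """Form Y_i^[r]: agent i fixes its own r-th strategy and randomly
    guesses one strategy from every other agent's set."""
    Y = [random.choice(X_j) for X_j in strategy_sets]
    Y[i] = strategy_sets[i][r]
    return Y

# Example: N = 3 agents, m = 4 strategies each; a sphere objective
# G(Y) = sum(y^2) stands in for the system objective.
random.seed(0)
bounds = [(-5.0, 5.0), (-5.0, 5.0), (-5.0, 5.0)]
m = 4
X = sample_strategies(bounds, m)
collection = [sum(y * y for y in combined_strategy_set(X, 0, r))
              for r in range(m)]  # G(Y_1^[r]), r = 1..m, for agent 1
```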
14. Formulation of Unconstrained PC
• The ultimate goal of every agent i is to identify the strategy value
which contributes the most towards minimizing the sum
(collection) of these system objectives, i.e. \sum_{r=1}^{m_i} G(\mathbf{Y}_i^{[r]}).
• There are possibly many local minima, and directly minimizing the
collection may require excessive computational effort.
• Homotopy Method: modify the function by converting it into
another topological space by constructing a related and easier
function f_i(\mathbf{X}). This forms the Homotopy function:
J_i(q(\mathbf{X}_i), T) = \sum_{r=1}^{m_i} G(\mathbf{Y}_i^{[r]}) - T f_i(\mathbf{X}), \quad T \in [0, \infty)
15. Formulation of Unconstrained PC
• Analogy to Helmholtz free energy:
L = D - T S
(energy available to do work = internal energy - spontaneous (random) energy)
which parallels the Homotopy function
J_i(q(\mathbf{X}_i), T) = \sum_{r=1}^{m_i} G(\mathbf{Y}_i^{[r]}) - T f_i(\mathbf{X}), \quad T \in [0, \infty)
One of the ways to achieve thermal equilibrium, and hence minimize
the energy available to do work, is to minimize the internal energy
through an annealing schedule, i.e. stepwise drop the temperature of
the system from T = T_{initial} to T \to 0 (or T \to T_{final}), achieving
equilibrium in every step.
16. Formulation of Unconstrained PC
Deterministic Annealing
• It suggests converting the variables into random real-valued
probabilities, which converts the collection \sum_{r=1}^{m_i} G(\mathbf{Y}_i^{[r]}) into the
expected collection \sum_{r=1}^{m_i} E(G(\mathbf{Y}_i^{[r]})). The Homotopy function
J_i(q(\mathbf{X}_i), T) = \sum_{r=1}^{m_i} G(\mathbf{Y}_i^{[r]}) - T f_i(\mathbf{X}), \quad T \in [0, \infty)
then becomes
J_i(q(\mathbf{X}_i), T) = \sum_{r=1}^{m_i} E(G(\mathbf{Y}_i^{[r]})) - T S_i, \quad T \in [0, \infty)
where S_i is the Shannon entropy of agent i's probability distribution, i.e.
J_i(q(\mathbf{X}_i), T) = \sum_{r=1}^{m_i} E(G(\mathbf{Y}_i^{[r]})) - T \left( -\sum_{r=1}^{m_i} q(X_i^{[r]}) \log_2 q(X_i^{[r]}) \right), \quad T \in [0, \infty)
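The trade-off the temperature T controls can be sketched numerically. A minimal illustration in Python (the probability vector and the expected objective values below are hypothetical placeholders, not from the source):

```python
import math

def homotopy_value(q, expected_G, T):
    """J_i(q, T) = sum_r E(G(Y_i^[r])) - T * S(q), where the Shannon
    entropy S(q) = -sum_r q_r * log2(q_r) rewards spread-out distributions
    at high T (exploration) and fades out as T -> 0 (exploitation)."""
    entropy = -sum(p * math.log2(p) for p in q if p > 0.0)
    return sum(expected_G) - T * entropy

# Uniform probabilities over m = 4 strategies (entropy = 2 bits),
# with hypothetical expected objective contributions:
q = [0.25, 0.25, 0.25, 0.25]
eg = [3.0, 1.0, 2.0, 4.0]
high_T = homotopy_value(q, eg, T=100.0)  # entropy term dominates
low_T = homotopy_value(q, eg, T=0.01)    # objective term dominates
```

At high T the entropy term dominates, so keeping the distribution broad lowers J; as T drops, J is governed by the expected objective alone.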
17. Formulation of Unconstrained PC
Every agent starts with uniform probabilities over its strategies:
Agent 1: q(X_1^{[1]}) = \cdots = q(X_1^{[m_1]}) = 1/m_1
Agent i: q(X_i^{[1]}) = \cdots = q(X_i^{[m_i]}) = 1/m_i
Agent N: q(X_N^{[1]}) = \cdots = q(X_N^{[m_N]}) = 1/m_N
[Figure: uniform probability distributions over the strategies of Agent 1, Agent i and Agent N.]
For each combined strategy set
\mathbf{Y}_i^{[r]} = \{X_1^{[?]}, \ldots, X_i^{[r]}, \ldots, X_N^{[?]}\}, \quad r = 1, 2, \ldots, m_i
the expected system objective weights G(\mathbf{Y}_i^{[r]}) by the joint probability of
the strategies forming it:
E(G(\mathbf{Y}_i^{[r]})) = q(X_i^{[r]}) \Big( \prod_{j \ne i} q(X_j^{[?]}) \Big) G(\mathbf{Y}_i^{[r]})
giving the expected collection \sum_{r=1}^{m_i} E(G(\mathbf{Y}_i^{[r]})).
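A minimal sketch of one expected-objective term in Python (the probability values, strategy indices, and objective value are hypothetical, not from the source):

```python
def expected_objective_term(q_all, strategy_indices, G_value):
    """E-contribution of one combined strategy set Y_i^[r]: the objective
    G(Y_i^[r]) weighted by the joint probability of the strategies in it,
    q_1(X_1^[?]) * ... * q_i(X_i^[r]) * ... * q_N(X_N^[?])."""
    prob = 1.0
    for agent_j, idx in enumerate(strategy_indices):
        prob *= q_all[agent_j][idx]
    return prob * G_value

# Two agents with m = 2 strategies each, initially uniform q = 1/m,
# and a hypothetical objective value G(Y) = 8.0 for this combination:
q_all = [[0.5, 0.5], [0.5, 0.5]]
term = expected_objective_term(q_all, (0, 1), 8.0)  # joint probability 0.25
```

Summing such terms over r = 1..m_i gives the expected collection each agent minimizes through its Homotopy function.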
18. Formulation of Unconstrained PC
• The minimization of the Homotopy function can be carried out using
a suitable second-order optimization approach such as the Nearest
Newton Descent Scheme or the Broyden-Fletcher-Goldfarb-Shanno
(BFGS) scheme.
[Figure: each agent's initially uniform distribution converges to a distribution peaked at one strategy.]
Each agent then selects the strategy with the maximum probability as its
favorable strategy, giving
\mathbf{Y}^{[fav]} = \{X_1^{[fav]}, X_2^{[fav]}, \ldots, X_i^{[fav]}, \ldots, X_{N-1}^{[fav]}, X_N^{[fav]}\} \Rightarrow G(\mathbf{Y}^{[fav]})
19. Formulation of Unconstrained PC
• Updating of the Sampling Interval (Neighboring Method)
\Psi_i \in \left[ X_i^{[fav]} - \lambda_{down}(\Psi_i^{upper} - \Psi_i^{lower}), \; X_i^{[fav]} + \lambda_{down}(\Psi_i^{upper} - \Psi_i^{lower}) \right], \quad 0 < \lambda_{down} \le 1
• Convergence and Final Solution
If T = T_{final} or T \to 0, or if there is no significant change in the system
objective for a considerable number of successive iterations, i.e.
\left| G(\mathbf{Y}^{[fav, n]}) - G(\mathbf{Y}^{[fav, n-1]}) \right| \le \varepsilon
accept the final solution
\mathbf{Y}^{[fav, final]} = \{X_1^{[fav, final]}, X_2^{[fav, final]}, \ldots, X_N^{[fav, final]}\} \Rightarrow G(\mathbf{Y}^{[fav, final]})
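The interval update and stopping test can be sketched as follows. A minimal Python illustration (the shrink factor, tolerance, and objective history are hypothetical, not from the source):

```python
def shrink_interval(x_fav, lo, hi, lam_down):
    """Neighboring update: re-centre agent i's sampling interval on its
    favorable strategy and scale its half-width by lambda_down in (0, 1]."""
    width = hi - lo
    return (x_fav - lam_down * width, x_fav + lam_down * width)

def converged(history, eps, window):
    """Stop when G(Y^[fav]) changes by <= eps over `window` successive iterations."""
    if len(history) < window + 1:
        return False
    recent = history[-(window + 1):]
    return max(recent) - min(recent) <= eps

# Example: favorable strategy 1.0 inside [-5, 5], shrink factor 0.1:
new_lo, new_hi = shrink_interval(1.0, -5.0, 5.0, 0.1)
```

In the full algorithm this shrinking would typically also be clipped to the original bounds; that detail is omitted here for brevity.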
20. Formulation of Unconstrained PC
Nash Equilibrium (Necessary Properties):
Rationality: select the best possible strategy by guessing the other agents'
strategies.
Convergence: the same class of policy of selecting the best possible strategy and
guessing the other agents' strategies (guaranteed: the policy does not change).
Nash Equilibrium in PC:
Rationality: X_i^{[fav]} is selected by guessing the other agents' strategies.
Convergence: X_i^{[fav]} and G(\mathbf{Y}^{[fav]}) are communicated to every other agent.
21. Solution to Rosenbrock Function using PC
f(\mathbf{X}) = \sum_{i=1}^{N-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( 1 - x_i \right)^2 \right]
where \mathbf{X} = [x_1 \; x_2 \; x_3 \; \ldots \; x_N] and
lower limit \le x_i \le upper limit, \quad i = 1, 2, \ldots, N
Results
Agents/(Variables) | Strategy Values Selected with Maximum Probability: Trial-1 | Trial-2 | Trial-3 | Trial-4 | Trial-5 | Range of Values
Agent-1 | 1.0000 | 0.9999 | 1.0002 | 1.0001 | 0.9997 | -1.0 to 1.0
Agent-2 | 1.0000 | 0.9998 | 1.0001 | 1.0001 | 0.9994 | -5.0 to 5.0
Agent-3 | 1.0001 | 0.9998 | 1.0000 | 0.9999 | 0.9986 | -3.0 to 3.0
Agent-4 | 0.9998 | 0.9998 | 0.9998 | 0.9995 | 0.9967 | -3.0 to 8.0
Agent-5 | 0.9998 | 0.9999 | 0.9998 | 0.9992 | 0.9937 | 1.0 to 10.0
Fun. Value | 2 x 10^-5 | 1 x 10^-5 | 2 x 10^-5 | 2 x 10^-5 | 5 x 10^-5 | --
Fun. Evals. | 288100 | 223600 | 359050 | 204750 | 242950 | --
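The benchmark itself is easy to state in code. A minimal Python version of the objective above, checked against the known global minimum (f = 0 at all x_i = 1) and against a near-optimal point like those in the Trial columns:

```python
def rosenbrock(x):
    """f(X) = sum_{i=1}^{N-1} [ 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ];
    global minimum f = 0 at x_i = 1 for all i."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

# The Trial-1 strategy values from the table land very close to the minimum:
trial_1 = [1.0000, 1.0000, 1.0001, 0.9998, 0.9998]
f_trial_1 = rosenbrock(trial_1)  # on the order of 10^-5, as reported
```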
22. Solution to Rosenbrock using PC (Comparison)
Method | No. of Var./Agents | Function Value | Function Evaluations | Variable Range(s)/Strategy Sets
CGA | 2 | 0.000145 | 250 | -2.048 to 2.048
PAL | 2 | ≈ 0.01 | 5250 | -2.048 to 2.048
PAL | 5 | ≈ 2.5 | 100000 | -2.048 to 2.048
Modified DE | 2 | 1 x 10^-6 | 1089 | -5 to 10
Modified DE | 5 | 1 x 10^-6 | 11413 | -5 to 10
LCGA | 2 | ≈ 0.00003 | -- | -2.12 to 2.12
PC | 5 | 0.00001 | 223600 | -1.0 to 1.0; -5.0 to 5.0; -3.0 to 3.0; -3.0 to 8.0; 1.0 to 10.0
23. Unconstrained Test Problems
1. Ackley Function
2. Beale Function
3. Bohachevsky Function
4. Booth Function
5. Branin Function
6. Colville Function
7. Dixon & Price Function
8. Easom Function
9. Goldstein & Price Function
10. Griewank Function
11. Hartmann Functions
12. Hump Function
13. Levy Function
14. Matyas Function
15. Michalewicz Function
16. Perm Functions
17. Powell Function
18. Power Sum Function
19. Rastrigin Function
20. Rosenbrock Function
21. Schwefel Function
22. Shekel Function
23. Shubert Function
24. Sphere Function
25. Sum Squares Function
26. Trid Function
27. Zakharov Function
23
24. Constrained PC
• Approach 1: Heuristic Approach
Two variations of the MDMTSP and several cases of
the SDMTSP
• Approach 2: Penalty Function Approach
Three test problems
• Approach 3: Feasibility-based Rule I
Two cases of the Circle Packing Problem
• Feasibility-based Rule II
Two variations and associated cases of the Sensor
Network Coverage Problem
25. Constrained PC Approach 1: Heuristic Approach
• Explicitly uses problem-specific information and combines it
with the unconstrained optimization technique to push the objective
function into the feasible region.
• Validated by solving two cases of the Multiple Depot Multiple
Traveling Salesmen Problem (MDMTSP) and several cases of the
Single Depot Multiple Traveling Salesmen Problem (SDMTSP)
– Solve the path planning of Multiple Unmanned Aerial Vehicles
(MUAVs) by modeling it as a MTSP
32. Constrained PC (Approach 2): Penalty Function Approach
• Penalty-based methods are the most generalized constraint handling
methods: simplicity, the ability to handle nonlinear constraints, and
compatibility with most unconstrained optimization methods.
• The approach converts the constrained optimization problem into an
unconstrained one:
\phi(\mathbf{Y}_i^{[r]}) = G(\mathbf{Y}_i^{[r]}) + \theta \left[ \sum_{j=1}^{s} \left( g_j^{+}(\mathbf{Y}_i^{[r]}) \right)^2 + \sum_{j=1}^{t} \left( h_j(\mathbf{Y}_i^{[r]}) \right)^2 \right]
where g_j^{+}(\mathbf{Y}_i^{[r]}) = \max\left(0, g_j(\mathbf{Y}_i^{[r]})\right) and \theta is a scalar penalty parameter.
33. Constrained PC: Algorithm Flowchart (Part 1)
START
1. Every agent sets up a strategy set. Initialize 'n' and 'T'.
2. Every agent forms a combined strategy set for its every
strategy and computes the system objectives and
constraints, and the corresponding collection of pseudo
system objectives.
3. Every agent assigns uniform probabilities to its
strategies and computes the expected collection of system
objectives.
4. Every agent forms a modified Homotopy function.
5. Every agent minimizes the Homotopy function using the
Nearest Newton Method/BFGS Method.
6. Every agent obtains the probability distribution
identifying its favorable strategy.
7. Compute the global objective function and associated
constraints. (continued on the next slide)
34. Constrained PC: Algorithm Flowchart (Part 2)
8. Is the maximum constraint value ≤ μ?
- Yes: accept the current objective function and related favorable strategies.
- No: discard the current solution and retain the previous objective
function with related favorable strategies.
9. Convergence?
- Yes: accept the final values. STOP.
- No: every agent updates its sampling interval, forms the corresponding
updated strategy set, updates the penalty parameter, and returns to the
combined-strategy-set step.
35. Spring Design
Minimize f(\mathbf{X}) = (x_3 + 2) x_2 x_1^2
Subject to
g_1(\mathbf{X}) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0
g_2(\mathbf{X}) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0
g_3(\mathbf{X}) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0
g_4(\mathbf{X}) = \frac{x_1 + x_2}{1.5} - 1 \le 0
where 0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15
36. Spring Design
No. of runs | Avg. CPU time | Best Sol. | Mean Sol. | Worst Sol. | % with Best Sol.
10 | 24.5 Sec | 0.013500 | 0.02607 | 0.05270 | 6.63

Best Solutions Found
Design variables & Constraints | Cultural algorithm | Constraint correction algorithm | Self-adaptive penalty app. | Multi-obj. app. GA | HPSO | Proposed PC
x1 | 0.050000 | 0.053390 | 0.051480 | 0.051980 | 0.051700 | 0.050600
x2 | 0.317390 | 0.399180 | 0.351660 | 0.363960 | 0.357120 | 0.327810
x3 | 14.031790 | 9.185400 | 11.632200 | 10.890520 | 11.265080 | 14.056700
g1 | 0.000000 | 0.000010 | -0.003300 | -0.001900 | -0.000000 | -0.052900
g2 | -0.000070 | -0.000010 | -0.000100 | 0.000400 | 0.000000 | -0.007400
g3 | -3.967960 | -4.123830 | -4.026300 | -4.060600 | -4.054600 | -3.704400
g4 | -0.755070 | -0.698280 | -0.731200 | -0.722700 | -0.727400 | -0.747690
f | 0.012720 | 0.012730 | 0.012700 | 0.012680 | 0.012660 | 0.013500
Fun. Evals | -- | -- | -- | -- | 80000 | 5214

[Figure: convergence plot of f(X) versus iterations.]
37. Himmelblau Function
No. of runs | Avg. CPU time | Best Sol. | Mean Sol. | Worst Sol. | % with Best Sol.
10 | 11 Mins | -30641 | -30635 | -30626 | 0.078

Minimize f(\mathbf{X}) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141
Subject to
g_1(\mathbf{X}) = 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5 - 92 \le 0
g_2(\mathbf{X}) = -85.334407 - 0.0056858 x_2 x_5 - 0.0006262 x_1 x_4 + 0.0022053 x_3 x_5 \le 0
g_3(\mathbf{X}) = 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2 - 110 \le 0
g_4(\mathbf{X}) = -80.51249 - 0.0071317 x_2 x_5 - 0.0029955 x_1 x_2 - 0.0021813 x_3^2 + 90 \le 0
g_5(\mathbf{X}) = 9.300961 + 0.0047026 x_3 x_5 + 0.0012547 x_1 x_3 + 0.0019085 x_3 x_4 - 25 \le 0
g_6(\mathbf{X}) = -9.300961 - 0.0047026 x_3 x_5 - 0.0012547 x_1 x_3 - 0.0019085 x_3 x_4 + 20 \le 0
where 78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_i \le 45, \; i = 3, 4, 5

[Figure: convergence plot of f(X) versus iterations.]
39. Chemical Equilibrium Problem
Minimize f(\mathbf{X}) = \sum_{j=1}^{10} x_j \left( c_j + \ln \frac{x_j}{x_1 + x_2 + \cdots + x_{10}} \right)
Subject to
h_1(\mathbf{X}) = x_1 + 2 x_2 + 2 x_3 + x_6 + x_{10} - 2 = 0
h_2(\mathbf{X}) = x_4 + 2 x_5 + x_6 + x_7 - 1 = 0
h_3(\mathbf{X}) = x_3 + x_7 + x_8 + 2 x_9 + x_{10} - 1 = 0
x_i \ge 0.000001, \quad i = 1, 2, \ldots, 10
where c_1 = -6.089, c_2 = -17.164, c_3 = -34.054, c_4 = -5.914, c_5 = -24.721,
c_6 = -14.986, c_7 = -24.100, c_8 = -10.708, c_9 = -26.662, c_{10} = -22.179
40. Chemical Equilibrium Problem
Best Solutions Found
Design Variables | Hock et al. (1981) | GENOCOP | PC
x1 | 0.01773548 | 0.04034785 | 0.0308207485
x2 | 0.08200180 | 0.15386976 | 0.2084261218
x3 | 0.88256460 | 0.77497089 | 0.6708869580
x4 | 0.0007233256 | 0.00167479 | 0.0371668767
x5 | 0.4907851 | 0.48468539 | 0.3510055351
x6 | 0.0004335469 | 0.00068965 | 0.1302810195
x7 | 0.01727298 | 0.02826479 | 0.1214712339
x8 | 0.007765639 | 0.01849179 | 0.0343070642
x9 | 0.01984929 | 0.03849563 | 0.0486302636
x10 | 0.05269826 | 0.10128126 | 0.0486302636
h1(X) | 8.6900E-08 | 6.0000E-08 | -0.0089160590
h2(X) | 0.0141 | 1.0000E-08 | -0.0090697995
h3(X) | 5.9000E-08 | -1.0000E-08 | -0.0047181958
f(X) | -47.707579 | -47.760765 | -46.7080572120
Average FE | -- | -- | 389546

[Figure: convergence plots of f(X) versus iterations.]

No. of runs | Avg. CPU time | Best Sol. | Mean Sol. | Worst Sol. | % with Best Sol.
10 | 21.60 Mins | -46.7080572120 | -45.6522267370 | -44.4459333503 | 2.20
41. Constrained PC (Approach 3): Feasibility-based Rule I
• The feasibility-based rule allows the objective and constraint information
to be considered separately.
• The constraint violation tolerance is tightened iteratively to obtain
a fitter solution and further drive the solution towards
feasibility.
• Each equality constraint is converted into a pair of inequality constraints
with a tolerance \delta:
Minimize G
Subject to g_j \le 0, \; j = 1, 2, \ldots, s
h_j = 0, \; j = 1, 2, \ldots, w
becomes
Minimize G
Subject to g_j \le 0, \; j = 1, 2, \ldots, t
where h_j = 0 \;\Rightarrow\; g_{s+j} = h_j - \delta \le 0 \;\text{and}\; g_{s+w+j} = -h_j - \delta \le 0,
\; j = 1, 2, \ldots, w, giving t = s + 2w inequality constraints in total.
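The conversion is mechanical and easy to sketch. A minimal Python version (the constraint values and tolerance below are hypothetical, not from the source):

```python
def equalities_to_inequalities(h_vals, delta):
    """Replace every equality h_j = 0 by the pair of inequalities
    h_j - delta <= 0 and -h_j - delta <= 0, i.e. require |h_j| <= delta."""
    gs = []
    for h in h_vals:
        gs.append(h - delta)    # upper side of the tolerance band
        gs.append(-h - delta)   # lower side of the tolerance band
    return gs

# h = 0.05 is within a +/- 0.1 band (both inequalities satisfied);
# h = -0.2 is outside it (one inequality violated):
gs = equalities_to_inequalities([0.05, -0.2], 0.1)
```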
42. Constrained PC (Approach 3): Feasibility-based Rule I
Feasibility-based Rule I:
• Any feasible solution is preferred over any infeasible solution.
• Between two feasible solutions, the one with better objective is
preferred.
• Between two infeasible solutions, the one with fewer violated
constraints is preferred.
42
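The three-part rule above can be condensed into a single comparison. A minimal Python sketch (the solution tuples are hypothetical; a solution here is just an (objective value, number of violated constraints) pair):

```python
def rule_one_prefers(cand, incumbent):
    """Feasibility-based Rule I as a binary preference:
    feasible beats infeasible; between two feasible solutions the better
    objective wins; between two infeasible ones fewer violations win."""
    f_c, v_c = cand
    f_i, v_i = incumbent
    if v_c == 0 and v_i == 0:        # both feasible: better objective wins
        return f_c <= f_i
    if (v_c == 0) != (v_i == 0):     # exactly one feasible: it wins
        return v_c == 0
    return v_c <= v_i                # both infeasible: fewer violations win
```

Note that the objective value plays no role when either solution is infeasible, which is what lets objective and constraint information be treated separately.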
43. Constrained PC (Approach 3): Feasibility-based Rule I
• Updating of the Sampling Space and Perturbation Approach
In order to jump out of a possible local minimum, every agent i
perturbs its current feasible strategy:
X_i^{[fav]} = X_i^{[fav]} \pm X_i^{[fav]} \times fact_i
where
fact_i = random value \in [\sigma_1^{lower}, \sigma_1^{upper}] if |X_i^{[fav]}| \le \gamma
fact_i = random value \in [\sigma_2^{lower}, \sigma_2^{upper}] if |X_i^{[fav]}| > \gamma
with 0 < \sigma_1^{lower} < \sigma_1^{upper} \le \sigma_2^{lower} < \sigma_2^{upper} < 1.
The value of \gamma and the +/- sign are selected based on preliminary trials.
Every agent expands the sampling space as follows:
\Psi_i \in \left[ \Psi_i^{lower} - \lambda_{up}(\Psi_i^{upper} - \Psi_i^{lower}), \; \Psi_i^{upper} + \lambda_{up}(\Psi_i^{upper} - \Psi_i^{lower}) \right], \quad 0 < \lambda_{up} \le 1
44. Circle Packing Problem Formulation
Minimize f = L^2 - \sum_{i=1}^{z} \pi r_i^2
Subject to
(x_i - x_j)^2 + (y_i - y_j)^2 \ge (r_i + r_j)^2
x_i - r_i \ge x_l, \quad x_i + r_i \le x_u
y_i - r_i \ge y_l, \quad y_i + r_i \le y_u
0.001 \le r_i \le \frac{L}{2}, \quad i, j = 1, 2, \ldots, z, \; i \ne j
i.e. pack z non-overlapping circles inside an L x L square while minimizing
the uncovered area.
[Figure: a packed-circle configuration illustrating the tragedy of commons.]
Applications: shipping, apparel, automobile, aerospace, the food
industry, etc.
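The objective and the pairwise non-overlap constraint translate directly into code. A minimal Python sketch (the square size, radii, and circle positions are hypothetical, not from the source):

```python
import math

def cpp_objective(L, radii):
    """Uncovered area of an L x L square: L^2 minus the circle areas."""
    return L * L - sum(math.pi * r * r for r in radii)

def overlaps(c1, c2):
    """True if two circles, each given as (x, y, r), overlap:
    centre distance strictly less than the sum of the radii."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    return math.hypot(x1 - x2, y1 - y2) < r1 + r2

# One unit circle inside a 10 x 10 square leaves 100 - pi uncovered:
leftover = cpp_objective(10.0, [1.0])
```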
48. Constrained PC (Approach 3): Feasibility-based Rule II
• Feasibility-based Rule II allows the objective and constraint
information to be considered separately.
• In addition to iteratively tightening the constraint violation to
obtain a fitter solution and further drive the solution towards
feasibility, the rule helps the solution jump out of possible local
minima.
• The procedure starts with the number of constraints improved, \mu,
initialized to 0, i.e. \mu = 0. The value of \mu is updated iteratively.
49. Constrained PC (Approach 3): Feasibility-based Rule II
Feasibility-based Rule II:
• Any feasible solution is preferred over any infeasible solution.
• Between two feasible solutions, the one with the better objective is preferred.
• Between two infeasible solutions, the one with the larger number of improved
constraint violations is preferred.
• If the solution remains feasible and unchanged for a successive number of
iterations, and the current feasible system objective is worse than the
previous feasible solution, accept the current solution.
50. Sensor Network Coverage Problem
• Strategic Applications of Sensor Networks:
natural disaster relief, hostile and hazardous environment monitoring, critical
infrastructure monitoring and protection, habitat exploration and surveillance,
situational awareness in battlefield and target detection, industrial sensing and
diagnosis, biomedical health monitoring, seismic sensing, etc.
• How to best deploy/position the sensors over a field of interest (FoI) to achieve
the best possible coverage, detection capability, connectivity, etc.?
• Coverage directly affects the quality and effectiveness of the surveillance/
monitoring provided by the sensor network.
51. Sensor Network Coverage Problem
Coverage Classification:
• Blanket Coverage
• (Sweep) Barrier Coverage
• Complete Coverage
• Point (Set) Coverage
Deterministic: static and systematic deployment of the
sensors over a certain (or weighted) FoI.
Stochastic: sensor positions are selected based
on some distribution such as uniform, Gaussian, Poisson, etc.
52. Sensor Network Coverage Problem Formulation
Deploy a set of z homogeneous sensors over a certain FoI to achieve the
maximum possible deterministic, connected blanket coverage.
Minimize
A = \left[ \max(x_1, \ldots, x_i, \ldots, x_z) + r_s - \left( \min(x_1, \ldots, x_i, \ldots, x_z) - r_s \right) \right] \times \left[ \max(y_1, \ldots, y_i, \ldots, y_z) + r_s - \left( \min(y_1, \ldots, y_i, \ldots, y_z) - r_s \right) \right]
Subject to
d(i, j) \ge \sqrt{2}\, r_s, \quad i, j = 1, 2, \ldots, z, \; i \ne j
x_i - r_s \ge x_l, \quad x_i + r_s \le x_u
y_i - r_s \ge y_l, \quad y_i + r_s \le y_u
d(i, j) \le \gamma \;\text{for communicating sensor pairs}\; i \sim j, \quad i, j = 1, 2, \ldots, z, \; i \ne j
The collective coverage is A_{collective} = \sum_{i=1}^{z} A_{c,i}, where A_{c,i} is the
effective coverage area of sensor i (a fraction of the full disk area \pi r_s^2).
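The enclosing-rectangle objective above is simple to compute. A minimal Python sketch (the sensor positions and sensing range below are hypothetical, not from the source):

```python
def enclosing_rectangle_area(xs, ys, r_s):
    """Area of the rectangle enclosing all sensing disks of radius r_s:
    [max x + r_s - (min x - r_s)] * [max y + r_s - (min y - r_s)].
    Minimizing it pulls the sensors into a compact, connected layout."""
    width = (max(xs) + r_s) - (min(xs) - r_s)
    height = (max(ys) + r_s) - (min(ys) - r_s)
    return width * height

# Two sensors at (0, 0) and (2, 2) with sensing range 0.5:
# the enclosing rectangle spans [-0.5, 2.5] in both axes, area 3 * 3 = 9.
area = enclosing_rectangle_area([0.0, 2.0], [0.0, 2.0], 0.5)
```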
56. Summary of Sensor Network Coverage Problem Results
SN | Particulars | Variation 1 | Variation 2: Case 1 | Variation 2: Case 2 | Variation 2: Case 3
1 | Cases | -- | Case 1 | Case 2 | Case 3
2 | Number of Sensors (z) | 5 | 5 | 10 | 20
3 | The Sensing Range (r_s) | 0.5 | 1.2 | 1 | 0.6
4 | Average Collective Coverage | 3.927 | 18.5237 | 19.4856 | 16.3631
5 | Min. and Max. Collective Coverage | 3.9270, 3.9270 | 18.0920, 18.7552 | 17.5427, 20.8797 | 15.5347, 17.3377
6 | Std. Deviation of Collective Coverage | 0.0000 | 0.1687 | 1.1837 | 1.2217
7 | Average Area of the Enclosing Rectangle | 5.8311 | 34.3014 | 49.0938 | 39.3480
8 | Min. and Max. Area of the Enclosing Rectangle | 5.7046, 5.9750 | 33.0448, 39.7099 | 44.7135, 52.6277 | 34.1334, 43.8683
9 | Std. Deviation of the Area of the Enclosing Rectangle | 0.1040 | 1.9899 | 2.6995 | 2.8829
10 | Average CPU Time (Approx.) | 20 Mins | 1 Hr | 2 Hrs | 3.5 Hrs
11 | Average Number of Function Evaluations | 90417 | 315063 | 1172759 | 3555493
60. Conclusions and Original Contributions
Improvements to the original PC approach:
• The original PC approach was improved, with a reduction in
computational complexity.
- A neighboring scheme for updating the solution space
was developed, which contributed to faster convergence and
improved efficiency of the overall algorithm.
- The modified PC was successfully validated by optimizing the
Rosenbrock function.
- The Nash equilibrium was successfully formalized and demonstrated.
61. Conclusions and Original Contributions
Constraint Handling Techniques
• A number of constraint handling techniques were developed. This
allowed PC to solve practical problems, which inevitably are
constrained problems.
• Problem-specific heuristics were developed and incorporated into
the PC algorithm for solving NP-hard problems such as the MTSP.
• The true optimum solution was achieved for two specially developed
cases of the MDMTSP; several cases of the SDMTSP were also
solved.
• For the first time, the MTSP was solved using a distributed,
decentralized and cooperative approach such as PC.
62. Conclusions and Original Contributions
• The penalty function approach was successfully incorporated and tested
by solving a variety of test problems with equality and inequality constraints.
• Feasibility-based Rule I was successfully formalized and
demonstrated by solving two specially developed cases of the Circle
Packing Problem (CPP).
• In order to make the solution jump out of possible local minima, a
perturbation approach and a voting heuristic were developed.
• The desirable and key characteristic of a distributed
approach, avoiding the tragedy of commons, was demonstrated.
• The important ability of PC to deal with the practically significant agent
failure problem was demonstrated by solving the CPP.
63. Conclusions and Original Contributions
• Feasibility-based Rule II was successfully formalized and
demonstrated by solving two variations and associated cases of the
Sensor Network Coverage Problem (SNCP).
• The two variations and associated cases produced sufficiently robust
results.
• The BFGS method was successfully used as an alternative to the
Nearest Newton Descent Scheme.
• The CPP and SNCP were solved for the first time using a distributed,
decentralized approach such as PC.
64. Recommendations for Future Work
• Make the approach more generalized and increase the efficiency of
the PC algorithm by developing a self-adaptive scheme for the
parameters, improving the diversification of sampling, etc.
• More realistic path planning problems of Multiple Unmanned
Vehicles (MUVs) can then be solved with the MTSP and VRP
approaches.
• Multi-Objective Probability Collectives (MOPC)
65. Recommendations for Future Work
Solve the Traffic Control Problem using PC
• Distributed, decentralized approach
• Every intersection represents an independent agent dynamically
optimizing the signal durations, cycle time, phase sequence, etc.
• Local traffic optimization → Network traffic optimization
• Traffic simulator will be used to set up the traffic scenario
• Flow rate will be measured at intersections (agents)
• PC will optimize the variables such as signal durations, cycle time,
phase sequence, etc.
• Optimized variables will be fed back to evaluate the performance.
65
69. Formulation of Unconstrained PC
Nash Equilibrium
The basic concept states that when a social game is played iteratively
by n agents, a state may be reached in which no agent, by changing its
strategy/state unilaterally, i.e. without taking the other agents'
strategies/states into consideration, can benefit itself or improve the
entire game output. If the game is in such a state, the agents are said
to be in Nash equilibrium.
It is worth mentioning that Nash equilibrium does not necessarily give the
best payoffs to individual agents, but as a social system the best collective /
global / system objective can be achieved.
70. Probability Collectives (PC) Comparison
Sampling, the convergence criterion, and neighboring make the PC presented
here different from that originally proposed by Dr. David Wolpert.
Feature | Proposed PC | Original PC
Sampling | Pseudorandom scalar values drawn from a uniform distribution; fewer samples | Monte Carlo sampling; computationally expensive and slower
Convergence criterion | Predefined number of iterations and/or no change in the final goal value for a considerable number of iterations | No change in the probability values for a considerable number of iterations
71. Probability Collectives (PC) Comparison
Feature | Proposed PC | Original PC [1, 3]
Neighboring | Samples around the 'favorable strategy values' and continues from the beginning; narrows down the sampling options of the agents, forcing them to sample only from the neighbored range; increases convergence speed; computationally cheaper | Regression; data-aging; computationally expensive/large memory
73. Constrained PC (Approach 3): Feasibility-based Rule I
• The procedure starts by initializing the constraint violation tolerance
μ = |C|, where |C| is the cardinality of the constraint vector
C = [g_1 g_2 ... g_t].
Feasibility-based Rule I
• Between two infeasible solutions, the one with fewer violated
constraints is preferred:
If the current system objective G(Y^[fav]) as well as the previous
solution are infeasible, accept the current system objective G(Y^[fav])
and the corresponding Y^[fav] as the current solution if the number of
constraints violated is less than or equal to μ, i.e. C_violated ≤ μ,
and then update μ to C_violated, i.e. μ = C_violated.
74. Constrained PC (Approach 3): Feasibility-based Rule I
• Any feasible solution is preferred over any infeasible solution:
If the current system objective G(Y^[fav]) is feasible and the previous
solution is infeasible, accept the current system objective G(Y^[fav])
and the corresponding Y^[fav] as the current solution, and then update μ
to 0, i.e. μ = C_violated = 0.
75. Constrained PC (Approach 3): Feasibility-based Rule I
• Between two feasible solutions, the one with the better objective is
preferred:
If the current system objective G(Y^[fav]) is feasible, i.e.
C_violated = 0, and is not worse than the previous feasible solution,
accept the current system objective G(Y^[fav]) and the corresponding
Y^[fav] as the current solution.
• If none of the above conditions is met, discard the current system
objective G(Y^[fav]) and the corresponding Y^[fav], and retain the
previous iteration solution.
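Taken together, Rule I is a Deb-style tournament between the current and previous solutions. A minimal sketch, representing each solution by its objective value and its number of violated constraints (the function name and return convention are assumptions, not the authors' code):

```python
def prefer_current(curr_obj, curr_violations, prev_obj, prev_violations, mu):
    """Feasibility-based Rule I as a comparison of current vs. previous
    solution (minimization). Returns (accept_current, updated_mu).
    A solution is feasible when it violates no constraints."""
    curr_feasible = curr_violations == 0
    prev_feasible = prev_violations == 0
    if curr_feasible and not prev_feasible:
        return True, 0                    # feasible beats infeasible
    if curr_feasible and prev_feasible:
        return curr_obj <= prev_obj, 0    # better objective wins
    if not curr_feasible and not prev_feasible and curr_violations <= mu:
        return True, curr_violations      # fewer violations wins; tighten mu
    return False, mu                      # otherwise retain previous solution
```

For example, an infeasible current solution with one violated constraint replaces an infeasible previous solution when the tolerance μ is at least 1, and μ is then tightened to 1.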
76. Constrained PC (Approach 3): Feasibility-based Rule I
Updating of the Sampling Space and Perturbation Approach
• On completion of a pre-specified number of iterations n_test:
• If G(Y^[fav,n]) ≤ G(Y^[fav,n−n_test]), then shrink the sampling interval
of every agent i around its favorable strategy:
Ψ_i ∈ [X_i^[fav] − λ_down(Ψ_i^upper − Ψ_i^lower),
X_i^[fav] + λ_down(Ψ_i^upper − Ψ_i^lower)], 0 < λ_down ≤ 1
• If G(Y^[fav,n]) and G(Y^[fav,n−n_test]) are feasible and
|G(Y^[fav,n]) − G(Y^[fav,n−n_test])| ≤ ε, the system objective is referred
to as the stable solution G(Y^[fav,s]).
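The interval-shrinking step can be sketched directly from the formula above (λ_down is the user-chosen shrink factor; the function name is illustrative):

```python
def shrink_interval(x_fav, psi_lower, psi_upper, lam_down):
    """Shrink agent i's sampling interval around its favorable strategy:
    [x_fav - lam_down*(upper - lower), x_fav + lam_down*(upper - lower)],
    with 0 < lam_down <= 1."""
    span = psi_upper - psi_lower
    return x_fav - lam_down * span, x_fav + lam_down * span

# e.g. favorable strategy 2.0 in the interval [0, 10] with lam_down = 0.1
new_lower, new_upper = shrink_interval(2.0, 0.0, 10.0, 0.1)  # -> (1.0, 3.0)
```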
77. Constrained PC (Approach 3): Feasibility-based Rule I
• Updating of the Sampling Space and Perturbation Approach
In order to jump out of this possible local minimum, every agent i
perturbs its current feasible strategy X_i^[fav]:
X_i^[fav] = X_i^[fav] ± X_i^[fav] × fact
where fact = random value ∈ [σ_1^lower, σ_1^upper] if X_i^[fav] ≤ γ, and
fact = random value ∈ [σ_2^lower, σ_2^upper] if X_i^[fav] > γ, with
0 < σ_1^lower < σ_1^upper ≤ σ_2^lower < σ_2^upper < 1.
The value of γ and the +/− sign are selected based on preliminary trials.
Every agent then expands its sampling space as follows:
Ψ_i ∈ [Ψ_i^lower − λ_up(Ψ_i^upper − Ψ_i^lower),
Ψ_i^upper + λ_up(Ψ_i^upper − Ψ_i^lower)], 0 < λ_up ≤ 1
78. Constrained PC (Approach 3): Feasibility-based Rule I
• How is convergence, i.e. acceptance of the stable solution, decided?
80. Constrained PC (Approach 3): Feasibility-based Rule II
Feasibility-based Rule II:
• Between two infeasible solutions, the one with the larger number of
improved constraints is preferred:
If the current system objective G(Y^[fav]) as well as the previous
solution are infeasible, accept the current system objective G(Y^[fav])
and the corresponding Y^[fav] as the current solution if the number of
improved constraints is greater than or equal to μ, i.e. C_improved ≥ μ,
and then update μ to C_improved, i.e. μ = C_improved.
81. Constrained PC (Approach 3): Feasibility-based Rule II
• Any feasible solution is preferred over any infeasible solution:
If the current system objective G(Y^[fav]) is feasible and the previous
solution is infeasible, accept the current system objective G(Y^[fav])
and the corresponding Y^[fav] as the current solution, and then update μ
to 0, i.e. μ = C_improved = 0.
• Between two feasible solutions, the one with the better objective is
preferred:
If the current system objective G(Y^[fav]) is feasible and is not worse
than the previous feasible solution, accept the current system objective
G(Y^[fav]) and the corresponding Y^[fav] as the current solution.
82. Constrained PC (Approach 3): Feasibility-based Rule II
• If the solution remains feasible and unchanged for a successive
pre-specified number of iterations n_test, i.e. G(Y^[fav,n]) and
G(Y^[fav,n−n_test]) are feasible and equal, and the current feasible
system objective is worse than the previous iteration feasible solution,
accept the current system objective G(Y^[fav]) and the corresponding
Y^[fav] as the current solution.
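The step that distinguishes Rule II from Rule I is the count of improved constraints between two infeasible solutions. A minimal sketch, representing each solution by its per-constraint violation values (function names and conventions are assumptions):

```python
def count_improved(curr_violations, prev_violations):
    """Number of constraints whose violation decreased from the previous
    to the current solution (one violation value per constraint,
    0 = satisfied)."""
    return sum(c < p for c, p in zip(curr_violations, prev_violations))

def accept_infeasible(curr_violations, prev_violations, mu):
    """Between two infeasible solutions, accept the current one when
    C_improved >= mu; then update mu = C_improved."""
    c_improved = count_improved(curr_violations, prev_violations)
    return (True, c_improved) if c_improved >= mu else (False, mu)
```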
84. Formulation of Unconstrained PC
Agent i selects its first strategy X_i^[1] and randomly samples one
strategy from every other agent, forming the combined strategy set
Y_i^[1] = {X_1^[?], X_2^[?], ..., X_i^[1], ..., X_{N−1}^[?], X_N^[?]}
with system objective G(Y_i^[1]). The strategy sets of the N agents are
X_1 = {X_1^[1], X_1^[2], ..., X_1^[m_1]}, ...,
X_i = {X_i^[1], X_i^[2], ..., X_i^[m_i]}, ...,
X_N = {X_N^[1], X_N^[2], ..., X_N^[m_N]}
Repeating this for every remaining strategy of agent i:
Y_i^[2] = {X_1^[?], X_2^[?], ..., X_i^[2], ..., X_N^[?]} ⇒ G(Y_i^[2])
Y_i^[3] = {X_1^[?], X_2^[?], ..., X_i^[3], ..., X_N^[?]} ⇒ G(Y_i^[3])
...
Y_i^[r] = {X_1^[?], X_2^[?], ..., X_i^[r], ..., X_N^[?]} ⇒ G(Y_i^[r])
...
Y_i^[m_i] = {X_1^[?], X_2^[?], ..., X_i^[m_i], ..., X_N^[?]} ⇒ G(Y_i^[m_i])
yielding the collection of objective values Σ_{r=1}^{m_i} G(Y_i^[r]).
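The construction of the combined strategy sets can be sketched as follows: agent i pairs each of its own strategies with randomly drawn strategies of the other agents (the '[?]' samples) and evaluates G on every combined set. The toy objective and strategy values below are illustrative, not from the slides:

```python
import random

def combined_strategy_sets(agent_strategies, i, G, seed=0):
    """For agent i, build one combined strategy set Y_i^[r] per own
    strategy X_i^[r], sampling the other agents' strategies at random,
    and evaluate the system objective G on each set."""
    rng = random.Random(seed)
    results = []
    for x_i in agent_strategies[i]:
        Y = [x_i if j == i else rng.choice(agent_strategies[j])
             for j in range(len(agent_strategies))]
        results.append(G(Y))
    return results  # one objective value per own strategy r = 1..m_i

# Toy example: 3 agents, hypothetical system objective G = sum of squares.
strategies = [[-1.0, 0.0, 1.0], [0.5, 1.5], [2.0, 3.0]]
values = combined_strategy_sets(strategies, i=0,
                                G=lambda Y: sum(x * x for x in Y))
```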
85. Formulation of Unconstrained PC
[Figure: each agent initializes a uniform probability distribution over
its strategies:
Agent 1: q(X_1^[1]) = ... = q(X_1^[m_1]) = 1/m_1
Agent i: q(X_i^[1]) = ... = q(X_i^[m_i]) = 1/m_i
Agent N: q(X_N^[1]) = ... = q(X_N^[m_N]) = 1/m_N]
Each combined strategy set
Y_i^[r] = {X_1^[?], ..., X_i^[r], ..., X_N^[?]}, r = 1, 2, ..., m_i
is now weighted by the associated probabilities
{q(X_1^[?]), ..., q(X_i^[r]), ..., q(X_N^[?])}, giving the expected
system objective
E(G(Y_i^[r])) = G(Y_i^[r]) q(X_i^[r]) Π_{(i)} q(X_{(i)}^[?])
where the product Π_{(i)} runs over all agents other than i, and the
collection of expected values Σ_{r=1}^{m_i} E(G(Y_i^[r])).
86. Formulation of Unconstrained PC
E(G(Y_i^[r])) = G(Y_i^[r]) q(X_i^[r]) Π_{(i)} q(X_{(i)}^[?])
Σ_{r=1}^{m_i} E(G(Y_i^[r])) = Σ_{r=1}^{m_i} G(Y_i^[r]) q(X_i^[r]) Π_{(i)} q(X_{(i)}^[?])
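Under the factored joint distribution q(Y) = Π_j q_j(X_j), the expectation on this slide can be computed by enumerating all joint strategy combinations. A minimal sketch with uniform initial probabilities (the toy strategy values and objective are illustrative):

```python
from itertools import product

def expected_objective(strategy_sets, probs, G):
    """E(G(Y)) = sum over all joint strategy combinations Y of
    G(Y) * prod_j q_j(X_j), with independent per-agent distributions
    q_j (probs[j][r] = q_j(X_j^[r]))."""
    total = 0.0
    for combo in product(*[range(len(s)) for s in strategy_sets]):
        Y = [strategy_sets[j][r] for j, r in enumerate(combo)]
        weight = 1.0
        for j, r in enumerate(combo):
            weight *= probs[j][r]
        total += G(Y) * weight
    return total

# Uniform initial probabilities q_j = 1/m_j, as on slide 85.
sets = [[0.0, 1.0], [0.0, 2.0]]
uniform = [[0.5, 0.5], [0.5, 0.5]]
e = expected_objective(sets, uniform, G=lambda Y: sum(Y))  # -> 1.5
```

Full enumeration grows exponentially with the number of agents; in PC each agent only needs its own m_i expectations, which is what makes the sampled form on this slide practical.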
88. Multiple Unmanned Aerial Vehicles (MUAVs) Path Planning
Related Work
Probabilistic Map Approach
- real-time and local updating of the map
Flock formation
- collision avoidance, obstacle avoidance, formation keeping
- single objective function vs. individual objective functions
Gyroscopic force
- real-time change in the path, avoiding collision
Magnetic forces
- attraction and repulsion
Concept of auto-pilot – airplanes with conflicting trajectories change
their paths using local communication, avoiding latency in decision making
Limitations of the heuristic approach: if the complexity of the problem
and the related constraints increase, heuristic techniques may become more
tedious and may add further computational load; this may also increase the
number of function evaluations.
The CPP has been studied mainly in the pure mathematics literature but has
received limited attention in the OR literature; the TOC was never
addressed before in the context of the CPP.
With a large number of interacting/conflicting objectives and ever-growing
traffic volume in urban areas posing serious congestion problems,
intersections are becoming bottlenecks. The problem can therefore be
solved in a distributed way by decomposing the entire network into its
components, such as intersections, vehicles, signals, etc.