The document presents a new algorithm called Simultaneous Task Allocation (STA) that improves upon the earlier ASyMTRe-D algorithm. STA allows multiple tasks to be considered simultaneously and finds all possible mappings of tasks to coalitions of robots. This reduces overall execution time compared to ASyMTRe-D, which accepts tasks sequentially. The STA algorithm uses a tree-search approach and can return partial solutions if terminated early, making it an anytime algorithm. Future work aims to optimize STA with heuristics that generate only the relevant portions of the search tree, reducing execution time further.
Tracing versus Partial Evaluation: Which Meta-Compilation Approach is Better ... (Stefan Marr)
Tracing and partial evaluation have been proposed as meta-compilation techniques for interpreters to make just-in-time compilation language-independent. They promise that programs executing on simple interpreters can reach performance of the same order of magnitude as if they were executed on state-of-the-art virtual machines with highly optimizing just-in-time compilers built for a specific language. Tracing and partial evaluation approach this meta-compilation from two ends of a spectrum, resulting in different sets of tradeoffs.
This study investigates both approaches in the context of self-optimizing interpreters, a technique for building fast abstract-syntax-tree interpreters. Based on RPython for tracing and Truffle for partial evaluation, we assess the two approaches by comparing the impact of various optimizations on the performance of an interpreter for SOM, an object-oriented dynamically typed language. The goal is to determine whether either approach yields clear performance or engineering benefits. We find that tracing and partial evaluation both reach roughly the same level of performance. SOM based on meta-tracing is on average 3x slower than Java, while SOM based on partial evaluation is on average 2.3x slower than Java. With respect to engineering, however, tracing has significant benefits, because it requires language implementers to apply fewer optimizations to reach the same level of performance.
Open LLMs: Viable for Production or Low-Quality Toy? (M Waleed Kadous)
Are open LLMs useful for production applications, or are they low-quality toys useful only for experiments? We share our experiences using open LLMs versus proprietary LLMs.
Transfer Learning for Improving Model Predictions in Robotic Systems (Pooyan Jamshidi)
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
Crude-Oil Scheduling Technology: moving from simulation to optimization (Brenno Menezes)
Scheduling technology in today’s crude-oil refining industries, whether commercial or homegrown, relies on complex simulation of scenarios in which the user alone is responsible for making many different decisions manually while searching for feasible solutions over a limited time horizon, i.e., trial-and-error heuristics. As a normal outcome, schedulers abandon these solutions and return to their simpler spreadsheet simulators due to: (i) the time-consuming effort of configuring and managing numerous scheduling scenarios, and (ii) the need to update premises and situations that are constantly changing. Moving to solutions based on optimization rather than simulation, the lecture describes the next steps in refactoring the scheduling technology at PETROBRAS, treating separately the graphical user interface (GUI) and data-communication developments (non-modeling related) from the modeling and process-engineering work on automated decision-making, with built-in problem-representation facilities and integrated data handling, among other techniques, in a smart scheduling frontline.
We try to solve the Vehicle Routing Problem by using the Artificial Bee Colony (ABC) algorithm--an optimisation algorithm that mimics the swarm intelligence of bees in nature. We implement this algorithm in parallel over several cores and present a comparative study of the results.
Paper presented at DOLAP 2020: Towards Conversational OLAP
Link to the presentation: https://youtu.be/IfBc1H46s8Y
Abstract: The democratization of data access and the adoption of OLAP in scenarios requiring hand-free interfaces push towards the creation of smart OLAP interfaces. In this paper, we envisage a conversational framework specifically devised for OLAP applications. The system converts natural language text in GPSJ (Generalized Projection, Selection and Join) queries. The approach relies on an ad-hoc grammar and a knowledge base storing multidimensional metadata and cubes values. In case of ambiguous or incomplete query description, the system is able to obtain the correct query either through automatic inference or through interactions with the user to disambiguate the text. Our tests show very promising results both in terms of effectiveness and efficiency.
Authors: Matteo Francia, Enrico Gallinucci, Matteo Golfarelli
ISC Frankfurt 2015: Good, bad and ugly of accelerators and a complementary path (John Holden)
Accelerators versus Adjoint Algorithmic Differentiation (AAD)? Nonsense: it is not a choice. The two can be combined to provide the ultimate accelerator. Accelerators such as NVIDIA GPUs and Intel Xeon Phis CAN be combined with AD. NAG has the software tools and expertise to deliver AD solutions for traditional architectures and accelerators.
Accelerating the Development of Efficient CP Optimizer Models (Philippe Laborie)
The IBM constraint programming optimization system CP Optimizer was designed to provide automatic search and simple modeling of discrete optimization problems, with a particular focus on scheduling applications. It is used in industry for solving operational planning and scheduling problems. We will give an overview of CP Optimizer and then describe in further detail a set of features, such as the input/output file format, warm starts, and conflict refinement, that help accelerate the development of efficient models.
- How to tackle an object detection competition
- Schwert's 6th-place solution on Open Images Challenge 2019
- presented at the lunch workshop of the 26th Symposium on Sensing via Image Information (2020).
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut... (Pooyan Jamshidi)
Modern cyber-physical systems (e.g., robotics systems) are typically composed of physical and software components, the characteristics of which are likely to change over time. Assumptions about parts of the system made at design time may not hold at run time, especially when a system is deployed for long periods (e.g., over decades). Self-adaptation is designed to find reconfigurations of systems to handle such run-time inconsistencies. Planners can be used to find and enact optimal reconfigurations in such an evolving context. However, for systems that are highly configurable, such planning becomes intractable due to the size of the adaptation space. To overcome this challenge, in this paper we explore an approach that (a) uses machine learning to find Pareto-optimal configurations without needing to explore every configuration and (b) restricts the search space to such configurations to make planning tractable. We explore this in the context of robot missions that need to consider task timeliness and energy consumption. An independent evaluation shows that our approach results in high-quality adaptation plans in uncertain and adversarial environments.
Paper: https://arxiv.org/abs/1903.03920
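The pruning idea described above, restricting the planner to Pareto-optimal configurations, can be sketched with a plain Pareto-front filter. This is a minimal illustration with made-up (time, energy) numbers; the paper's actual learning pipeline is more involved.

```python
# Minimal sketch of Pareto-front filtering over configurations, each
# scored on two objectives to minimize: task time and energy.
# Illustrative only; not the paper's implementation.

def pareto_front(configs):
    """Return the configurations not dominated by any other.

    Config a dominates b if a is no worse on both objectives and
    strictly better on at least one.
    """
    front = []
    for i, (t_i, e_i) in enumerate(configs):
        dominated = any(
            (t_j <= t_i and e_j <= e_i) and (t_j < t_i or e_j < e_i)
            for j, (t_j, e_j) in enumerate(configs) if j != i
        )
        if not dominated:
            front.append((t_i, e_i))
    return front

# Hypothetical (time, energy) measurements for five configurations.
configs = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
print(pareto_front(configs))  # the planner only needs to search these
```

Restricting the search to the front shrinks the adaptation space the planner must consider while keeping every trade-off that could be optimal under some weighting of the objectives.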
Presenting a new Ant Colony Optimization Algorithm (ACO) for Efficient Job Sc... (Editor IJCATR)
Grid computing utilizes distributed heterogeneous resources to support complicated computing problems. Job scheduling in a computing grid is a very important problem: to utilize grids efficiently, we need a good job-scheduling algorithm to assign jobs to resources.
In the natural environment, ants have a tremendous ability to team up to find an optimal path to food sources, and an ant algorithm simulates this behavior. In this paper, a new Ant Colony Optimization (ACO) algorithm is proposed for job scheduling in the grid environment. The main contribution of this paper is to minimize the makespan of a given set of jobs. According to the experimental results, the proposed algorithm outperforms other job-scheduling algorithms.
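The general shape of an ACO-style scheduler that minimizes makespan can be sketched as follows. All run-times and parameter values are made up, and this is a generic sketch of the technique, not the paper's algorithm.

```python
import random

# Generic ACO-style job scheduler minimizing makespan on a toy instance.
# RUNTIME[j][m] is the (hypothetical) run-time of job j on machine m.
RUNTIME = [[3, 5], [2, 4], [6, 2], [4, 3]]
N_JOBS, N_MACHINES = 4, 2
ALPHA, RHO, N_ANTS, N_ITERS = 1.0, 0.1, 10, 50

# pheromone[j][m]: learned desirability of putting job j on machine m
pheromone = [[1.0] * N_MACHINES for _ in range(N_JOBS)]

def build_schedule():
    """One ant assigns every job to a machine, biased by pheromone."""
    return [
        random.choices(range(N_MACHINES),
                       [pheromone[j][m] ** ALPHA for m in range(N_MACHINES)])[0]
        for j in range(N_JOBS)
    ]

def makespan(assign):
    """Completion time of the busiest machine under this assignment."""
    load = [0] * N_MACHINES
    for j, m in enumerate(assign):
        load[m] += RUNTIME[j][m]
    return max(load)

best, best_cost = None, float("inf")
for _ in range(N_ITERS):
    for _ in range(N_ANTS):
        schedule = build_schedule()
        cost = makespan(schedule)
        if cost < best_cost:
            best, best_cost = schedule, cost
    # evaporate all trails, then reinforce the best schedule found so far
    for j in range(N_JOBS):
        for m in range(N_MACHINES):
            pheromone[j][m] *= 1 - RHO
    for j, m in enumerate(best):
        pheromone[j][m] += 1.0 / best_cost

print(best, best_cost)
```

For this toy instance the optimum puts jobs 0 and 1 on machine 0 and jobs 2 and 3 on machine 1, for a makespan of 5; the evaporate-then-reinforce loop is what lets good assignments accumulate pheromone over iterations.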
Gatling is an open-source stress-testing tool.
Why Gatling:
- High performance.
- Actor model (Akka) instead of one thread per virtual user.
- Asynchronous non-blocking I/O (Netty) instead of synchronous blocking I/O.
A race of two compilers: GraalVM JIT versus HotSpot JIT C2. Which one offers ... (J On The Beach)
Do you want to check the efficiency of the new, state-of-the-art GraalVM JIT compiler against the older but most widely used JIT, C2? Let's do a side-by-side performance comparison on the same source code.
The talk reveals how the traditional just-in-time compiler (JIT C2) from HotSpot/OpenJDK internally manages runtime optimizations for hot methods, in comparison to the new, state-of-the-art GraalVM JIT compiler on the same source code, emphasizing the internals and strategies each compiler uses to achieve better performance in the most common situations (or code patterns). For each optimization, Java source code and the corresponding generated assembly code are shown to prove what really happens under the hood.
Each test is covered by a dedicated JMH benchmark, timings, and conclusions. Main topics of the agenda: scalar replacement, null checks, virtual calls, lock coarsening, lock elision, lambdas, and vectorization (a few cases).
The tools used during my research study are JITWatch, the Java Microbenchmark Harness (JMH), and perf. All test scenarios are launched against the latest official Java release (version 11).
Using Simulation to Investigate Requirements Prioritization Strategies
ARCS Presentation 2008
1. An Anytime Winner Determination Algorithm for Time-Extended Multi-Robot Task Allocation. Dr. Fang Tang & Spondon Saha, Intelligent Robotics Lab, California State Polytechnic University, Pomona, CA
3. Intelligent Robotics Lab, Cal Poly Pomona. Introduction: ASyMTRe-D. Automated Synthesis of Multi-robot Task solutions through software Reconfiguration. ASyMTRe determines a coalition of robots, from a heterogeneous group of robots, to carry out a given multi-robot task.
4. Introduction: ASyMTRe-D. Heterogeneous group of robots? Robots with different functional capabilities. Multi-robot task? A task that cannot be carried out by a single robot; it requires a “strongly cooperative” solution.
5. Introduction: ASyMTRe-D. ASyMTRe: a centralized algorithm; runs on a central server; aware of the number of robots and their individual capabilities. ASyMTRe-D: distributed ASyMTRe designed using the Contract Net Protocol; runs locally on each robot; uses a group negotiation process to determine coalitions.
6. Introduction: ASyMTRe-D. [Diagram: Task A is presented to a heterogeneous group of robots R1 through R5; robots not selected for the coalition remain idle.]
7. Introduction: ASyMTRe-D. ASyMTRe-D takes into account: task-specific cost and robot-inherent cost. It determines a suitable mapping between the task and the robots.
8. Introduction: ASyMTRe-D. Problem: can handle only one task at a time; each round results in idle robots. Desired behavior: reduce instances of idle robots; make idle robots work on other tasks (if any).
11. Introduction: Combinatorial Auctions. Complexity: an NP-complete problem, similar to the partitioning problem (also NP-complete). Dynamic programming: long execution time; explores the entire solution space; scales to only a small number of bids.
12. Introduction: Combinatorial Auctions. Solution: an anytime tree algorithm that explores only the relevant solution space and is polynomial in the number of bids submitted.
14. Quick Review. ASyMTRe-D drawbacks: accepts tasks sequentially; the order of task execution determines total execution time; idle robots are a waste of resources. Need: a task scheduler; least robot idle time -> maximum utilization of resources. Inspiration for STA: combinatorial auctions.
17. Simultaneous Task Allocation (STA). Will always return a partial answer if terminated early. Each path is a set of disjoint coalitions -> a partition. All possible partitions of robots are considered. Run-time complexity depends on: the number of bids submitted and the number of tasks considered.
18. Simultaneous Task Allocation (STA). [Diagram: a batch of tasks (Task A through Task N) flows into the STA algorithm, which feeds allocations to ASyMTRe-D.]
19. Performance (STA). Test environment: Dell Inspiron 6400; 1.73 GHz Pentium dual-core processor; 2 GB of RAM; Python. Objective: to measure the execution time of the algorithm for increasing numbers of bids and increasing task sizes.
23. Performance (STA). Problems: execution time is exceedingly long; the entire tree is generated regardless. Need: faster execution time; generate only the relevant parts of the tree.
24. Future Work (STA). Use heuristics to generate only relevant portions of the tree, e.g., Iterative Deepening A* (IDA*). Optimize the current version of the algorithm. Demonstrate the complete approach using ASyMTRe-D.
25. Conclusion (STA). Multiple tasks are considered simultaneously. All possible mappings of tasks to coalitions; no restriction on coalition size. Reduced overall execution time. STA (higher level); ASyMTRe-D (lower level). An anytime algorithm!
26. References
Sandholm, T. 1999. Algorithm for Optimal Winner Determination in Combinatorial Auctions. International Joint Conference on Artificial Intelligence (July), Stockholm, Sweden, 542-547.
Parker, L. E. and Tang, F. 2005. ASyMTRe: Automated Synthesis of Multi-Robot Task Solutions through Software Reconfiguration. Proceedings of the IEEE International Conference on Robotics and Automation (April).
Tang, F. and Parker, L. E. 2005. Distributed Multi-Robot Coalitions through ASyMTRe-D. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (August).
Parker, L. E. and Tang, F. 2006. Building Multirobot Coalitions through Automated Task Solution Synthesis. Proceedings of the IEEE 94(7) (July), 1289-1305.
Parker, L. E. and Tang, F. 2007. A Complete Methodology for Generating Multi-Robot Task Solutions using ASyMTRe-D and Market-Based Task Allocation. To appear in the IEEE International Conference on Robotics and Automation (April).
Wikipedia. http://www.wikipedia.org/
NOTES: ASyMTRe-D is a distributed resource allocation system for multi-robot environments. ASyMTRe-D is used to determine a suitable coalition of robots, from a heterogeneous group of robots, that can work together to accomplish a given multi-robot task. A copy of ASyMTRe-D runs locally on each robot; this has been shown to scale better than its centralized counterpart for increasing numbers of robots.
Definitions: Heterogeneous Group of Robots – Robots that have different sensor and effector capabilities, as opposed to a homogeneous group of robots where each robot has the same set of sensors and effectors. Multi-Robot Tasks – Tasks that are not trivially serializable and cannot be decomposed into subtasks that can be completed by individual robots operating independently. These tasks require a “strongly cooperative” solution, meaning the robots must act in concert to achieve the task; this type of task is classified as “tightly coupled” or “tightly coordinated”. Sensor – A device used by a robot to perceive and measure features of its surrounding environment; examples include sonar sensors, laser sensors, GPS, and cameras. Effector – A device used by the robot to produce a change in the surrounding environment in order to accomplish a programmed goal or task. Effectors are also known as actuators; examples include motors, servos, robotic arms, and grippers.
NOTES: R1, R2, R3, R4, and R5 are a heterogeneous group of robots; each robot has different sensing and effector capabilities. When a multi-robot Task A is presented to the group, an instance of ASyMTRe-D on each robot initiates a group negotiation process with the other robots. By taking into account the cost of carrying out Task A and the contributions available from each robot, ASyMTRe-D determines a suitable coalition of robots that can carry out Task A. Once ASyMTRe-D determines a coalition, the selected robots carry out the task. ASyMTRe-D is a distributed resource scheduling system for multi-robot environments.
NOTES: ASyMTRe-D works by allocating robots to a particular multi-robot task. It takes into account: the cost of performing the task (task-specific cost) and the costs induced by each robot for performing that task (robot-inherent cost). Using these two sets of data, ASyMTRe-D tries to find the right coalition of robots whose collective inherent cost is a perfect match for the task-specific cost. ASyMTRe-D can only handle one task at a time, which means there are frequent cases where some robots are not allocated into the coalition for carrying out a particular task. This results in “idle robots”, which is a waste of resources.
NOTES: “Combinatorial auctions” are like parallel auctions where all the items are presented at once, except that bidders are allowed to bid on any combination of items. Bidders in a combinatorial auction are generally “happy bidders” because they can bid on the set of items they have their eyes on and still compete fairly with other bidders. The problem arises for the auctioneer, who has to determine the winning combination of items: each submitted bid can contain overlapping items, so there is no simple method to determine the winner.
Definitions: Sequential Auction – An auction where the items are presented sequentially and bidders place bids as the items are presented. Parallel Auction – An auction where all the items are presented before bidding ensues, and bidders can then bid on individual items.
NOTES: Dynamic programming is one approach to determining the winners, but it can end up exploring the entire solution space rather than only the relevant part, so the execution time can be unnecessarily long; this approach is valid for only a small number of bids. Winner determination is an NP-complete problem, very similar to the partition problem (also NP-complete). An appropriate solution is an anytime algorithm that explores only the relevant solution space for winner determination; the algorithm is polynomial in the number of bids submitted.
Definitions: Anytime Algorithm – Also known as an “interruptible algorithm”: one that can return a partial answer even if it is terminated before completion. Partition Problem – An NP-complete problem in computer science: decide whether a given multiset of integers can be partitioned into two “halves” that have the same sum.
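The partition problem named in these definitions can be decided with a textbook subset-sum dynamic program. This is a standalone illustration of the definition, unrelated to the talk's own code.

```python
def can_partition(nums):
    """Decide the partition problem for a multiset of integers: can nums
    be split into two halves with equal sums? Pseudo-polynomial DP,
    O(len(nums) * sum(nums) / 2) time."""
    total = sum(nums)
    if total % 2:
        return False              # an odd total can never split evenly
    target = total // 2
    reachable = {0}               # subset sums reachable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(can_partition([3, 1, 1, 2, 2, 1]))  # True: e.g. {3, 1, 1} vs {2, 2, 1}
print(can_partition([2, 3, 4]))           # False: total 9 is odd
```

The DP is pseudo-polynomial (it depends on the magnitude of the numbers), which is consistent with the problem being NP-complete in the size of the input encoding.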
NOTES: Bids are added to the tree starting with the lowest-indexed bid. Each path in the tree runs from the root node to a leaf node. A bid is not added to a path if it contains items already used on that path, so each path is a collection of disjoint bids. Dummy bids are used so that all possible sets of combination bids are represented in the tree. The winning set of bids is the set of bids along the path with the highest total bid value.
Definitions:
Partition of a set – in mathematics, a partition of a set X is a division of X into non-overlapping "parts," "blocks," or "cells" that cover all of X. More formally, these cells are both collectively exhaustive and mutually exclusive with respect to the set being partitioned.
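A minimal sketch of this path construction (the two-item bids and values here are invented for illustration): each path extends only with bids whose items are disjoint from those already on the path, bids are considered in index order, and stopping a path early plays the role of a dummy bid.

```python
# Toy bid tree over items 1 and 2; bundles and prices are assumed.
bids = [({1}, 4), ({2}, 3), ({1, 2}, 6)]

def paths(remaining_bids, used_items):
    """Enumerate every root-to-leaf path of mutually disjoint bids.
    Yielding the empty extension at each node acts as a dummy bid."""
    yield []
    for i, (items, value) in enumerate(remaining_bids):
        if used_items.isdisjoint(items):
            # Only later-indexed bids may follow, matching the rule that
            # bids are added starting from the least-indexed bid.
            for rest in paths(remaining_bids[i + 1:], used_items | items):
                yield [(items, value)] + rest

# The winning path is the one with the highest accumulated bid value.
best = max(paths(bids, frozenset()), key=lambda p: sum(v for _, v in p))
# best is [({1}, 4), ({2}, 3)] with total value 7, beating the {1, 2} bid at 6
```

Every yielded path is a set of disjoint bids, so the best path is automatically a valid (partial) partition of the items.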
NOTES: The current working version of ASyMTRe-D is designed to handle only one task at a time, so when a batch of tasks is to be executed, the order in which they are executed affects the total execution time. This motivates a task scheduler that assigns tasks efficiently so that the total execution time of the batch is minimized, which calls for maximum utilization of the robots and minimum robot idle time. Based on the anytime algorithm for winner determination in combinatorial auctions, we have designed an algorithm that assigns tasks to ASyMTRe-D simultaneously. The goal of the winner-determination process is to find the set of coalitions for executing the batch of tasks such that the total contribution from the winning coalitions is maximized and the members of the winning coalitions do not overlap.
NOTES: Suppose three tasks (Task1, Task2, Task3) have been presented to a heterogeneous group of robots (R1, R2, R3). Following the combinatorial-auction approach, ASyMTRe-D performs a group negotiation process for each task and submits bids for it; each bid consists of a coalition of robots and the price of the bid (a dollar amount). In our current setting, the bid value is a weighted combination of the robot-inherent cost (such as sensing and computational cost) and the task-specific cost (such as task completion time and success probability). For example, a coalition that can complete a task in a shorter time, with a higher success probability and at a lower cost, produces a higher bid value.
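One way such a bid value might be computed is sketched below. The weights and the exact linear combination are assumptions made for illustration, not the formula used in the actual implementation; the only property carried over from the description is that shorter time, higher success probability, and lower cost each raise the bid.

```python
def bid_value(completion_time, success_prob, sensing_cost, compute_cost,
              w_time=0.4, w_success=0.4, w_cost=0.2):
    """Illustrative weighted combination (weights are assumptions):
    reward success probability, penalize completion time and the
    robot-inherent sensing and computational costs."""
    task_term = w_success * success_prob - w_time * completion_time
    robot_term = -w_cost * (sensing_cost + compute_cost)
    return task_term + robot_term
```

Under this sketch, a coalition that halves its completion time or its sensing cost submits a strictly higher bid, all else being equal.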
NOTES: All submitted bids are first grouped by their respective tasks. For each task, bids are added to the tree in sequential order, and a bid is added only if its set of robots is disjoint from the bids already on the current path; as a result, each path contains a disjoint set of bids. As bids are added, the accumulated bid value is stored in each leaf node. A dummy bid of value zero is added after each iteration (except the last) so that the space of possible coalition sets is fully represented. Once the tree has been constructed, a depth-first search finds the path with the highest accumulated bid value, which gives the winning partition of robots allocated to their respective tasks. In this example, coalition {R3} is assigned to Task1 and coalition {R1, R2} is assigned to Task2. No robots could be allocated to Task3 in the current round, so this task is saved for the next round.
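The procedure above can be sketched as a depth-first search over a bid tree with one level per task. This is a simplified sketch: the robot coalitions mirror the Task1–Task3 example, but the bid values are invented, and the explicit tree is replaced by recursion (the skip branch plays the role of the zero-value dummy bid).

```python
def sta_winner(bids_by_task):
    """DFS over the bid tree: one level per task; each path is a set of
    robot-disjoint bids, and the path with the highest value wins."""
    tasks = list(bids_by_task)
    best = {"value": 0, "assignment": {}}

    def dfs(i, used, value, assignment):
        if value > best["value"]:                 # anytime: keep best so far
            best["value"], best["assignment"] = value, dict(assignment)
        if i == len(tasks):
            return
        task = tasks[i]
        for coalition, bid in bids_by_task[task]:
            if used.isdisjoint(coalition):        # keep the path disjoint
                assignment[task] = coalition
                dfs(i + 1, used | coalition, value + bid, assignment)
                del assignment[task]
        dfs(i + 1, used, value, assignment)       # dummy bid: leave task open

    dfs(0, frozenset(), 0, {})
    return best["assignment"], best["value"]

# Coalitions mirror the slide's example; the bid values are invented.
bids = {
    "Task1": [(frozenset({"R3"}), 6), (frozenset({"R1", "R2"}), 5)],
    "Task2": [(frozenset({"R1", "R2"}), 8)],
    "Task3": [(frozenset({"R1", "R2", "R3"}), 4)],
}
assignment, value = sta_winner(bids)
# {R3} wins Task1, {R1, R2} wins Task2; Task3 waits for the next round
```

Because the best-so-far result is updated at every node, interrupting the search at any point still yields a valid (if possibly sub-optimal) disjoint assignment, which is what makes the algorithm anytime.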
NOTES: By the very nature of anytime algorithms, this solution always returns an answer, even if execution is terminated before completion, and given enough time it generates the optimal solution. The algorithm also guarantees that each path is a disjoint set of bids, i.e., a partition. All possible partitions of robots are considered, rather than restricting coalition size for the sake of efficiency. The run-time complexity of the algorithm depends on the size of the search space, and therefore on the number of bids submitted and the number of tasks being considered.
Definitions:
Partition of a set – in mathematics, a partition of a set X is a division of X into non-overlapping "parts," "blocks," or "cells" that cover all of X. More formally, these cells are both collectively exhaustive and mutually exclusive with respect to the set being partitioned.
NOTES: Each distribution aims at measuring the execution time of the algorithm against the number of bids submitted. The bids were not generated by ASyMTRe-D, so an input generator was used. We designed these two distributions to test the robustness of the algorithm and to measure the sparseness of the trees created, by observing execution time as the number of bids increases.
NOTES: Random distributions are expected to create sparser trees, because large coalition sizes limit the number of successive bids that can be added along a path. For example, if a bid's coalition contains 9 of 10 robots, there is only a 1-in-10 chance that a subsequent single-robot bid involves the one robot not already on the current path. This should result in sparser trees and lower execution time. The maximum execution time recorded for the random distribution was 1.23 seconds.
NOTES: Uniform distributions are the more demanding case: their small coalition sizes make the resulting trees denser, and hence increase execution time, which also depends on the number of tasks submitted. The highest recorded execution time was 7.31 seconds, for a task size of 20.
NOTES: The current problem with the algorithm is that it uses depth-first search to find the winning partitions, which involves looking at every path; a better early estimate of the winning partition is needed so that this exploration is minimized. To reduce the chance of long execution times, we need to emphasize generating sparser trees, and hence to generate only the relevant parts of the tree.
NOTES: Since execution time is directly correlated with the density of the tree, it can be substantially reduced by generating only the parts of the tree that involve the winning partition of robots. Iterative Deepening A* is one algorithm under consideration for this approach, in which a heuristic would guide the search toward generating only the relevant partitions of the tree, thereby reducing execution time. Other work includes optimizing the current version of the algorithm through a more granular analysis, fixing areas that impede execution time. Ultimately, we wish to test this in a real-world application with ASyMTRe-D and a group of robots, so that tasks can be assigned concurrently to coalitions of robots.
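To hint at what pruning buys, the sketch below uses a simple branch-and-bound upper bound rather than the IDA* heuristic proposed above (so it is an assumed stand-in, not the planned design): a branch is never expanded when even its most optimistic completion cannot beat the best path found so far, so that part of the tree is simply never generated.

```python
def sta_winner_pruned(bids_by_task):
    """Branch-and-bound sketch (an assumption, not the IDA* variant):
    prune any branch whose optimistic completion cannot beat the best
    path found so far, so that subtree is never generated."""
    tasks = list(bids_by_task)
    # Optimistic bound: the single best bid value for each remaining task.
    best_bid = [max((v for _, v in bids_by_task[t]), default=0) for t in tasks]
    remaining = [0] * (len(tasks) + 1)
    for i in range(len(tasks) - 1, -1, -1):
        remaining[i] = remaining[i + 1] + best_bid[i]

    best = {"value": 0}

    def dfs(i, used, value):
        best["value"] = max(best["value"], value)   # anytime: best so far
        if i == len(tasks) or value + remaining[i] <= best["value"]:
            return                                  # bound: skip this subtree
        for coalition, bid in bids_by_task[tasks[i]]:
            if used.isdisjoint(coalition):
                dfs(i + 1, used | coalition, value + bid)
        dfs(i + 1, used, value)                     # dummy bid: skip the task

    dfs(0, frozenset(), 0)
    return best["value"]

# Same toy scenario as the earlier slide example; bid values are invented.
bids = {
    "Task1": [(frozenset({"R3"}), 6), (frozenset({"R1", "R2"}), 5)],
    "Task2": [(frozenset({"R1", "R2"}), 8)],
    "Task3": [(frozenset({"R1", "R2", "R3"}), 4)],
}
```

The bound is admissible (it never underestimates what a subtree could still earn), so pruning cannot discard the optimal path, and the anytime property is preserved.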
NOTES: The STA approach aims at a more concurrent level of operation, as opposed to the sequential, one-task-at-a-time approach followed by ASyMTRe-D. The algorithm serves as the task scheduler for ASyMTRe-D, with the objective of maximizing the use of resources and minimizing total execution time; STA therefore operates at the higher level while ASyMTRe-D operates at the lower level. When a batch of tasks is presented to STA, it first calls ASyMTRe-D to generate all possible coalitions from its resource pool of robots. Once the coalitions are generated, each with an accompanying bid value signifying its interest in accomplishing a particular task, STA determines the winning partition of robots for the current round; all pending tasks and idle robots are then carried over to the next round. A key feature of STA is that it is an anytime algorithm, and hence always returns at least a sub-optimal solution even if it is terminated before completion.