Towards a Framework for Multidirectional Model Transformations
Nuno Macedo
This document proposes extensions to the QVT-R language to support multidirectional model transformations. It introduces the concept of consistency dependencies that specify the relationships between source and target models. This allows transformations to be defined between any number of models rather than just two. It also discusses how the checking and enforcement semantics of QVT-R can be adapted to support this multidirectional approach through the use of consistency dependencies and model distances.
High Speed Decoding of Non-Binary Irregular LDPC Codes Using GPUs (Paper)
Enrique Monzo Solves
Implementation of a high-speed decoder for non-binary irregular LDPC codes using CUDA GPUs.
Moritz Beermann, Enrique Monzó, Laurent Schmalen, Peter Vary
IEEE SiPS, Oct. 2013, Taipei, Taiwan
This paper proposes a multiple query optimization (MQO) scheme for change point detection (CPD) that can significantly reduce the number of operators needed. CPD is used to detect anomalies in time series data but requires tuning parameters, which leads to running multiple CPDs with different parameters. The paper identifies four patterns for sharing CPD operators between queries based on whether parameter values are the same. Experiments show the proposed MQO approach reduces the number of operators by up to 80% compared to running each CPD independently, thus improving performance. Integrating MQO with hardware accelerators is suggested as future work.
The document provides an overview of implementing 802.11 wireless networks in the ns-2 network simulator. It discusses:
1) Key components of ns-2 including the event scheduler, network components, and support for wireless extensions.
2) Implementation of the 802.11 physical layer including propagation models and sending/receiving packets.
3) Implementation of the 802.11 MAC layer including states, timers, CSMA/CA, and a capture model.
4) An example of adding Rayleigh fading to analyze its impact on TCP and UDP performance over distance.
This document summarizes a formal analysis of the TESLA authentication protocol using the Timed OTS/CafeOBJ method. It provides an overview of the modeling and verification process, including:
1) Modeling the protocol, messages, and network as a timed observational transition system (TOTS) in the CafeOBJ algebraic specification language.
2) Formally specifying security properties to verify, such as an invariant that the receiver only accepts authentic messages from the actual sender.
3) Outlining the verification procedure using induction, lemmas, and proof scores executed with the CafeOBJ system.
1) The document describes deriving the blocking probability of a loss system under a lost-calls-cleared (LCC) infinite-source traffic model. It shows the derivation of the blocking probability formula using Markov chains.
2) The blocking probability formula is then simulated using MATLAB to generate a graph of blocking probability versus traffic intensity for different numbers of channels.
3) The graph can be used for network planning to determine the maximum traffic or required channels for a given blocking probability goal.
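For an LCC system, the Markov-chain derivation outlined above yields the standard Erlang B formula. The following is a minimal Python sketch (my own, standing in for the MATLAB program described) that evaluates it with the numerically stable recurrence and sweeps a few channel counts:

```python
def erlang_b(traffic, channels):
    """Blocking probability for `channels` servers offered `traffic` Erlangs,
    via the stable recurrence B(A, 0) = 1; B(A, k) = A*B / (k + A*B)."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic * b / (k + traffic * b)
    return b

# Sweep channel counts at a fixed traffic intensity, as in the graph.
for n in (2, 5, 10):
    print(n, round(erlang_b(2.0, n), 4))
```

Plotting `erlang_b` over a range of traffic intensities for each channel count reproduces the planning graph the document describes.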
Traffic Class Assignment for Mixed-Criticality Frames in TTEthernetVoica Gavrilut
The document discusses assigning traffic classes to messages in mixed-criticality systems using TTEthernet. It presents:
1) The motivation for integrating systems with applications of different criticality on a shared platform and focuses on mixed time-criticality systems.
2) The architecture and application models considered, including hard real-time, soft real-time, and non-critical messages that must be assigned traffic classes.
3) The problem of determining the optimal traffic class assignment to maximize schedulability of hard real-time messages and total utility of soft real-time messages.
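As a rough illustration of the fading model added in that example (plain Python, not ns-2 code), a Rayleigh amplitude can be sampled as the envelope of two independent zero-mean Gaussian components:

```python
import math
import random

def rayleigh_gain(rng, sigma=1.0):
    """One Rayleigh-fading amplitude sample: the envelope of independent
    in-phase and quadrature Gaussian components."""
    x = rng.gauss(0.0, sigma)
    y = rng.gauss(0.0, sigma)
    return math.hypot(x, y)

rng = random.Random(42)
samples = [rayleigh_gain(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# The theoretical mean of a Rayleigh(sigma) envelope is sigma * sqrt(pi/2).
print(round(mean, 2))
```

Multiplying a received signal power by the squared gain gives the per-packet fading used when analyzing TCP/UDP performance over distance.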
The document discusses various techniques for unit testing concurrent code, including using JUnit, TestNG, and a custom ThreadWeaver framework to test for threading issues in a FlawedList implementation. It also covers using jcstress for concurrency stress testing and notes that jcstress results may vary between runs and depend on processor architecture memory access semantics.
CarFast: Achieving Higher Statement Coverage Faster
Sangmin Park
CarFast is a technique for achieving high statement coverage faster when testing software. It works by first statically ranking branches based on the number of statements they contain. It then selects test inputs that target higher ranked branches in order to cover more statements with fewer iterations. An implementation of CarFast was able to achieve baseline coverage with significantly fewer iterations and less time compared to random testing, adaptive random testing, and dynamic symbolic execution on several subject programs ranging from 1KLOC to 1MLOC in size. Future work aims to improve CarFast's scalability and investigate its fault detection abilities.
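The ranking-and-targeting loop can be sketched in a few lines of Python (branch names and weights are hypothetical, and the real metric counts transitively guarded statements):

```python
def rank_branches(branch_weights):
    """CarFast-style static ranking (sketch): order branches by the
    number of statements they guard, largest first."""
    return sorted(branch_weights, key=branch_weights.get, reverse=True)

def pick_target(branch_weights, covered):
    """Pick the highest-ranked branch not yet covered by any test."""
    for b in rank_branches(branch_weights):
        if b not in covered:
            return b
    return None

# Hypothetical branches with the statement counts they guard.
weights = {"b1": 120, "b2": 45, "b3": 300}
print(pick_target(weights, covered={"b3"}))
```

Each iteration generates inputs aimed at the picked branch, adds the newly covered branches to `covered`, and repeats until `pick_target` returns `None`.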
Falcon: Fault Localization in Concurrent Programs
Sangmin Park
The document describes a technique called Falcon for fault localization in concurrent programs. Falcon uses patterns of variable accesses and thread interactions to identify suspicious patterns. It records patterns dynamically from multiple program executions. It then computes the suspiciousness of each pattern based on whether the pattern appears in passing or failing runs. The most suspicious patterns that appear frequently in failing runs but not passing runs help locate likely faults. The technique is evaluated on concurrency violations like order violations and atomicity violations.
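A Jaccard-style score illustrates the idea (this exact metric is an assumption for illustration; Falcon's published formula may differ): patterns seen in many failing runs and few passing runs score high.

```python
def suspiciousness(fail_with, pass_with, total_fail):
    """Jaccard-style suspiciousness of an access pattern (a sketch;
    not necessarily Falcon's exact metric)."""
    denom = total_fail + pass_with
    return fail_with / denom if denom else 0.0

# pattern -> (failing runs it appears in, passing runs it appears in);
# pattern names like "W1->R2" (write then read) are illustrative.
observations = {"W1->R2": (9, 1), "R1->R2": (3, 8), "W1->W2": (1, 9)}
scores = {p: suspiciousness(f, s, total_fail=10)
          for p, (f, s) in observations.items()}
print(max(scores, key=scores.get))
```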
Griffin: Grouping Suspicious Memory-Access Patterns to Improve Understanding...
Sangmin Park
Griffin is a technique that aims to improve the understanding of concurrency bugs by grouping suspicious memory access patterns from failing tests. It first performs fault localization to generate ranked lists of memory access patterns, then clusters related tests together based on similarity of patterns. Finally, it reconstructs bugs by clustering patterns based on call stack similarity and identifying suspicious methods and a bug graph.
This document summarizes the key features and concepts of the Datomic database. It discusses Datomic's immutable and transactional data model based on values, identities, states, and time. Key concepts covered include datoms, queries using Datalog, indexing schemas, and functions. It also reviews capabilities such as history and filtering of the database value. Overall, it offers a high-level tour of Datomic's core features and data model.
Effective Fault-Localization Techniques for Concurrent Software
Sangmin Park
The document describes techniques for fault localization in concurrent software. It presents three techniques: FALCON, which localizes single-variable faults; UNICORN, which extends FALCON to localize multi-variable faults; and GRIFFIN, which provides fault explanations by identifying memory access patterns, calling contexts, and suspicious methods. These techniques are implemented in debugging tools Tracer, Unicorn and Griffin. An empirical user study is conducted to evaluate whether the tools help developers understand and fix concurrency bugs.
Learn about hitchhiker trees, a new functional, immutable, persistent variation of a fractal tree, from David Greenberg. In these slides, we'll learn how to understand immutable data structures and a variety of trees, introducing new concepts as we build up to the hitchhiker tree.
Wes McKinney gave the keynote presentation at PyCon APAC 2016 in Seoul. He discussed his work on Python data analysis tools like pandas, Apache Arrow, and Feather. He also talked about open source sustainability and governance. McKinney is working on the second edition of his book Python for Data Analysis, which is scheduled for release in 2017.
This document discusses Apache Arrow, an open source project that aims to standardize in-memory data representations to enable efficient data sharing across systems. It summarizes Arrow's goals of improving performance by 10-100x on many workloads through a common data layer, reducing serialization overhead. The document outlines Arrow's language bindings for Java, C++, Python, R, and Julia and efforts to integrate Arrow with systems like Spark, Drill and Impala to enable faster analytics. It encourages involvement in the Apache Arrow community.
The document discusses object-oriented modeling and design. It introduces object-oriented concepts like objects, classes, attributes, operations, associations, and aggregation. It explains how object-oriented analysis involves building models using these concepts to represent the structure and behavior of a system. The analysis model is then used during the design stage to create optimized implementation models before programming. Graphical notations are used to express the object-oriented models.
Huohua: A Distributed Time Series Analysis Framework For Spark
Jen Aman
This document summarizes Wenbo Zhao's presentation on Huohua, a distributed time series analysis framework for Spark. Huohua addresses issues with existing time series solutions by introducing the TimeSeriesRDD data structure that preserves temporal ordering across operations like grouping and temporal join. The group function groups time series locally without shuffling to maintain order, and temporal join uses partitioning to perform localized stream joins across partitions.
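The semantics of a temporal (as-of) join can be shown with a local, single-machine Python sketch (this is not the TimeSeriesRDD API, just the per-partition merge logic it distributes):

```python
def temporal_join(left, right):
    """As-of temporal join (sketch): for each (t, v) in `left`, attach the
    latest right-hand value with timestamp <= t. Both inputs must already
    be sorted by time, so one merge pass suffices (no shuffle needed)."""
    out, j, last = [], 0, None
    for t, v in left:
        while j < len(right) and right[j][0] <= t:
            last = right[j][1]
            j += 1
        out.append((t, v, last))
    return out

# Hypothetical example: join trades against the most recent quote.
prices = [(1, 10.0), (3, 10.5), (7, 11.0)]
quotes = [(0, "a"), (3, "b"), (6, "c")]
print(temporal_join(prices, quotes))
```

The framework's contribution is keeping partitions time-ordered so this merge can run locally on each partition boundary.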
Python Data Ecosystem: Thoughts on Building for the Future
Wes McKinney
Wes McKinney gives a presentation on the Python data ecosystem and building open source communities. He discusses his background working on Python data tools like pandas and Apache projects. McKinney emphasizes the importance of transparency, consensus building, and valuing all contributions when developing open source software. He also examines challenges in Python packaging and sees opportunities in building bridges between data science languages and tools for analyzing new data types and storage technologies.
Python Data Wrangling: Preparing for the Future
Wes McKinney
The document is a slide deck for a presentation on Python data wrangling and the future of the pandas project. It discusses the growth of the Python data science community and key projects like NumPy, pandas, and scikit-learn that have contributed to pandas' popularity. It outlines some issues with the current pandas codebase and proposes a new C++-based core called libpandas for pandas 2.0 to improve performance and interoperability. Benchmark results show serialization formats like Arrow and Feather outperforming pickle and CSV for transferring data.
Raising the Tides: Open Source Analytics for Data Science
Wes McKinney
The document discusses trends in open source analytics for data science. It notes that industry giants are opening core AI and machine learning technologies. There is also open source "disruption" in data science languages and tools. Two Sigma aims to build a collaborative data science platform through open source contributions to scale access to data and computational capabilities while enhancing productivity and collaboration. Two Sigma participates in open source to drive innovation, increase value of proprietary systems, raise awareness of challenges at scale, and attract talent. Areas of investment include Apache Arrow, Parquet, Pandas, and projects for resource management, distributed computing, and collaboration.
Improving Python and Spark (PySpark) Performance and Interoperability
Wes McKinney
Slides from Spark Summit East 2017 — February 9, 2017 in Boston. Discusses ongoing development work to accelerate Python-on-Spark performance using Apache Arrow and other tools
Datomic is a database that uses an immutable data model where data is never updated or erased, only added to over time. It uses distributed transactions where transactions are processed serially by transactors and changes are transmitted to peers. Peers cache data and submit transactions, handling queries using a merged view of the cache and latest data from the transactor. The data model consists of immutable datoms that are entities, attributes, values, and the transaction time, allowing queries over time.
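The datom model can be illustrated with plain tuples and an as-of filter (an illustrative sketch only; Datomic's real API and index structures are much richer):

```python
def as_of(datoms, tx_max):
    """Replay (entity, attribute, value, tx, added) datoms up to `tx_max`
    to reconstruct the database state at that point in time."""
    state = {}
    for e, a, v, tx, added in sorted(datoms, key=lambda d: d[3]):
        if tx > tx_max:
            break
        if added:
            state[(e, a)] = v
        else:
            state.pop((e, a), None)   # retraction removes the fact
    return state

# Hypothetical history: an email changed at transaction 200.
datoms = [
    (1, "name", "alice", 100, True),
    (1, "email", "a@x.io", 100, True),
    (1, "email", "a@x.io", 200, False),       # retraction
    (1, "email", "alice@x.io", 200, True),
]
print(as_of(datoms, 100))
```

Because nothing is overwritten, querying `as_of` at different transaction times yields different, fully consistent views of the same data.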
Mesos: The Operating System for your Datacenter
David Greenberg
Maybe you’ve heard of Mesos—that thing that you can run Hadoop on. I think it powers Twitter? Isn’t it an Apache project, or something?
In this talk, we’ll learn all about Mesos—what it is, how you can leverage it to simplify your infrastructure and reduce AWS/cloud computing costs, and why you should develop your next application on top of it. This talk will give you the tools you need to understand whether Mesos is the right fit for your infrastructure, and several starting points for learning more about Mesos.
The document discusses component protocols and coherence between protocols specified at the design level versus those followed in implementation. It presents an approach to extract the implementation-level protocol, compare it to the design-level protocol, and modify the protocols if needed to ensure coherence. Tools are proposed to help with protocol reification, extraction, verification of coherence, and potential modification to enforce coherence. Future work includes integrating protocol specifications into component models and extending automated analysis and modification capabilities.
This paper presents a technique to create panoramic video in real-time by stitching together video frames from multiple webcams. The system has two stages: an initialization stage and a real-time stage. In the initialization stage, features are detected and matched between webcam frames to compute a perspective matrix describing their geometric relationship. In the real-time stage, frames are registered and blended using the perspective matrix to display the combined wide field of view in real-time. The technique was demonstrated using two ordinary webcams and allows for inexpensive panoramic video without specialized hardware.
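The core per-pixel operation in the real-time stage is applying the 3x3 perspective (homography) matrix; a minimal sketch, assuming homogeneous coordinates:

```python
def warp_point(h, x, y):
    """Apply a 3x3 homography to a pixel coordinate: multiply by the
    matrix in homogeneous coordinates, then divide by the w component."""
    xw = h[0][0] * x + h[0][1] * y + h[0][2]
    yw = h[1][0] * x + h[1][1] * y + h[1][2]
    w  = h[2][0] * x + h[2][1] * y + h[2][2]
    return xw / w, yw / w

# A pure translation by (320, 0): places the second webcam's frame to
# the right of the first, the typical side-by-side layout.
H = [[1, 0, 320], [0, 1, 0], [0, 0, 1]]
print(warp_point(H, 10, 20))
```

In practice the matrix comes from matched features (e.g. via a RANSAC homography fit in the initialization stage), and a library warp routine applies it to whole frames rather than point by point.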
TMPA-2015: Implementing the MetaVCG Approach in the C-light System
Iosif Itkin
Alexei Promsky, Dmitry Kondratyev, A.P. Ershov Institute of Informatics Systems, Novosibirsk
12 - 14 November 2015
Tools and Methods of Program Analysis in St. Petersburg
This paper presents a hybrid parallel breadth-first search (BFS) algorithm for distributed memory systems that uses two stacks. BFS underpins many graph algorithms, and parallelizing it matters for large graphs. The paper's contributions include a 1D partitioning approach for graph representation. The hybrid algorithm assigns vertices to processors and uses local stacks to parallelize edge visits, balancing load. Experimental results on large systems show the hybrid 1D approach scales better than a 2D approach and is faster than a flat 1D implementation at higher processor counts. The paper concludes that level-synchronous BFS can be implemented without errors using relaxed queues, and poses open questions about bounding errors when level synchronization is dropped.
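The two-stack, level-synchronous scheme can be sketched sequentially (omitting the paper's partitioning and parallel edge visits): one stack holds the current frontier, the other collects the next level.

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS (sketch): `current` is this level's frontier,
    `next_` collects the following level; swapping them is the
    synchronization point between levels."""
    dist = {source: 0}
    current, next_ = [source], []
    level = 0
    while current:
        level += 1
        for u in current:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = level
                    next_.append(v)
        current, next_ = next_, []
    return dist

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0))
```

In the distributed version, each processor runs this loop over its owned vertices, exchanging newly discovered remote vertices at each level swap.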
This paper presents a hybrid parallel breadth-first search (BFS) algorithm for distributed memory systems that uses two stacks. BFS is important for graph algorithms and parallelizing it is important for large graphs. The paper's contributions include using a sparse matrix representation and a hybrid 1D partitioning approach. Experimental results on large systems show the 1D hybrid algorithm scales better than flat 1D partitioning for larger concurrencies and is up to 1.8 times faster than 2D partitioning algorithms. The paper concludes by conjecturing that level synchronous BFS can be implemented without errors using relaxed queues.
ZK Study Club: Sumcheck Arguments and Their Applications
Alex Pruden
Talk given at the ZK Study Club by Jonathan Bootle and Katerina Sotiraki about the universality of sumcheck arguments and their importance in zero-knowledge cryptography.
Model-counting Approaches For Nonlinear Numerical Constraints
Quoc-Sang Phan
This document summarizes research on using model counting approaches to analyze nonlinear numerical constraints that arise in applications like probabilistic inference, reliability analysis, and side-channel analysis. It presents two implementations of modular exponentiation with nonlinear constraints and evaluates the performance of various exact and approximate model counting tools on the path conditions extracted from symbolic execution. The results show that for small domains, brute force counting works best, while approximate model counting scales better to larger problems.
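The brute-force baseline the paper finds hard to beat on small domains is simple enumeration; a sketch counting the models of a hypothetical nonlinear constraint:

```python
def count_models(constraint, domain):
    """Brute-force model counting: enumerate every assignment over a
    small finite domain and count those satisfying the constraint."""
    return sum(1 for x in domain for y in domain if constraint(x, y))

# Nonlinear example: lattice points inside a circle of radius 5.
n = count_models(lambda x, y: x * x + y * y <= 25, range(-5, 6))
print(n)
```

The cost is exponential in the number of variables and domain width, which is why approximate counters take over on larger problems.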
The document contains details about experiments performed in a Digital Signal Processing practical course. It includes the aims, apparatus required, theory, source code and results for experiments involving MATLAB programs to generate basic signals like impulse, step, ramp and exponential signals; sine and cosine signals; quantization; sampling theorem; linear convolution; autocorrelation; and cross-correlation. Programs were written in MATLAB to perform the various digital signal processing tasks and the output was verified.
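Two of those experiments, redone as a plain Python sketch rather than the course's MATLAB programs: generating a unit impulse and computing a direct-form linear convolution.

```python
def linear_convolution(x, h):
    """Direct-form linear convolution: y[n] = sum_k x[k] * h[n - k],
    with output length len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

def unit_impulse(length, n0=0):
    """delta[n - n0] over `length` samples."""
    return [1.0 if n == n0 else 0.0 for n in range(length)]

print(linear_convolution([1, 2, 1], [1, 1]))
print(linear_convolution(unit_impulse(3), [4, 5, 6]))
```

Convolving with a unit impulse returns the system's impulse response, the sanity check usually verified first in such practicals.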
Show off the efficiency of dai-liao method in merging technology for monotono...nooriasukmaningtyas
In this article, we give a new modification of the Dai-Liao method for solving monotone nonlinear problems. Our modification relies on two key procedures: the projection method and the damping of the quasi-Newton condition. The new derivation yields two new parameters for the conjugate gradient direction, for which, under certain conditions, we have demonstrated the sufficient descent property. Under some necessary conditions, the new approach also achieves global convergence. Numerical results show how efficient the new approach is when compared with similar basic classic methods.
Design and Implementation of a Load Balancing Algorithm for a Clustered SDN C...Daniel Gheorghita
1) The document describes the design and implementation of a load balancing algorithm for a clustered SDN control plane.
2) The algorithm dynamically assigns switches to controllers based on CPU load to balance load across the cluster. It migrates switches between controllers to address load imbalances.
3) The algorithm was implemented in OpenDaylight and evaluated based on response times and throughput under different test scenarios with a single controller and multiple controllers.
The document discusses profiling and debugging dynamic object-oriented programs. It describes how traditional profilers that use execution sampling do not provide enough information, such as which specific objects are causing performance issues. New techniques are proposed that track object identities and message passing to better pinpoint problems. These include domain-specific profilers for visualizations, back-in-time debuggers using object histories, and meta-level tools that can evolve and reconfigure objects and classes at runtime. The goal is to "let Smalltalk loose" by making its object model and meta-level more flexible and evolution-friendly to support advanced debugging and profiling.
Meeting material for SSLAB at NCTU, Taiwan.
Introducing OpenVX to those who don't have any prior knowledge of OpenVX.
OpenVX is a very new standard, so this content may become outdated as new versions arrive.
This slide is based on OpenVX v1.0.1
This document provides an overview of various scientific programming models for distributed computing. It introduces reference parallel programming models like MPI and OpenMP, and discusses their strengths and weaknesses. Novel programming models are also covered, such as Microsoft Dryad, MapReduce, and COMP Superscalar (COMPSs). The document concludes that while scientific problems are complex, reference models are often unsuitable, leading to new flexible models that aim to simplify programming workflows for distributed systems.
From RNN to neural networks for cyclic undirected graphstuxette
This document discusses different neural network methods for processing graph-structured data. It begins by describing recurrent neural networks (RNNs) and their limitations for graphs, such as an inability to handle undirected or cyclic graphs. It then summarizes two alternative approaches: one that uses contraction maps to allow recurrent updates on arbitrary graphs, and one that employs a constructive architecture with frozen neurons to avoid issues with cycles. Both methods aim to make predictions at the node or graph level on relational data like molecules or web pages.
The document discusses technology mapping for area minimization without breaking DAGs into trees. It proposes two approaches - 1) Extending an existing algorithm called Flowmap-r by forming Maximum Fanout Free Cones (MFFCs) for the DAG and 2) A divide and conquer approach that recursively divides the problem into subproblems until reaching leaf nodes. The key steps involve generating MFFCs, finding mappable MFFCs based on a logic library, and using a weighted set cover algorithm to determine the minimum area cover.
White box testing involves close examination of a program's internal structure and logic flows. Test cases are designed to exercise specific conditions and loops by following logical paths through the code. This ensures all independent paths, logical decisions, loops, and internal data structures are tested. Basis path testing is a white box technique that uses flow graphs and cyclomatic complexity to identify the minimum number of test cases needed to guarantee every statement is executed at least once. Cyclomatic complexity represents the number of independent paths in a program and is used to derive a basis set of test cases that force execution of every path.
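The cyclomatic-complexity arithmetic can be sketched directly from a flow graph; the helper and the toy graph below (node names are made up) illustrate the V(G) = E - N + 2 formula:

```python
# Cyclomatic complexity V(G) = E - N + 2 for a connected flow graph:
# it gives the size of a basis set of independent paths to test.
def cyclomatic_complexity(edges, nodes):
    return len(edges) - len(nodes) + 2

# Toy flow graph of an if/else followed by a loop (illustrative names).
nodes = {"entry", "if", "then", "else", "loop", "exit"}
edges = {("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "loop"), ("else", "loop"),
         ("loop", "loop"), ("loop", "exit")}
print(cyclomatic_complexity(edges, nodes))  # 7 - 6 + 2 = 3 independent paths
```

The result matches the predicate-count rule V(G) = P + 1: the graph has two decisions (the if and the loop), hence three independent paths.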
This document discusses object interaction and modularization in Java. It introduces a digital clock example to demonstrate how objects can interact. The clock display is modularized into separate number display objects for hours and minutes. Classes are created for NumberDisplay and ClockDisplay, with ClockDisplay using two NumberDisplay objects. A ClockDemo class is used to test the clock by incrementing the time and printing the current time string. This provides an example of how objects can be designed to encapsulate behavior and interact through well-defined interfaces.
The aim of this talk is to introduce two pieces of software, "Lattice Builder" and "Stochastic Simulation in Java" (SSJ). They make it easy to conduct experiments that rely on Monte Carlo (MC), quasi-Monte Carlo (QMC), or randomized QMC (RQMC) methods. Lattice Builder is a C++ library designed to efficiently search for, produce, and examine rank-1 lattices as well as polynomial lattices. It allows the user to choose from a broad palette of search criteria, types of weights, and construction methods, which can be accessed through a graphical interface as well as through a command-line tool. Its structure also facilitates the implementation of one's own extensions and encourages combining Lattice Builder with other programming languages.
SSJ is a Java library covering an extensive set of tools for stochastic simulations. It is particularly useful for experiments relying on MC and (R)QMC, ranging from integration problems over RQMC for Markov Chains (Array-RQMC) to density estimation.
This talk gives an introductory tour through the interfaces of Lattice Builder, shows how to integrate it into SSJ, and provides first steps in SSJ based on an example for mean estimation with RQMC.
The document discusses using machine learning techniques like Gaussian processes (GPs) to optimize the configuration of software systems. It notes that software performance landscapes are often complex, with non-linear interactions between parameters and non-convex response surfaces. Measurements are also subject to noise. The document introduces an approach called TL4CO that uses multi-task Gaussian processes to model software performance across different versions/deployments, allowing it to leverage data from other versions to improve optimization. This helps address challenges in DevOps where new versions are continuously delivered.
Similar to Testing Concurrent Programs to Achieve High Synchronization Coverage (20)
GlobalLogic Java Community Webinar #18 “How to Improve Web Application Perfor...GlobalLogic Ukraine
During this talk we will answer the questions of why application performance needs to be improved and which ways of doing so are most effective. We will also talk about what a cache is, which kinds of caches exist and, most importantly, how to find a performance bottleneck.
Video and event details: https://bit.ly/45tILxj
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
Lee Barnes - Path to Becoming an Effective Test Automation Engineer.pdfleebarnesutopia
So… you want to become a Test Automation Engineer (or hire and develop one)? While there's quite a bit of information available about the important technical and tool skills to master, there's not enough discussion about the path to becoming an effective Test Automation Engineer who knows how to add VALUE. In my experience this has led to a proliferation of engineers who are proficient with tools and building frameworks but have skill and knowledge gaps, especially in software testing, that reduce the value they deliver with test automation.
In this talk, Lee will share his lessons learned from over 30 years of working with, and mentoring, hundreds of Test Automation Engineers. Whether you’re looking to get started in test automation or just want to improve your trade, this talk will give you a solid foundation and roadmap for ensuring your test automation efforts continuously add value. This talk is equally valuable for both aspiring Test Automation Engineers and those managing them! All attendees will take away a set of key foundational knowledge and a high-level learning path for leveling up test automation skills and ensuring they add value to their organizations.
AI in the Workplace Reskilling, Upskilling, and Future Work.pptxSunil Jagani
Discover how AI is transforming the workplace and learn strategies for reskilling and upskilling employees to stay ahead. This comprehensive guide covers the impact of AI on jobs, essential skills for the future, and successful case studies from industry leaders. Embrace AI-driven changes, foster continuous learning, and build a future-ready workforce.
Read More - https://bit.ly/3VKly70
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. We cover every productivity app included in Office 365, discuss common Office 365 migration scenarios, and explain how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022. We'll see what techniques helped keep the web resources available for Ukrainians, and how AWS improved DDoS protection for all customers based on the Ukraine experience.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation F...AlexanderRichford
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation Functions to Prevent Interaction with Malicious QR Codes.
Aim of the Study: The goal of this research was to develop a robust hybrid approach for identifying malicious and insecure URLs derived from QR codes, ensuring safe interactions.
This is achieved through:
Machine Learning Model: Predicts the likelihood of a URL being malicious.
Security Validation Functions: Ensures the derived URL has a valid certificate and proper URL format.
This innovative blend of technology aims to enhance cybersecurity measures and protect users from potential threats hidden within QR codes 🖥 🔒
This study was my first introduction to using ML which has shown me the immense potential of ML in creating more secure digital environments!
Discover the Unseen: Tailored Recommendation of Unwatched ContentScyllaDB
The session shares how JioCinema approaches "watch discounting." This capability ensures that if a user has watched a certain amount of a show or movie, the platform no longer recommends that particular content to the user. Flawless operation of this feature promotes the discovery of new content, improving the overall user experience.
JioCinema is an Indian over-the-top media streaming service owned by Viacom18.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
Getting the Most Out of ScyllaDB Monitoring: ShareChat's TipsScyllaDB
ScyllaDB monitoring provides a lot of useful information. But sometimes it’s not easy to find the root of the problem if something is wrong or even estimate the remaining capacity by the load on the cluster. This talk shares our team's practical tips on: 1) How to find the root of the problem by metrics if ScyllaDB is slow 2) How to interpret the load and plan capacity for the future 3) Compaction strategies and how to choose the right one 4) Important metrics which aren’t available in the default monitoring setup.
Testing Concurrent Programs to Achieve High Synchronization Coverage
1. Testing Concurrent Programs to Achieve
High Synchronization Coverage
Shin Hong†, Jaemin Ahn†, Sangmin Park*, Moonzoo Kim†, Mary Jean Harrold*
†Provable SW Lab, KAIST, South Korea; *Aristotle Research Group, Georgia Tech
Shin Hong @ PSWLAB 1 / 19
2. Overview
• A testing framework for concurrent programs
– To achieve high test coverage fast
• Key idea
1. Utilize coverage to test concurrent programs systematically
2. Manipulate thread scheduler to achieve high coverage fast
[Framework diagram: an example program (thread1() with lock(m) at line 10 and unlock(m) at line 15; thread2() with lock(m) at line 20 and unlock(m) at line 30) feeds the coverage estimator, which outputs estimated target SPs such as {(10,20), (20,10), …} (estimation phase); the thread scheduling controller then measures covered SPs, e.g. {(10,20), …}, on each test run and steers threads toward the uncovered SPs, e.g. {(20,10), …} (testing phase).]
3. Motivation: Concurrent Programs are Error-Prone
• Multi-core processors make concurrent programs popular
Concurrency at Microsoft – An Exploratory Survey [P. Godefroid et al., EC2 2008]
• 60% of survey respondents are writing concurrent programs
• Reasons for concurrency in MS products
• However, correctness of concurrent programs is hard to achieve
– Interactions between threads must be handled carefully
– A large number of thread executions due to non-deterministic thread scheduling
– Testing techniques for sequential programs do not work properly
4. Techniques to Test Concurrent Programs
• Pattern-based bug detection [Eraser, Atomizer, CalFuzzer]
– Finds predefined patterns of suspicious synchronizations
– Limitation: focused on specific bugs
• Systematic testing [CHESS, Fusion]
– Explores all possible thread scheduling cases
– Limitation: limited scalability
• Random testing [ConTest]
– Generates random thread schedules
– Limitation: may not investigate new interleavings
⇒ Our approach: direct thread scheduling for high test coverage
5. Code Coverage for Concurrent Programs
• Test requirements of code coverage for concurrent programs capture different thread interaction cases
• Several metrics
– Synchronization coverage: blocking, blocked, follows, synchronization-pair, etc.
– Statement-based coverage: PSet, all-use, LR-DEF, access-pair, statement-pair, etc.
[Example: shared int data; thread1() locks m at line 11, tests data at line 12, writes data = 1 at line 13, unlocks at line 18; thread2() locks m at line 21, writes data = 0 at line 22, unlocks at line 29. Sync.-Pairs: {(11, 21), (21, 11), …}; Stmt.-Pairs: {(12, 22), (22, 13), …}]
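As a rough illustration of the synchronization-pair metric (this sketch is mine, not the paper's implementation): consecutive acquisitions of the same lock by different threads yield one covered pair.

```python
# Sketch: compute covered synchronization pairs (SPs) from a trace of
# lock-acquire events on one lock. Each event is (thread_id, stmt_line);
# consecutive acquires by *different* threads cover the SP (prev, next).
def covered_sync_pairs(trace):
    pairs = set()
    for (t1, s1), (t2, s2) in zip(trace, trace[1:]):
        if t1 != t2:
            pairs.add((s1, s2))
    return pairs

# Slide 5's example: thread1 locks at line 11, thread2 at line 21.
trace = [(1, 11), (2, 21), (1, 11)]
print(sorted(covered_sync_pairs(trace)))  # [(11, 21), (21, 11)]
```

Consecutive acquires by the same thread cover nothing, which is why a scheduler that forces alternation between threads grows this set faster than random interleaving.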
8. Testing Framework for Concurrent Programs
(1) Estimates SP requirements
(2) Generates test schedules by
– monitoring running thread status and measuring SP coverage
– suspending/resuming threads to cover new coverage requirements
[Framework diagram (as in slide 2): the coverage estimator produces estimated target SPs in the estimation phase; in the testing phase, the thread scheduling controller records covered SPs on each test run and steers threads toward the remaining uncovered SPs.]
9. Thread Scheduling Controller
• Coordinates thread executions to satisfy new SP requirements
• Invokes an operation
(1) before every lock operation, and
(2) after every unlock operation
• Controls thread scheduling by
(1) suspending a thread before a lock operation
(2) selecting one of the suspended threads to resume, using three rules
[Diagram: a thread reaching synchronized(m) (e.g. lines 10–12, guarded by if (t > 0)) invokes the thread scheduler, which decides whether to suspend or resume the current thread based on covered SPs such as {(10,20), …}, uncovered SPs such as {(20,10), …}, and the other threads' status.]
11. Thread Schedule Decision Algorithm (2/3)
• Rule 2: Choose a thread that can cover an uncovered SP at the next decision
[Example: Thread 2 executes lock(m) at line 20 and unlock(m) at line 21; then both Thread 1 (lock at line 10) and Thread 2 (lock at line 22) are ready to acquire m. Covered SPs: (20,10), (20,22), (10,22); uncovered SP: (22,10).]
12. Thread Schedule Decision Algorithm (2/3)
• Rule 2 (continued)
[Example continued: scheduling Thread 2 first (lock(m) at line 22, unlock(m) at line 25) and then Thread 1 (lock(m) at line 10) covers the remaining uncovered SP (22,10). Covered SPs: (20,10), (20,22), (10,22).]
13. Thread Schedule Decision Algorithm (3/3)
• Rule 3: Choose a thread that is unlikely to cover uncovered SPs
[Example: Thread 2 executes lock(m) at line 20 and unlock(m) at line 21; Thread 1 is blocked at line 10 and Thread 2 at line 22. Covered SPs: (20,10), (10,20), (10,22), (22,10), (20,22); uncovered SPs: (10,60), (70,22), (80,22), (10,50), (22,60).
– Schedule Thread 1: since Thread 2 (line 22) remains under control, there is more chance to cover (70,22), (80,22), (22,60).
– Schedule Thread 2: since Thread 1 (line 10) remains under control, there is more chance to cover (10,60), (10,50).]
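Rule 3's "keep the promising threads suspended" idea can be sketched as follows; the scoring and names are my simplification of the slide's intuition, not the tool's code:

```python
# Sketch of Rule 3: among suspended threads, resume the one whose blocked
# statement appears in the *fewest* uncovered SPs, so that threads with
# more chances to cover new SPs stay under the controller's control.
def pick_thread_to_resume(blocked_at, uncovered_sps):
    """blocked_at: {thread_id: stmt_line}; uncovered_sps: set of (s1, s2)."""
    def chances(stmt):
        return sum(1 for sp in uncovered_sps if stmt in sp)
    return min(blocked_at, key=lambda t: chances(blocked_at[t]))

# Slide 13's example: Thread 1 blocked at line 10 (2 uncovered chances),
# Thread 2 at line 22 (3 chances) -> resume Thread 1, keep Thread 2.
uncovered = {(10, 60), (70, 22), (80, 22), (10, 50), (22, 60)}
print(pick_thread_to_resume({1: 10, 2: 22}, uncovered))  # 1
```

In the slide's example this reproduces the "schedule Thread 1" decision, since keeping Thread 2 suspended preserves three future coverage opportunities against Thread 1's two.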
14. Empirical Evaluation
• Implementation [Thread Scheduling Algorithm, TSA]
– Used Soot for estimation phase
– Extended CalFuzzer 1.0 for testing phase
– Built in Java (about 2KLOC)
• Subjects
– 7 Java library benchmarks (e.g. Vector, HashTable, etc.) (< 11 KLOC)
– 3 Java server programs (cache4j, pool, VFS) (< 23 KLOC)
15. Empirical Evaluation
• Compared techniques
– We compared TSA to random testing
– We inserted probes at every read, write, and lock operation
– Each probe introduces a time delay d with probability p
• d: sleep(1ms), sleep(1–10ms), sleep(1–100ms)
• p: 0.1, 0.2, 0.3, 0.4, 0.5
– We used 15 (= 3 × 5) different versions of random testing
• Experiment setup
– Executed each program 500 times for each technique
– Measured accumulated coverage and time cost
– Repeated the experiment 30 times for statistical significance in results
16. Study 1: Effectiveness
• TSA covers more SPs than random testing
– in accumulated SP coverage after 500 executions
[Chart (ArrayList1): accumulated SP coverage vs. test executions for our technique and the RND-sleep(<100ms), RND-sleep(<10ms), and RND-sleep(1ms) averages.]
17. Study 2: Efficiency
• TSA reaches the saturation point faster and at a higher coverage
– A saturation point is computed by r² (coefficient: 0.1, window size: 120 sec.) [Sherman et al., FSE 2009]
[Chart (ArrayList1): SP coverage vs. time (sec) for our technique and the RND-sleep(<10ms), RND-sleep(<100ms), and RND-sleep(1ms) averages, with the saturation point marked.]
18. Study 3: Impact of Estimation-based Heuristics (Rule 3)
• TSA with Rule 3 reaches higher coverage with a faster saturation point
– Executed each program for 30 minutes and computed the saturation points
• > 90% of thread scheduling decisions are made by Rule 3

Program       TSA w/o Rule 3          TSA with Rule 3
              Coverage   time (sec)   Coverage   time (sec)
ArrayList1    177.6      274.4        181.2      184.2
ArrayList2    130.8      246.3        141.4      159.7
HashSet1      151.3      271.5        151.7      172.4
HashSet2      98.0       198.9        120.8      139.3
HashTable1    23.7       120.0        24.0       120.0
HashTable2    539.6      388.8        538.0      165.4
LinkedList1   179.9      278.2        181.2      155.0
LinkedList2   129.9      237.7        141.2      161.2
TreeSet1      151.6      258.4        151.4      191.2
TreeSet2      98.8       237.5        120.5      139.8
cache4j       201.9      205.8        202.2      146.1
pool          T/O        T/O          2950.5     431.1
VFS           246.7      478.2        260.1      493.9
19. Contributions and Future Work
• Contributions
– A thread scheduling technique that quickly achieves high SP coverage for concurrent programs
– Empirical studies showing that the technique is more effective and efficient than random testing at achieving high SP coverage
• Future work
– Study the correlation between coverage achievement and fault finding in testing of multithreaded programs
– Extend the technique to work with other concurrency coverage criteria
Editor's Notes
We have developed a testing framework for concurrent Java programs. As you know, a concurrent program has a large number of different behaviors depending on non-deterministic thread scheduling. To check various concurrent program executions systematically, our technique tries to achieve high test coverage through testing. Just as test cases that achieve high branch coverage for sequential programs can detect many bugs, achieving high test coverage for concurrent programs can detect many concurrency bugs. In other words, similar to sequential program testing, where coverage metrics are widely used for generating effective test inputs, our technique systematically generates test schedules to achieve high synchronization coverage. The two key features of our testing framework are, first, that it explicitly utilizes synchronization coverage for effective testing, and second, that it employs a coverage-directed thread scheduler, which controls thread scheduling to achieve high synchronization coverage based on the estimated target coverage. To our knowledge, ours is the first testing framework that aims to achieve high synchronization coverage by controlling thread scheduling directly.
As you see, synchronization-pair coverage properly represents different thread scheduling cases. Although many code coverage metrics for concurrent programs seem useful, there are only a few techniques that generate tests to achieve high coverage. So, we have developed a testing framework that maximizes coverage achievement by manipulating thread scheduling. To generate high-coverage tests, our technique first estimates, in the estimation phase, which coverage requirements can be achieved in a program. As the synchronization-pair definition shows, one cannot simply assume that every two synchronized blocks in the code can form a coverage requirement. So, our coverage estimator determines the feasible synchronization-pair coverage by checking aliasing and the effects of other synchronizations through dynamic analysis. You can find a detailed description of how the coverage estimator works in our paper. The testing phase uses the estimation result as the target to achieve. In the testing phase, the thread scheduling controller dynamically decides which thread executes at each moment. The thread scheduling controller maintains two coverage data structures. One is Covered SPs, which records the set of coverage requirements already achieved during testing. The other is Uncovered SPs, the set of coverage requirements that are reachable but not yet covered. The thread scheduling controller tries to grow Covered SPs and shrink Uncovered SPs using a greedy algorithm.