A Feedback Arc Set for Spot
Alexandre Lewkowicz
(supervisor: Alexandre Duret-Lutz)
Technical Report no. 1406, July 2014
revision 1dd1ab2
Spot is an extensible model checking library using transition-based generalized Büchi automata (TGBA). It contains
many state-of-the-art algorithms. In this paper, we focus on two algorithms that create automata with more transitions
than necessary. These constructions can be enhanced by computing a feedback arc set (FAS): a set of edges which,
when removed from the graph, leave a directed acyclic graph. Ideally, we want a minimal FAS, but this problem is
NP-hard.
We adapt and improve a heuristic proposed by Eades et al. that approximates a minimal FAS in linear time. We
then show that integrating this heuristic in the complementation of deterministic Büchi automata and in the
conversion of Rabin automata to Büchi automata reduces the size of the output by up to 31% in our experiments.
These results depend greatly on the number of cycles and accepting states in the input automaton.
Keywords
Spot, feedback arc set, acyclic graph, automata, model checking, deterministic TGBA complementation
Laboratoire de Recherche et Développement de l’EPITA
14-16, rue Voltaire – FR-94276 Le Kremlin-Bicêtre CEDEX – France
Tél. +33 1 53 14 59 22 – Fax. +33 1 53 14 59 13
alewkowicz@lrde.epita.fr – http://www.lrde.epita.fr/
Copying this document
Copyright © 2014 LRDE.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free
Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with
the Invariant Sections being just “Copying this document”, no Front-Cover Texts, and no Back-Cover
Texts.
A copy of the license is provided in the file COPYING.DOC.
Contents
1 Introduction
2 Solving the feedback arc set
   2.1 Notation
   2.2 The feedback arc set problem
   2.3 The GR heuristic
   2.4 Generalization: from graph to automata
   2.5 Implementation considerations
   2.6 Discussion
3 Applying a FAS to DTGBACOMP
   3.1 Preliminaries
   3.2 Improving complementation by using a FAS
   3.3 Implementation consideration
   3.4 Evaluation
   3.5 Discussion
4 Conclusion
5 Bibliography
Chapter 1
Introduction
This report presents work done in Spot (Duret-Lutz and Poitrenaud, 2004), an extensible model checking
library using transition-based generalized Büchi automata.
Two algorithms, the deterministic TGBA complementation and the translation from Rabin automata to
Büchi automata (RA to BA), create automata with a similar construction pattern. The newly created
automaton is composed of slightly modified sub-clones of the original.
A main clone, connected to the sub-clones, has the same transitions and states as the original
automaton except that the accepting states/transitions are no longer accepting. These algorithms require
that each cycle in the main clone gets linked to each sub-clone. There are many ways to achieve this
connection, and the objective is to find a method that only links the required cycles and uses the right
transitions to do so.
Because these automata handle ω-words (words of infinite length), an input can iterate through a cycle of
the automaton infinitely many times, and can choose at any moment to jump non-deterministically to a
sub-clone. Since the input travels infinitely often through every state of a given cycle, it does not matter
which state of that cycle in the main clone allows the non-deterministic jump to the corresponding cycle
of a sub-clone. Thanks to this property, picking one transition per cycle is sufficient for connecting the
main clone to each sub-clone.
Currently, the deterministic TGBA complementation uses a depth-first search, and each back edge
found is used to link the main clone to each sub-clone. This technique computes more transitions
than necessary, as any cycle of the main clone might have more than one transition allowing it to jump
non-deterministically to the corresponding loop in a sub-clone. The solution proposed in this paper to
minimize the number of transitions is to compute a feedback arc set (FAS): a set of arcs whose removal
leaves the graph free of directed cycles.
This work is mostly based on the heuristic proposed by Eades et al. (1993), which computes a decent
FAS in Θ(m), where m is the number of arcs. By choosing a linear-time solution, the altered algorithms
maintain their complexities. Moreover, using a sufficiently good FAS reduces the number of transitions
from the main clone to the sub-clones, improving the resulting automata.
This report is organized as follows. In Chapter 2, we present the heuristic of Eades et al. (1993) for the
FAS problem, followed by improvements and some implementation suggestions. In Chapter 3, we show
how the computed FAS can be applied to the complementation of deterministic TGBA.
Chapter 2
Solving the feedback arc set
In this chapter we start by giving a formal definition of the feedback arc set problem followed by an
implementation of a heuristic. We will then show how to improve this heuristic.
2.1 Notation
• s represents a vertex of G
• δ+(s) is the out-degree of s
• δ−(s) is the in-degree of s
• δ(s) = δ+(s) − δ−(s)
• Sinks are vertices such that δ+(s) = 0
• Sources are vertices such that δ−(s) = 0.
2.2 The feedback arc set problem
Given a directed graph G = (V, A) where V represents the vertices and A ⊆ V × V the arcs, the feedback
arc set problem consists of finding a set of arcs A′ ⊆ A such that the directed graph G′ = (V, A ∖ A′) is
acyclic (Demetrescu and Finocchi, 2003).
The FAS problem can be seen as finding an order over the states of V. This order is used to redraw the
graph horizontally, and any leftward arc is then considered a member of the FAS. For instance, given the
graph on Figure 2.1, the ordering of its states shown on Figure 2.2 determines a valid FAS, represented
by the leftward arcs drawn in red.
More formally, given a state ordering noted s1, s2, . . . , sn and a function ρ : V → N where ρ(s)
returns the position of s in the list of ordered states, the feedback arc set T is defined as:
T = {(s, d) ∈ A | ρ(s) > ρ(d)}
Many orderings can be found. For instance, the first ordering shown on Figure 2.3 simply uses the
number of each node as an ordering criterion, whilst the second one uses the prefix order of a depth-first
search to order the states. Looking at the arc 5 → 6, one can see that it is only present in the FAS when
using the second ordering. Clearly both results could be improved, which shows that finding a good
ordering is not trivial. However, Younger (1963) proved that there exists an ordering that yields a minimal
feedback arc set. Such an ordering is called an optimum ordering.
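Once an ordering is fixed, extracting the FAS is a direct transcription of the definition of T. A minimal Python sketch (the pair-list graph representation is an illustrative assumption, not Spot's):

```python
def feedback_arc_set(arcs, ordering):
    """Leftward arcs of `arcs` under `ordering`:
    T = {(s, d) in A | rho(s) > rho(d)}."""
    rho = {s: i for i, s in enumerate(ordering)}   # position of each state
    return {(s, d) for (s, d) in arcs if rho[s] > rho[d]}

# A three-state cycle: any ordering leaves exactly one leftward arc.
print(feedback_arc_set([(1, 2), (2, 3), (3, 1)], [1, 2, 3]))  # {(3, 1)}
```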
Figure 2.1 – An example graph
Figure 2.2 – A state ordering of the graph on Figure 2.1
Figure 2.3 – Two state orderings: (a) basic order; (b) prefix order of DFS
Figure 2.4 – GR's state ordering
The interest in FAS dates back to as early as 1957 (Unger, 1957), and it was proved in 1979 that the
computation of a minimal FAS is an NP-hard problem (Michael and Johnson, 1979). However, many
heuristics for the FAS problem exist, allowing a good FAS to be computed in polynomial time. For
instance, the solution proposed by Younger (1963) is computed in Θ(n⁴).
2.3 The GR heuristic
Eades et al. (1993) developed a greedy algorithm called GR, which is a heuristic for finding a FAS in
Θ(m), where m stands for the number of arcs in the graph. GR requires a simple connected directed
graph G = (V, A) such that:
• n = |V|
• (si → sj) ∈ A ⇒ si ≠ sj, i.e. no self-loops
• (si → sj) ∈ A ⇒ si → sj is unique, i.e. no multi-arcs
• ∀s ∈ V, δ+(s) < n and δ−(s) < n
• ∀s ∈ V, −(n − 1) < δ(s) < n − 1
When all nodes on the left are sources and all nodes on the right are sinks, the number of leftward arcs
is null. Having one or more leftward arcs means that a node on the right has δ+(s) > 0, and a node on
the left has δ−(s) > 0. Because of this, we want the nodes with the highest out-degree on the left
and the nodes with the highest in-degree on the right. To achieve this, the difference between the out
and in degrees, δ(s), is computed. This value is then used to order the remaining nodes when there are
no sinks and no sources. Figure 2.4 shows a possible state ordering that puts nodes with a high δ(s) on
the left and nodes with a low δ(s) on the right. If nodes i and k were swapped, then there would be a
node with a high out-degree on the right and a node with a high in-degree on the left, and it is exactly
that kind of situation that generates leftward arcs. Therefore the resulting FAS would most likely be
bigger.
Algorithm 1 defines a state ordering considering sources, sinks, and the δ-value of each node. At each
iteration, GR appends each sink to the front of the right part of the ordering and each source to the end of
the left part, removing them from the graph. Then GR searches for the node with the highest δ-value,
places it next in the left part, and removes it from the graph. This is repeated until no nodes are left in
the graph.
The function hasSink(G) returns True iff at least one vertex of G has its out-degree equal to zero.
The function hasSource(G) returns True iff at least one vertex of G has its in-degree equal to zero.
The notation G − u represents the removal of the vertex u from G along with every arc incident to u.
The function getSource(G) returns one vertex of G that is a source.
The function getSink(G) returns one vertex of G that is a sink.
The function arg max_{v∈G} δ(v) returns the vertex with the highest δ-value.
To achieve an algorithm in Θ(m), GR partitions the vertex set of G into sources, sinks, and δ-classes as
follows:
Vd = {u ∈ V | δ(u) = d, δ+(u) > 0, δ−(u) > 0}, for −n + 3 ≤ d ≤ n − 3
Vn−2 = {u ∈ V | δ−(u) = 0, δ+(u) > 0}
V−n+2 = {u ∈ V | δ+(u) = 0}
Algorithm 1 GR feedback arc set
function FAS(Graph G)
    s1 ← ∅
    s2 ← ∅
    while G ≠ ∅ do
        while hasSink(G) do
            u ← getSink(G)
            s2 ← u s2
            G ← G − u
        end while
        while hasSource(G) do
            u ← getSource(G)
            s1 ← s1 u
            G ← G − u
        end while
        u ← arg max_{v∈G} δ(v)
        s1 ← s1 u
        G ← G − u
    end while
    return s1 s2
end function
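Algorithm 1 can be transcribed almost line for line. The sketch below favors readability over the Θ(m) bucket machinery described next, so its worst case is quadratic; the set-based adjacency representation is an illustrative assumption:

```python
def gr_ordering(vertices, arcs):
    """GR ordering (Algorithm 1): sinks are prepended to s2,
    sources and max-delta vertices are appended to s1."""
    succ = {v: set() for v in vertices}
    pred = {v: set() for v in vertices}
    for s, d in arcs:
        if s != d:                      # self-loops are ignored (Section 2.4)
            succ[s].add(d)
            pred[d].add(s)

    def remove(u):                      # G <- G - u
        for d in succ[u]:
            pred[d].discard(u)
        for s in pred[u]:
            succ[s].discard(u)
        del succ[u], pred[u]

    s1, s2 = [], []
    while succ:                         # while G is not empty
        while any(not succ[v] for v in succ):          # sinks
            u = next(v for v in succ if not succ[v])
            s2.insert(0, u)             # s2 <- u s2
            remove(u)
        while any(not pred[v] for v in succ):          # sources
            u = next(v for v in succ if not pred[v])
            s1.append(u)                # s1 <- s1 u
            remove(u)
        if succ:                        # u <- arg max delta(v)
            u = max(succ, key=lambda v: len(succ[v]) - len(pred[v]))
            s1.append(u)
            remove(u)
    return s1 + s2                      # the ordering s1 s2

# One cycle 1->2->3->1 plus a tail 3->4: only the arc 3->1 is leftward.
arcs = [(1, 2), (2, 3), (3, 1), (3, 4)]
order = gr_ordering([1, 2, 3, 4], arcs)
rho = {v: i for i, v in enumerate(order)}
print({(s, d) for (s, d) in arcs if rho[s] > rho[d]})  # {(3, 1)}
```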
There are at most 2n − 3 δ-classes thanks to the graph definition (required by GR) given at the beginning
of this section. Any vertex u ∈ V falls into exactly one of the 2n − 3 δ-classes. The vertices in
each δ-class are connected together by a doubly linked list, which makes it easy to remove vertices
from their corresponding δ-class. This construction can be done in Θ(m) time by computing the in and
out degrees of each vertex and, at the same time, inserting that vertex into its corresponding δ-class.
Thanks to these δ-classes, selecting the sources, the sinks, and a vertex with the highest δ-value is done
in Θ(1). Whenever a vertex is removed from the graph, each of its predecessors and successors is updated
in Θ(1) by computing its new δ-value and inserting it into another δ-class. The number of updates is
therefore twice the number of arcs, as each arc is treated once as a successor and once as a predecessor.
The complexity of GR is indeed Θ(m).
2.4 Generalization: from graph to automata
The constraints on the type of graph GR works on allow it to know ahead of time the number of
δ-classes: 2n − 3. This information is needed for implementing GR in an efficient way. As it
was shown in Section 2.2, every vertex of G is sorted, according to its δ-value, into a corresponding
bin. This makes it possible to find the next vertex with the greatest δ-value in Θ(1).
To generalize GR to automata, the definition of G needs modifications. Indeed, self-loops and multiple
transitions from one state to another need to be handled. For this, the prerequisites stated in (Eades et al.,
1993) are relaxed. By allowing a larger range of graphs, the number of δ-classes is no longer bounded
by the number of vertices: a vertex can now carry arbitrarily many arcs.
It is therefore necessary to define a new method that computes the number of δ-classes required for
implementing GR. This quantity, noted δc, can be defined as follows:
δc = max{δ+(s) | s ∈ V} + max{δ−(s) | s ∈ V} + 1
Now, the number of δ-classes is δc instead of the original 2n − 3.
Self-loops are ignored since they have no effect on the δ-value of a vertex: a self-loop adds one to both
the out-degree and the in-degree of the vertex.
2.5 Implementation considerations
Let bidigraph denote a bidirectional and directed graph with the following properties:
• ∀s ∈ V, δ+(s) and δ−(s) are computed in Θ(1)
• ∀s ∈ V, Succ(s) represents the list of successors of s
• ∀s ∈ V, Pred(s) represents the list of predecessors of s
• |δ-classes| = δc = max{δ+(s) | s ∈ V} + max{δ−(s) | s ∈ V} + 1
• δ-classes are indexed between 0 and δc − 1
• There is a vector of pointers to vertices; each pointer is the head of a doubly linked list of vertices
with the same δ-value, and each index of the vector corresponds to a different δ-class.
Figure 2.5 represents a possible memory layout that respects the definition of a bidigraph. The corre-
sponding graph is found on Figure 2.1: the vertex si of Figure 2.5 corresponds to the vertex labeled i
on Figure 2.1, and the notation &si represents the address of the vertex si. In this example, five different
δ-classes can be found; four of them contain one vertex, the last one two. The blue arrows represent the
list of successors of each vertex, whilst the red arrows represent the list of predecessors. Each element
in the list of deltas is a pointer to a vertex, and each of these vertices is linked to the next element in its
δ-list (if any), as shown by the green arrows. This implementation enables the computation of a vertex
with the highest δ-value in Θ(1). Moreover, when removing a vertex s, each predecessor/successor has
its out/in degree decremented and is then inserted as the head of its new δ-class list in Θ(1). Two
counters, for the in and out degrees of each node, are used to compute the δ-class the vertex should
be in. The removal of s from its δ-class list is done in Θ(1). Finally, to avoid losing time reordering
every vector, a bit in the structure containing the vertex s can be set to identify it as removed.
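The δ-class machinery can be approximated in a few lines; here Python sets stand in for the doubly linked lists of the text (both support constant-time insertion and removal of a known element), and the scan in `pop_max` stands in for the moving head pointer. This is an illustrative sketch, not Spot's implementation:

```python
class DeltaClasses:
    """delta-classes as buckets indexed by a shifted delta-value."""
    def __init__(self, max_out, max_in):
        self.offset = max_in            # shifts delta into [0, delta_c)
        self.buckets = [set() for _ in range(max_out + max_in + 1)]

    def insert(self, v, delta):
        self.buckets[delta + self.offset].add(v)

    def remove(self, v, delta):
        self.buckets[delta + self.offset].discard(v)

    def pop_max(self):
        # plain scan from the highest bucket; the text instead keeps
        # a moving pointer so that successive queries stay cheap
        for bucket in reversed(self.buckets):
            if bucket:
                return bucket.pop()
        return None

dc = DeltaClasses(max_out=2, max_in=2)  # delta_c = 5 classes
dc.insert("s1", 2)
dc.insert("s2", -1)
print(dc.pop_max())  # s1
```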
Eades et al. (1993) state that the removal of a vertex followed by the update of the δ-classes is done in
Θ(1), and that computing the next vertex with the greatest δ-value is also done in Θ(1). If this is not
respected, GR no longer runs in linear time.
Being able to remove a vertex and update all its successors and predecessors in Θ(δ+(s) + δ−(s))
proves that GR runs in Θ(m) where m = |A|. Indeed, since a vertex is chosen in Θ(1), is removed in
Θ(δ+(s) + δ−(s)), and each vertex is treated only once, GR's complexity is bounded by:
∑_{i=1}^{n} (δ+(si) + δ−(si)) = 2|A|.
2.6 Discussion
The result of GR applied to the graph of Figure 2.1 is shown on Figure 2.6, where the red arcs represent
the computed FAS. However, when the arc s3 → s1 is removed from the FAS computed by GR, the FAS
becomes minimal. This example shows one of the limitations of GR, which can be mitigated by
computing the strongly connected components (SCC) of G.
Figure 2.5 – Memory layout of a bidigraph
Figure 2.6 – FAS computed by GR
Figure 2.7 – Peter Eades' graph: (a) possible FAS using GR; (b) FASH's FAS

A procedure in Spot returns a sequence (G1, G2, . . . , Gk) of the strongly connected components of
a directed graph G. The topological order given by that sequence guarantees that there are no leftward
arcs between the components (that is, no arcs from Gj to Gi for i < j). The SCCs can be used in three
different manners. The first consists in computing an order over all the states using GR, then grouping
the states by SCC and using the topological order to make sure no arc goes from a group on the
right-hand side to a group on the left-hand side. The second method consists in computing a different
FAS for each SCC and joining the results, once again using the topological order. The last technique
computes the state ordering first; then, when a query asks whether a transition is part of the FAS, one
checks whether the transition lies inside an SCC and, if so, whether it is a leftward arc with respect to
the state ordering. If the arc is not part of any SCC, it cannot be part of the FAS.
This optimisation allows the computation of a minimal FAS for the graph shown on Figure 2.1, and
since computing the SCCs of G is done in linear time, the overall complexity of GR is not affected.
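The third technique above lends itself to a compact sketch: an arc can only be in the FAS if both endpoints share an SCC and the arc is leftward in the ordering. The `scc_of` mapping is assumed precomputed (e.g. by Spot's SCC procedure or Tarjan's algorithm):

```python
def fas_with_scc(arcs, ordering, scc_of):
    """An arc is in the FAS only if it stays inside one SCC
    and is leftward with respect to the state ordering."""
    rho = {s: i for i, s in enumerate(ordering)}
    return {(s, d) for (s, d) in arcs
            if scc_of[s] == scc_of[d] and rho[s] > rho[d]}

# Two SCCs {1, 2} and {3}: the arc 2->3 crosses components, so it is
# excluded even though it is leftward in this (deliberately bad) ordering.
arcs = [(1, 2), (2, 1), (2, 3)]
print(fas_with_scc(arcs, [3, 1, 2], {1: 0, 2: 0, 3: 1}))  # {(2, 1)}
```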
Eades (1995) also uses this technique in the algorithm FASH, but goes even further. He noticed
that GR could be improved during the choice of the next δ-vertex. Indeed, when several vertices
have the same δ-value, GR picks the first one it finds. Doing so can increase the size of the resulting
FAS, since it is when a δ-vertex is chosen that one or several arcs are added to the FAS. When several
vertices with the same δ-value are candidates for removal, FASH considers each one of them separately.
The objective is to find an arc whose removal will generate a sink or a source. On Figure 2.7, the arcs
4 → 2 and 2 → 1 are candidates for removal. However, when 2 → 1 is removed, the node 2 becomes a
source; therefore FASH chooses that arc for removal. This explains why FASH is able to compute a
minimal FAS on the graph shown on Figure 2.7.
With GR, when picking the first δ-vertex, either vertex 2 or vertex 1 can be chosen. If vertex 1 is
chosen, the FAS will contain two arcs, whilst if vertex 2 is chosen first, only one arc will be in the FAS.
FASH will always choose vertex 2 first. However, because FASH runs in Θ(mn), more studies are
needed to see if the extra cost is worth it.
Chapter 3
Applying a FAS to DTGBACOMP
This chapter defines what Büchi automata are, shows how to complement a deterministic transition-based
generalized Büchi automaton (TGBA), and shows how a FAS can be used to improve that complemen-
tation. The chapter ends by presenting some benchmarks.
3.1 Preliminaries
Let AP designate the finite set of atomic propositions and Σ = 2^AP denote the set of their valuations.
For instance, if AP = {a, b}, then Σ = 2^AP = {{a, b}, {a}, {b}, ∅}. The following automata are fed
infinite sequences of letters of Σ, i.e. elements of Σ^ω.
Definition 1 (TGBA) A transition-based generalized Büchi automaton over the alphabet Σ = 2^AP is a
tuple B = ⟨Q, Q0, δ, F⟩ where
– Q is a finite set of states,
– Q0 ⊆ Q is a set of initial states,
– δ ⊆ Q × Σ × Q is a transition relation, where each element (q, l, q′) represents a transition
from state q to state q′ labeled by the valuation l,
– F = {F1, F2, . . . , Fk} is a set of acceptance sets of transitions, where each Fi ⊆ δ.
B accepts an execution l0l1 . . . ∈ Σ^ω if there exists an infinite path (q0, l0, q1)(q1, l1, q2) . . . ∈ δ^ω
that visits each acceptance set infinitely often:
q0 ∈ Q0 and ∀f ∈ F, ∀i ∈ N, ∃j ≥ i, (qj, lj, qj+1) ∈ f.
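For an ultimately periodic ("lasso") run, this acceptance condition reduces to a finite check: every acceptance set must intersect the part of the run that repeats forever. A sketch with transitions as plain tuples (an illustrative encoding, not Spot's API):

```python
def lasso_accepted(prefix, cycle, acceptance_sets):
    """A lasso run repeats `cycle` forever after `prefix`; both are
    lists of transitions (q, l, q2).  The run visits every acceptance
    set infinitely often iff each set meets the repeated part; the
    finite prefix is irrelevant to acceptance."""
    return all(any(t in f for t in cycle) for f in acceptance_sets)

# Two singleton acceptance sets, both on the repeated cycle:
f1 = {("q0", "a", "q1")}
f2 = {("q1", "b", "q0")}
cycle = [("q0", "a", "q1"), ("q1", "b", "q0")]
print(lasso_accepted([], cycle, [f1, f2]))                 # True
print(lasso_accepted([], [("q1", "b", "q0")], [f1, f2]))   # False
```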
Figure 3.1 – DTGBA
On Figure 3.1, the transitions marked with colored bullets belong to the different acceptance sets
(noted {F1, F2}); in this case F contains two sets of one element each. An input sequence is
accepted if the green and red bullets are visited infinitely often. The arrow pointing to state
1 means that state 1 is our initial state q0. The labels over the transitions represent valuations of the
atomic propositions; a label stands for every valuation that satisfies it.
Definition 2 (DTGBA) A deterministic transition-based generalized Büchi automaton over the alphabet
Σ = 2^AP is a tuple B = ⟨Q, q0, δ, F⟩ where
– Q is a finite set of states,
– q0 ∈ Q is the unique initial state,
– δ ⊆ Q × Σ × Q is a transition relation, where for each state q ∈ Q and for each symbol
a ∈ Σ, δ(q, a) is unique but not necessarily defined,
– F = {F1, F2, . . . , Fk} is a set of acceptance sets of transitions, where each Fi ⊆ δ.
B accepts an execution l0l1 . . . ∈ Σ^ω if there exists an infinite path (q0, l0, q1)(q1, l1, q2) . . . ∈ δ^ω
that visits each acceptance set infinitely often:
∀f ∈ F, ∀i ∈ N, ∃j ≥ i, (qj, lj, qj+1) ∈ f.
On the other hand, a word is rejected iff one of the acceptance sets is visited finitely often, or there is
no path that reads the word.
Definition 3 (DTGBACOMP) The complementation of a DTGBA B = ⟨Q, q0, δ, F⟩ is a tuple
B′ = ⟨Q′, q0, δ′, F′⟩ created from the original B, where
– α = |F|,
– q0 ∈ Q′ is the same initial state as the q0 of B,
– Q′ = {qs} ∪ Q ∪ (Q × ⟦1, α⟧), where qs denotes a sink state (so Q′ contains the original states
plus α clones Q1, . . . , Qα of Q, all of the same size),
– δs = {(q, l, qs) | q ∈ Q, l ∈ Σ, δ(q, l) undefined} ∪ {(qs, l, qs) | l ∈ Σ} is the set of transitions
associated with the sink state qs,
– ∀i ∈ ⟦1, α⟧, δi = {(qi, l, q′i) | (q, l, q′) ∈ δ ∪ δs} ∪ {(q, l, q′i) | (q, l, q′) ∈ δ ∪ δs}, where
qi ∈ Q × {i} and q′i ∈ (Q × {i}) ∪ {qs} denote the copies of q and q′ in the i-th clone,
– δ′ = δ ∪ δs ∪ ⋃i∈⟦1,α⟧ (δi ∖ Fi),
– F′ = ⋃i∈⟦1,α⟧ δi ∪ δs.
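The sink part δs simply completes the deterministic transition function: every undefined δ(q, l) is redirected to qs, which then loops on every letter. A sketch with a dictionary-based δ (an illustrative encoding, not Spot's):

```python
def sink_transitions(states, alphabet, delta, sink="qs"):
    """Build delta_s: route every undefined delta(q, l) to the sink,
    and make the sink loop on every letter."""
    delta_s = {(q, l): sink
               for q in states for l in alphabet
               if (q, l) not in delta}
    delta_s.update({(sink, l): sink for l in alphabet})
    return delta_s

# The DTGBA of Figure 3.1 is not reproduced; a two-state toy instead:
delta = {("1", "a"): "2", ("2", "b"): "1"}
for key, dst in sorted(sink_transitions(["1", "2"], ["a", "b"], delta).items()):
    print(key, "->", dst)
```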
Figure 3.2 gives a general idea of how the automaton looks after complementation. G represents the
original automaton, in which each accepting transition labeled by •, • or • has become non-accepting.
The black bullet • represents another acceptance set. The notation Acc[•] denotes the set of transitions
containing a black bullet.
Each acceptance set is associated with a sub-clone, noted G ∖ Acc[•], in which every transition marked
with • is removed; the remaining transitions become accepting. Note that if a transition carries one or
several accepting marks, each sub-clone ignoring that specific acceptance set does not need to duplicate
that transition. Once the main clone and the sub-clones are created, G gets connected to the sub-clones.
A naive way of doing this is to take every transition of the main clone and duplicate it as many times
as there are sub-clones. These newly created transitions can become accepting if desired. A word will
Figure 3.2 – Main clone connected to sub-clones
Figure 3.3 – Illustration of Definition 3: (a) initial DTGBA; (b) application of DTGBACOMP
only be accepted if it visits a cycle and visits each set of the acceptance condition an infinite number of
times; however, the non-deterministic transitions from the main clone to a sub-clone do not generate any
cycles.
Figure 3.3 illustrates Definition 3. In this case F contains two sets of one element, while the
complemented version contains one set of twelve elements. On this example, the original DTGBA accepts
infinite words that eventually always assert ¯a and ¯b, or such that when ¯a¯b is seen, the proposition after
the next one cannot be ¯ab. The complemented automaton recognizes any word that eventually no longer
visits the green bullet or the red bullet, or that reads ¯ab after passing through a certain transition. Of
course a simpler complemented automaton could recognize the same language; e.g. state 1b can
be removed, which leads us to the improvement section.
3.2 Improving complementation by using a FAS
Definition 3, given in Section 3.1, is a generalization of the algorithm studied in Kurshan (1987),
which complements a deterministic transition-based Büchi automaton (TBA); the difference is that a
TBA has only one acceptance set, meaning that the bullets drawn on the automata are all the same color.
The complementation's definition can be refined to create a more condensed automaton. Indeed, creating
a transition 2 → 1b on the automaton of Figure 3.3 would lead to a useless state. Since in the
complemented automaton it is desirable to stay in a cycle of one of the sub-clones, computing a FAS for
each sub-clone helps find the useful transitions, those that lead from the main clone towards an accepting
cycle of a sub-clone. Ideally, by using a FAS, only one transition per cycle will be added.
3.3 Implementation consideration
The only focus in this paper is to diminish the number of transitions between the main clone and the
sub-clones.
G ∖ Acc[•] and G have the same global form, as they have the same number of states and share the same
transitions except for those marked with •. Therefore, if a transition labeled with • generates a cycle
in G, G ∖ Acc[•] will not have that cycle, since that transition is ignored by construction. However,
when constructing the complemented automaton, the sub-clones have not been fully created yet, so a
FAS can only be computed on the original automaton. Therefore, to simulate the computation of a FAS
in each sub-clone, a mask of the corresponding acceptance set is used to hide those accepting transitions
in the original automaton, allowing us to compute the desired FAS.
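The masking described above can be sketched as a simple filter in front of any FAS routine; the routine itself and the tuple encoding of transitions are assumptions for illustration:

```python
def fas_for_subclone(arcs, acc_i, compute_fas):
    """Simulate a FAS computation inside sub-clone G minus Acc[i]:
    hide the transitions of acceptance set i, then run the ordinary
    FAS routine on what remains of the original automaton."""
    return compute_fas([a for a in arcs if a not in acc_i])

arcs = [(1, 2), (2, 3), (3, 1), (2, 1)]
acc = {(3, 1)}          # transitions of acceptance set i (assumption)
# A trivial stand-in FAS routine: leftward arcs for the ordering 1, 2, 3.
leftward = lambda a: {(s, d) for (s, d) in a if s > d}
print(fas_for_subclone(arcs, acc, leftward))  # {(2, 1)}
```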
3.4 Evaluation
For the tests, 744 linear temporal logic (LTL) formulas are used to produce DTGBA, using a translation
algorithm from LTL to DTGBA implemented in Spot. These formulas are all of one of the following
forms: A ∧ GF(p ⇐⇒ q) or B ∧ GF(p ⇐⇒ Xq), where G stands for globally, F stands for in the
future, and A and B are LTL subformulas. Most of the formulas A and B are formulas generated by
Spot's randltl binary; the rest come from Spot's benchmarking formulas. These forms of LTL formulas
were chosen to create as many cycles as possible. The results shown on Figure 3.4 are based on the
number of transitions created by DTGBACOMP. For a given automaton, a complementation using the
FAS and a complementation using the back edges as a FAS are both run; the state ordering underlying
the back edges corresponds to the prefix order of a DFS presented in Chapter 2. The numbers of
transitions created by the two methods are used to produce a percentage representing the
Figure 3.4 – Results on automata from 744 LTL formulas (horizontal axis: percentage improvement on the number of transitions; vertical axis: number of automata)
improvement. The range varies between 0% and 31% and can be read on the horizontal axis of Figure
3.4; the number of automata with the same amount of improvement is given by the vertical axis.
For instance, the sixth column shows that about 140 automata have 5% fewer transitions than before.
One notable aspect is that the newly created automata are always smaller than before, or at worst
the same size. The reason there are so many automata with a 0% improvement is that many of the
generated automata were either very small or had almost no cycles.
After adjusting GR and taking into consideration the new definition for the complementation of a
TGBA, the computation of a FAS for DTGBACOMP clearly improves the output automaton by di-
minishing its number of transitions. The average improvement is around 5% to 6%, and for one test in
particular an improvement of 31% can be observed. Its corresponding formula is
Ga ∨ Gc ∨ (G(a ∧ GFb) ∧ G(c ∨ GF¬b)) ∧ GF(p ⇐⇒ Xq). The number of transitions went from 2190
to 1499.
3.5 Discussion
In Spot, transitions are represented in two ways. A transition is labeled by a Boolean formula over the
atomic propositions and may encode several edges, one per satisfying valuation. For example, when the
atomic propositions are {a, b}, a transition labeled by a encodes two edges, {ab, a¯b}. In a cycle
composed of three states, each transition might encode a different number of edges even though the
number of transitions is the same. By considering the edges when computing a FAS, the transition
encoding the least number of edges will be chosen.
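Counting the edges encoded by a transition amounts to counting the satisfying valuations of its label. Spot stores labels as BDDs; the predicate encoding below is an illustrative stand-in:

```python
from itertools import product

def edge_count(label, ap):
    """Number of valuations of the atomic propositions `ap`
    that satisfy `label`, a predicate over one valuation."""
    return sum(1 for bits in product([False, True], repeat=len(ap))
               if label(dict(zip(ap, bits))))

ap = ["a", "b"]
print(edge_count(lambda v: v["a"], ap))             # 2: the edges ab and a!b
print(edge_count(lambda v: v["a"] and v["b"], ap))  # 1: the edge ab
```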
However, considering edges rather than transitions raises a new issue. Figure 3.5 displays the two
different FAS that can be computed: the FAS with the red arc is computed considering the transitions,
whilst the FAS with green arcs is computed considering the edges. There is room for discussion in
deciding which method is best. The resulting automata after applying DTGBACOMP are displayed on
Figure 3.6.
When using transitions, the state q0 can immediately, non-deterministically, jump to an accepting sub-
clone, and any valuation allows this. However, the result obtained by considering the edges creates
more non-deterministic states. It is clearly non-trivial to determine which automaton is best.
One might consider determinism to be the main criterion; however, it is challenging to determine an
Figure 3.5 – Transitions vs edges: (a) FAS on transitions; (b) FAS on edges
Figure 3.6 – DTGBACOMP results of Figure 3.5: (a) complementing considering transitions; (b) complementing considering edges
accurate method for defining how deterministic an automaton is. Nonetheless, if such a method is found,
more studies can be done to help determine which technique is better. If the results vary depending on
the automaton, then a heuristic should be implemented to decide when to consider edges and when not to.
Chapter 4
Conclusion
Algorithm GR is a very fast heuristic for finding a feedback arc set. It tries to find a good ordering
of the vertices by comparing the in and out degrees of each vertex: vertices with the highest out
degree are chosen first, whilst vertices with a high in degree are chosen last. The transitions from a
less recently chosen vertex to a more recently chosen vertex form the feedback arc set. After adjusting
GR to take the strongly connected components into account, an even better FAS is computed without
changing the complexity of the algorithm.
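That ordering strategy can be sketched as follows. This is a simplified quadratic version for readability, not Spot's implementation: the actual linear-time algorithm maintains δ-classes in doubly linked lists to select sinks, sources, and the maximum-δ vertex in constant time.

```python
def gr_order(vertices, arcs):
    """Order vertices following the GR heuristic of Eades et al. (1993).

    Sinks are sent to the right end of the ordering, sources to the left
    end; otherwise the vertex maximizing
    delta(v) = outdeg(v) - indeg(v) goes to the left.
    """
    left, right = [], []
    remaining = set(vertices)
    live = {(s, d) for (s, d) in arcs if s != d}  # ignore self-loops

    def outdeg(v):
        return sum(1 for (s, _) in live if s == v)

    def indeg(v):
        return sum(1 for (_, d) in live if d == v)

    def remove(v):
        nonlocal live
        remaining.discard(v)
        live = {(s, d) for (s, d) in live if s != v and d != v}

    while remaining:
        sinks = [v for v in remaining if outdeg(v) == 0]
        if sinks:
            right.insert(0, sinks[0])
            remove(sinks[0])
            continue
        sources = [v for v in remaining if indeg(v) == 0]
        if sources:
            left.append(sources[0])
            remove(sources[0])
            continue
        v = max(remaining, key=lambda u: outdeg(u) - indeg(u))
        left.append(v)
        remove(v)
    return left + right

def feedback_arc_set(vertices, arcs):
    """The arcs pointing leftward in the GR ordering form the FAS."""
    pos = {v: i for i, v in enumerate(gr_order(vertices, arcs))}
    return {(s, d) for (s, d) in arcs if pos[s] > pos[d]}

# Cycles 1->2->3->1 and 1->3->1; cutting (3, 1) breaks both.
feedback_arc_set([1, 2, 3], [(1, 2), (2, 3), (3, 1), (1, 3)])  # {(3, 1)}
```

The SCC adjustment mentioned above amounts to running this ordering within each strongly connected component, since arcs between distinct components can never lie on a cycle.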
There are, however, other heuristics that compute better results. For instance, algorithm FASH spends
time choosing the next δ-vertex when there is more than one candidate: by examining each candidate,
FASH searches for the one whose removal will generate a sink or a source. This allows FASH to handle
more complex graphs. However, its execution is slower than GR's as it requires θ(mn) time.
One must find the best compromise between the speed of the heuristic and the quality of the FAS it
computes. To avoid slowing down the algorithm that computes the complementation of an automaton, a FAS
computed in linear time is chosen. Even if GR does not always find a minimal FAS, the final result is
still quite satisfying. It would be interesting to measure the differences between our optimized GR
algorithm and FASH.
For the complementation of a DTGBA, it was seen that using a FAS made it possible to connect the cycles
of the main clone to its sub-clones. Moreover, by only considering the cycles in the sub-clones, it was
shown that even fewer transitions were necessary to connect the clones.
Thanks to these improvements, some automata have up to 31% fewer transitions than before. Moreover, the
computed automaton is always more deterministic than before. One reason is that the original method
used a DFS ordering, which tends to find a poor FAS. The other reason is that cycles were only searched
for in the main clone, which meant that cycles from the main clone would sometimes be connected to a
sink in a sub-clone.
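The weakness of a DFS ordering can be seen on a tiny example (an illustrative sketch, not the code used in Spot): taking every DFS back edge yields two arcs on the graph below, whereas the single arc (1, 2), shared by both cycles, already forms a valid FAS.

```python
def dfs_back_edges(vertices, succ):
    """Collect the back edges of a depth-first search.

    Back edges always form a valid FAS, but nothing forces a DFS to
    cut several cycles with a single shared arc.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in vertices}
    back = []

    def visit(u):
        color[u] = GRAY
        for w in succ.get(u, []):
            if color[w] == GRAY:          # arc back into an ancestor
                back.append((u, w))
            elif color[w] == WHITE:
                visit(w)
        color[u] = BLACK

    for v in vertices:
        if color[v] == WHITE:
            visit(v)
    return back

# Cycles 1->2->3->1 and 1->2->1 share the arc (1, 2).
succ = {1: [2], 2: [3, 1], 3: [1]}
dfs_back_edges([1, 2, 3], succ)   # [(3, 1), (2, 1)]: two arcs in the FAS
# Removing {(1, 2)} alone already makes the graph acyclic.
```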
One last thing needs to be taken into consideration: finding a measure that dictates how deterministic
an automaton is. Such a measure would make it possible to determine whether edges or transitions should
be considered in a given situation. If the best solution depends on the automaton, yet another
heuristic should be devised to choose between edges and transitions. Finally, another algorithm in
Spot, the conversion of a deterministic Rabin automaton to a Büchi automaton, can benefit from a FAS:
that algorithm uses a construction pattern similar to the one used for the complementation.
Chapter 5
Bibliography
Demetrescu, C. and Finocchi, I. (2003). Combinatorial algorithms for feedback problems in directed
graphs. Information Processing Letters, 86(3):129–136.
Duret-Lutz, A. and Poitrenaud, D. (2004). Spot: an extensible model checking library using transition-
based generalized Büchi automata. In Proceedings of the 12th IEEE/ACM International Symposium on
Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2004), pages
76–83. IEEE.
Eades, P., Lin, X., and Smyth, W. F. (1993). A fast effective heuristic for the feedback arc set problem.
Information Processing Letters, 47:319–323.
Kurshan, R. P. (1987). Complementing deterministic Büchi automata in polynomial time. J. Comput.
Syst. Sci., 35(1):59–71.
Garey, M. R. and Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of
NP-Completeness. W. H. Freeman & Co., San Francisco.
Eades, P. and Lin, X. (1995). A Heuristic for the Feedback Arc Set Problem. Centre for Discrete
Mathematics and Computing.
Unger, S. (1957). A Study of Asynchronous Logical Feedback Networks. Technical report, Research
Laboratory of Electronics, Massachusetts Institute of Technology.
Younger, D. (1963). Minimum feedback arc sets for a directed graph. IEEE Transactions on Circuit
Theory, 10(2):238–245.

  • 1. A Feedback Arc Set for Spot Alexandre Lewkowicz (supervisor: Alexandre Duret-Lutz) Technical Report no 1406, July 2014 revision 1dd1ab2 Spot is an extensible model checking library using transition-based generalized Büchi automata (TGBA). It contains many state-of-the-art algorithms. In this paper, we focus on two algorithms that create automata with more transitions than necessary. These constructions can be enhanced by computing a feedback arc set (FAS): a set of edges which, when removed from the graph, leave a directed acyclic graph. Ideally, we want a minimal FAS, but this problem is NP-hard. We adapt and improve a heuristic proposed by Eades et al. that approximates a minimal FAS in linear time. We then show that the integration of this heuristic in the complementation of deterministic Büchi automata and in the conversion of Rabin automata to Büchi automata reduces the size of the output up to 31% in our experiments. These results depend greatly on the number of cycles and accepting states in the input automaton. Spot est une bibliothèque extensible pour le model checking qui utilise les automates de Büchi généralisés à transi- tions acceptantes. Elle contient de nombreux algorithmes avancés. Dans ce rapport, on se concentre sur deux de ces algorithmes qui construisent des automates avec plus de transitions que nécessaire. En pratique ces constructions utiliseraient moins de transitions si elles pouvaient calculer un feedback arc set (FAS), c’est-à-dire un ensemble de transitions à retirer du graphe pour le rendre acyclique. Dans l’absolu, on veut un FAS minimal, mais ce problème est NP-difficile. On adapte et améliore une heuristique proposée par Eades et al. qui permet une construction en temps linéaire. On montre ensuite comment cet algorithme bénéficie à la complémentation d’automates de Büchi déterministes et la traduction d’automates de Rabin en automates de Büchi. En fonction de l’automate traité on remarque une amélioration montant jusqu’à 31%. 
Ces résultats varient beau- coup selon le nombre de cycles et d’états acceptants. Keywords Spot, feedback arc set, acyclic graph, automata, model checking, deterministic TGBA complementation Laboratoire de Recherche et Développement de l’EPITA 14-16, rue Voltaire – FR-94276 Le Kremlin-Bicêtre CEDEX – France Tél. +33 1 53 14 59 22 – Fax. +33 1 53 14 59 13 alewkowicz@lrde.epita.fr – http://www.lrde.epita.fr/
  • 2. 2 Copying this document Copyright c 2014 LRDE. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with the Invariant Sections being just “Copying this document”, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is provided in the file COPYING.DOC.
  • 3. Contents 1 Introduction 4 2 Solving the feedback arc set 5 2.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.2 The feedback arc set problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.3 The GR heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.4 Generalization: from graph to automata . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.5 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 3 Applying a FAS to DTGBACOMP 12 3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 3.2 Improving complementation by using a FAS . . . . . . . . . . . . . . . . . . . . . . . . 15 3.3 Implementation consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 4 Conclusion 19 5 Bibliography 20
  • 4. Chapter 1 Introduction This report presents work done in Spot (Duret-Lutz and Poitrenaud, 2004), an extensible model checking library using transition-based generalized Büchi automata. Two algorithms, the deterministic TGBA complementation and the translation fro Rabin automaton to Büchi automaton (RA to BA), create automata with a similar construction pattern. The newly created automaton is composed of slightly modified sub-clones of the original. A main clone, which is connected to the sub-clones, has the same transitions and states as the original automaton except that the accepting states/transitions are no longer accepting. These algorithms require that each cycle in the main clone gets linked to each sub-clones. There are many ways to achieve this connection and the objective is finding a method that only links the required cycles and uses the right transitions to do so. By handling ω-words (words of infinite size), an input can iterate through a cycle in the automaton an infinite number of times. This input can chose at any time to non-deterministically jump to a sub-clone. Since the input infinitely travels through every state of a certain cycle, it does not matter which state of the cycle in the main clone allows to non-deterministically jump to the corresponding cycle of a sub-clone. Thanks to this property, picking one transition per cycle is sufficient for connecting the master clone to each sub-clones. Currently, the deterministic TGBA complementation uses a depth first search and each back edge computed is used to link the main clone to each sub-clone. This technique computes more transitions than necessary as any cycle of the main clone might have more than one transition allowing it to non- deterministically jump to the corresponding loop in a sub-clone. 
The solution provided in this paper to minimize the number of transitions is to compute a feedback arc set (FAS), a set of arcs which if removed leave the resultant graph free of directed cycles. This work is mostly based on the heuristic proposed by Eades et al. (1993) which computes a decent FAS in θ(m) with m the number of arcs. By choosing a solution in linear time, the altered algorithms will maintain their complexities. Moreover, using a sufficiently good FAS reduces the number of transitions from the main clone to the sub-clones enhancing the resulting automata. This report is organized as follows. In chapter 2, we present Eades et al. (1993) heuristic to the FAS problem followed by improvements and some implementation suggestions. In chapter 3, we show how the computed FAS can be applied to the complementation of deterministic TGBA.
  • 5. Chapter 2 Solving the feedback arc set In this chapter we start by giving a formal definition of the feedback arc set problem followed by an implementation of a heuristic. We will then show how to improve this heuristic. 2.1 Notation • s represents a vertex of G • δ+ (s) is the out degree of s • δ− (s) is the in degree of s • δ(s) = δ+ (s) − δ− (s) • Sinks are vertices such as δ+ (s) = 0 • Sources are vertices such as δ− (s) = 0. 2.2 The feedback arc set problem Given a directed graph G = (V, A) where V represents the vertices and A ⊆ V × V the arcs, a feedback arc set consists of finding a set of arcs A ⊆ A such that the directed graph G = (V, A A ) is acyclic (Demetrescu and Finocchi, 2003). The FAS problem can be seen as being able to find an order over the states of V . This order will be used to redraw the graph horizontally and any leftward arc will be considered as a member of the FAS. For instance, given the graph on Figure 2.1, an ordering on its states shown on Figure 2.2 allows to determine a valid FAS represented by the leftward arcs drawn in red. More formally, given a state ordering noted s1, s2, . . . , sn and a function ρ : V −→ N where ρ(s) returns the position of s in the list of ordered states, the feedback arc set T is defined as: T = {(s, d) ∈ A | ρ(s) > ρ(d)} Many orderings can be found. For instance the first order shown Figure 2.3 simply uses the number of each node as an ordering criteria whilst the second one uses the prefix order of a depth first search to order the states. When looking at the arc 5 → 6, one can see that it is only present in the FAS when using the second ordering. Clearly both results could be improved and show that finding a good sorting algorithm is not trivial. However, it has been proved by Younger (1963) that there exists an ordering such that computes a minimal feedback arc set. Such an ordering is called an optimum ordering.
  • 6. 2.2 The feedback arc set problem 6 5 2 3 46 1 Figure 2.1 – An example graph 5 2 3 6 1 4 Figure 2.2 – A state ordering of the graph on Figure 2.1 5 2 3 46 1 1 2 3 4 5 6 (a) Basic order 5 2 3 46 1 1 6 2 3 4 5 (b) Prefix order of DFS Figure 2.3 – Two state ordering
  • 7. 7 Solving the feedback arc set 1 Source ... i 3 ... k -2 ... n Sink Figure 2.4 – GR’s state ordering The interest in FAS dates back to as early as 1957 (Unger, 1957) and it was proved in 1979 that the computation of a minimal FAS is an NP-Hard problem (Michael and Johnson, 1979). However, many heuristics for the FAS problem exist allowing to compute a good FAS in polynomial time. For instance, the solution proposed by Younger (1963) is computed in θ(n4 ). 2.3 The GR heuristic Eades et al. (1993) developed a greedy algorithm called GR, which is a heuristic to finding a FAS in θ(m), where m stands for the number of arcs in the graph. GR requires a simple connected directed graph G = (V, A) such as: • n = |V | • ∃(si → sj) ∈ A ⇒ si = sj, i.e. no self loops • ∃(si → sj) ∈ A ⇒ si → sj is unique • ∀s ∈ V, δ+ (s) < n and δ− (s) < n. • ∀s ∈ V, −(n − 1) < δ(s) < n − 1 When all nodes on the left are sources and all nodes on the right are sinks, the number of leftward arcs is null. Having one or more leftward arcs means that a node on the right has δ+ (s) > 0, and a node on the left has δ− (s) > 0. Because of this, we want to have nodes with the highest out degree on the left and nodes with the highest in degree on the right. To achieve this the difference between the in and out degree δ(s) is computed. This value is then used to order the rest of the nodes when there are no sinks and no sources. Figure 2.4 shows a possible state ordering and has put nodes with a high δ(s) on the left, and nodes with a low δ(s) on the right. If nodes i and k were to be swapped then there would be a node with a high out degree on the right and a node with a high in degree on the left and it is exactly that kind of situation that generates leftward arcs. Therefore the resulting FAS would most likely be bigger. Algorithm [1] defines a state ordering considering sources, sinks and the δ-value of each node. 
At each iteration, GR puts each sink at the end of the ordering list and each source at the beginning and removes them from the graph. Then GR searches the node with the highest δ-value and inserts at the beginning of the list and removes it from the graph. This is repeated until no more nodes are left in the graph. The function hasSink(G) returns True iff at least one vertex of G has its out degree equal to zero. The function hasSource(G) returns True iff at least one vertex of G has its in degree equal to zero. The notation G − u represents the removal of the vertex u in G along with every arc incident to u. The function getSource(G) returns one vertex of G that is a source. The function getSink(G) returns one vertex of G that is a sink. The function arg max v∈G δ(v) returns the vertex with the highest δ-value. To achieve an algorithm in θ(m), GR partitions the vertex set of G into sources, sinks, and δ-classes as follows:    Vd = {u ∈ V | d = δ(u); δ+ (u) > 0; δ− (u) > 0}, −n + 3 ≤ d ≤ n − 3 Vn−2 = {u ∈ V | δ− (u) = 0; δ+ (u) > 0} V−n+2 = {u ∈ V | δ+ (u) = 0}
  • 8. 2.4 Generalization: from graph to automata 8 Algorithm 1 GR feedback arc set 1: function FAS(Graph G) 2: s1 ← ∅ 3: s2 ← ∅ 4: while G = ∅ do 5: while hasSink(G) do 6: u ← getSink(G) 7: s2 ← us2 8: G ← G − u 9: end while 10: while hasSource(G) do 11: u ← getSource(G) 12: s1 ← s1u 13: G ← G − u 14: end while 15: u ← arg max v∈G δ(v) 16: s1 ← s1u 17: G ← G − u 18: end while 19: return s1s2 20: end function There are at most 2n − 3 δ-classes thanks to the graph definition (required by GR) given at the beginning of this section. Now any vertex u ∈ V falls into exactly one of the 2n − 3 δ-classes. The vertices in each δ-classes are connected together by a doubly linked list, which makes it easier to remove vertices form their corresponding δ-class. This construction can be done θ(m) time by computing the in and out degree of each vertex and inserting at the same time that vertex in its corresponding δ-class. Thanks to these δ-classes, the selection of the sources, sinks and a vertex with the highest δ-value is done in θ(1). Whenever a vertex is removed from the graph, each of its predecessors and successors are updated in θ(1) by computing a new δ-value and inserting each of vertices in another δ-class. The number of updates is therefore equivalent to twice the number of edges as each edge is treated once as a successor and once as a predecessor. The complexity of GR is indeed θ(m). 2.4 Generalization: from graph to automata The constraints on the type of graph that GR works on allows it to know ahead of time the cardinality of all the δ-classes : (2n − 3). This information is needed for implementing GR in an efficient way. As it was shown in Section 2.2, each every vertex of G is sorted according to its δ-value in a corresponding bin. This allows to compute the next vertex with the greatest δ-value in θ(1). To generalize GR to automata, the definition of G needs modifications. Indeed, self-loops, multiple transitions from one state to another need to be handled. 
For this, the prerequisite stated in Eades et al. (1993) is modified. By allowing a larger class of graphs, the number of δ-classes is no longer bounded by 2n − 3: a vertex can now have arbitrarily many incident arcs. It is therefore necessary to define a new bound on the number of δ-classes required to implement GR. This quantity, noted δc, is defined as follows:

    δc = max{δ+(s) | s ∈ V} + max{δ−(s) | s ∈ V} + 1
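As a sanity check of this bound, the following sketch (illustrative name, not Spot's API) computes δc for a multigraph given as a list of arcs, ignoring self-loops since they shift δ+ and δ− equally:

```python
from collections import Counter

def delta_classes_count(arcs):
    """delta_c = max out-degree + max in-degree + 1, self-loops ignored."""
    out_deg, in_deg = Counter(), Counter()
    for u, v in arcs:
        if u != v:                      # self-loops do not change delta-values
            out_deg[u] += 1
            in_deg[v] += 1
    return (max(out_deg.values(), default=0)
            + max(in_deg.values(), default=0) + 1)
```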
Now, the number of δ-classes is δc instead of the original 2n − 3. Self-loops are ignored, since they have no effect on the δ-value of a vertex: a self-loop adds one to both the out-degree and the in-degree of the vertex.

2.5 Implementation considerations

Let bidigraph denote a bidirectional, directed graph with the following properties:
• ∀s ∈ V, δ+(s) and δ−(s) are computed in θ(1).
• ∀s ∈ V, Succ(s) is the list of successors of s.
• ∀s ∈ V, Pred(s) is the list of predecessors of s.
• |δ-classes| = δc = max{δ+(s) | s ∈ V} + max{δ−(s) | s ∈ V} + 1.
• The δ-classes are indexed between 0 and δc − 1.
• A vector stores, for each δ-class, a pointer to a vertex; this pointer is the head of a doubly linked list of vertices sharing the same δ-value. Each index of the vector corresponds to a different δ-class.

Figure 2.5 shows a possible memory layout respecting the definition of a bidigraph; the corresponding graph is the one of Figure 2.1. The vertex si of Figure 2.5 corresponds to the vertex labeled i in Figure 2.1, and &si denotes the address of the vertex si. In this example there are five different δ-classes: four contain one vertex and the last one contains two. The blue arrows represent the list of successors of each vertex, whilst the red arrows represent the list of predecessors. Each element in the vector of δ-classes is a pointer to a vertex, and each of these vertices is linked to the next element of its δ-list (if any), as shown by the green arrow. This implementation enables the computation of a vertex with the highest δ-value in θ(1). Moreover, when removing a vertex s, each predecessor (resp. successor) has its out-degree (resp. in-degree) decremented and is then inserted at the head of its new δ-class list in θ(1). Two counters per node, for the in- and out-degree, are used to compute the δ-class the vertex should be in.
The removal of s from its δ-class list is done in θ(1). Finally, to avoid losing time reordering every vector, a bit in the structure containing the vertex s can be set to mark it as removed. Eades et al. (1993) state that the removal of a vertex followed by the update of the δ-classes must be done in θ(1), and that computing the next vertex with the greatest δ-value must also be done in θ(1); if this is not respected, GR no longer runs in linear time. Being able to remove a vertex s and update all its successors and predecessors in θ(δ+(s) + δ−(s)) proves that GR runs in θ(m), where m = |A|. Indeed, since a vertex is chosen in θ(1), removed in θ(δ+(s) + δ−(s)), and treated only once, GR's complexity is given by:

    Σ_{i=1..n} (δ+(si) + δ−(si)) = 2|A|

2.6 Discussion

The result of GR applied to the graph of Figure 2.1 is shown on Figure 2.6, where the red arcs represent the computed FAS. However, when removing the arc s3 → s1 from that FAS, the FAS becomes minimal. This example shows one of the limitations of GR, which can be alleviated by computing the strongly connected components (SCC) of G. A procedure in Spot returns a sequence (G1, G2, . . . , Gk) of the strongly connected components of a directed graph G. The topological order given by that sequence guarantees that there are no leftward
Figure 2.5 – Memory layout of a bidigraph
Figure 2.6 – FAS computed by GR
arcs between the components (that is, no arcs from Gj to Gi for i < j). The SCCs can be used in three different manners. The first consists in computing an order over all the states using GR, then grouping the states by SCC and using the topological order to guarantee that no arc goes from a group on the right-hand side to a group on the left-hand side. The second consists in computing a different FAS for each SCC and joining the results together, again using the topological order. The last technique computes the state ordering first; then, when a query asks whether a transition is part of the FAS, one checks whether the transition lies inside an SCC and, if so, whether it is a leftward arc with respect to the state ordering. If the arc does not belong to any SCC, it cannot be part of a FAS. This optimisation allows the computation of a minimal FAS for the graph shown on Figure 2.1, and since computing the SCCs of G is done in linear time, the overall complexity of GR is not affected.

Figure 2.7 – Peter Eades' graph: (a) possible FAS using GR, (b) FASH's FAS

Eades and Lin (1995) also use this technique in the algorithm FASH, but go even further. They noticed that GR could be improved during the choice of the next δ-vertex: when several vertices have the same δ-value, GR picks the first one it finds. Doing so can increase the size of the resulting FAS, since it is when a δ-vertex is chosen that one or several arcs are added to the FAS. When several vertices with the same δ-value are candidates for removal, FASH considers each of them separately, the objective being to find an edge whose removal generates a sink or a source. On Figure 2.7, the edges 4 → 2 and 2 → 1 are candidates for removal; when removing 2 → 1, the node 2 becomes a source, therefore FASH chooses that edge for removal.
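The SCC refinement amounts to restricting the candidate arcs to those whose endpoints lie in the same component. A minimal sketch, using an iterative Kosaraju SCC computation instead of Spot's own procedure (names are ours, not Spot's):

```python
def sccs(n, arcs):
    """Kosaraju's algorithm, iterative to avoid recursion limits."""
    succ = [[] for _ in range(n)]
    pred = [[] for _ in range(n)]
    for u, v in arcs:
        succ[u].append(v)
        pred[v].append(u)
    order, seen = [], [False] * n
    for s in range(n):                  # first pass: DFS finishing order on G
        if seen[s]:
            continue
        stack = [(s, iter(succ[s]))]
        seen[s] = True
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(succ[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    comp, c = [-1] * n, 0
    for s in reversed(order):           # second pass: DFS on the transpose
        if comp[s] != -1:
            continue
        stack = [s]
        comp[s] = c
        while stack:
            v = stack.pop()
            for w in pred[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    return comp

def fas_candidates(n, arcs):
    """Only arcs inside an SCC can belong to a feedback arc set."""
    comp = sccs(n, arcs)
    return [(u, v) for u, v in arcs if comp[u] == comp[v]]
```

Running GR on these candidates only (or once per SCC) is what keeps the arc s3 → s1 of Figure 2.6 out of the FAS.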
This explains why FASH is able to compute a minimal FAS on the graph shown on Figure 2.7. With GR, when picking the first δ-vertex, either vertex 2 or vertex 1 can be chosen: if vertex 1 is chosen, the FAS contains two arcs, whilst if vertex 2 is chosen first, only one arc ends up in the FAS. FASH always chooses vertex 2 first. However, because FASH runs in θ(mn), more studies are needed to see whether the extra cost is worth it.
Chapter 3  Applying a FAS to DTGBACOMP

This chapter defines Büchi automata, shows how to complement a deterministic transition-based Büchi automaton, and shows how a FAS can be used to improve the complementation. The chapter ends by presenting some benchmarks.

3.1 Preliminaries

Let AP designate the finite set of atomic propositions and let Σ = 2^AP denote the set of valuations of AP. For instance, if AP = {a, b}, then Σ = 2^AP = {{a, b}, {a}, {b}, ∅}. The following automata read infinite words over Σ, i.e., elements of Σ^ω.

Definition 1 (TGBA) A transition-based generalized Büchi automaton over the alphabet Σ = 2^AP is a tuple B = ⟨Q, Q0, δ, F⟩ where
– Q is a finite set of states,
– Q0 ⊆ Q is a set of initial states,
– δ ⊆ Q × Σ × Q is a transition relation, where each element (q, l, q′) represents a transition from state q to state q′ labeled by the valuation l,
– F = {F1, F2, . . . , Fk} is a set of acceptance sets of transitions, where each Fi ⊆ δ.
B accepts an execution l0l1 . . . ∈ Σ^ω if there exists an infinite path (q0, l0, q1)(q1, l1, q2) . . . ∈ δ^ω that visits each acceptance set infinitely often: q0 ∈ Q0 and ∀f ∈ F, ∀i ∈ N, ∃j ≥ i, (qj, lj, qj+1) ∈ f.

Figure 3.1 – DTGBA
In Figure 3.1, the transitions with colored bullets represent elements of the different acceptance sets (noted {F1, F2}); here F contains two sets of one element each. An input word is accepted if the green and the red bullets are visited infinitely often. The arrow towards state 1 means that state 1 is the initial state q0. The labels over the transitions represent valuations of the atomic propositions; a transition labeled ⊤ (true) accepts any valuation.

Definition 2 (DTGBA) A deterministic transition-based generalized Büchi automaton over the alphabet Σ = 2^AP is a tuple B = ⟨Q, q0, δ, F⟩ where
– Q is a finite set of states,
– q0 ∈ Q is a unique initial state,
– δ ⊆ Q × Σ × Q is a transition relation where, for each state q ∈ Q and each symbol a ∈ Σ, δ(q, a) is unique but not necessarily defined,
– F = {F1, F2, . . . , Fk} is a set of acceptance sets of transitions, where each Fi ⊆ δ.
B accepts an execution l0l1 . . . ∈ Σ^ω if there exists an infinite path (q0, l0, q1)(q1, l1, q2) . . . ∈ δ^ω that visits each acceptance set infinitely often: ∀f ∈ F, ∀i ∈ N, ∃j ≥ i, (qj, lj, qj+1) ∈ f. Conversely, a word is rejected iff some acceptance set is visited finitely often or no path reads the word.

Definition 3 (DTGBACOMP) The complementation of a DTGBA B = ⟨Q, q0, δ, F⟩ is a tuple B′ = ⟨Q′, q0, δ′, F′⟩ where
– α = |F|,
– q0 ∈ Q′ is the same initial state as the q0 of B,
– Q′ = {qs} ∪ Q ∪ (Q × ⟦1, α⟧), where qs denotes a sink state and each Q × {i} is a clone of Q,
– δs = {(q, l, qs) | q ∈ Q, l ∈ Σ, ∄q′ ∈ Q, (q, l, q′) ∈ δ} ∪ {(qs, l, qs) | l ∈ Σ} is the set of transitions associated with the sink state qs,
– ∀i ∈ ⟦1, α⟧, δi = {(qi, l, q′i) | (q, l, q′) ∈ δ ∪ δs} ∪ {(q, l, q′i) | (q, l, q′) ∈ δ ∪ δs}, where qi ∈ Q × {i} and q′i ∈ (Q × {i}) ∪ {qs} denote the copies of q, q′ in clone i,
– δ′ = δ ∪ δs ∪ ⋃_{i∈⟦1,α⟧} (δi ∖ Fi),
– F′ = (⋃_{i∈⟦1,α⟧} δi) ∪ δs.

Figure 3.2 gives a general idea of how the automaton looks after complementation. G represents the original automaton, in which every accepting transition (marked with a colored bullet) has become non-accepting. The notation Acc[•] denotes the set of transitions marked with the bullet •. Each acceptance condition is associated with a sub-clone, called G ∖ Acc[•], in which every transition marked with • is removed; the remaining transitions become accepting. Note that if a transition carries one or several acceptance marks, each sub-clone ignoring that specific acceptance set does not need to duplicate that transition. Once the main clone and the sub-clones are created, G is connected to the sub-clones. A naive way of doing this is to take every transition of the main clone and duplicate it as many times as there are sub-clones; these newly created transitions can be made accepting if desired. A word will
Figure 3.2 – Main clone connected to sub-clones
Figure 3.3 – Illustration of Definition 3: (a) initial DTGBA, (b) application of DTGBACOMP
only be accepted if it visits a cycle and visits every set of the acceptance condition infinitely often; however, the non-deterministic transitions from the main clone to a sub-clone do not generate any cycles. Figure 3.3 illustrates Definition 3. In this case F contains two sets of one element each, and the complemented version contains one acceptance set of twelve elements. In this example, the original DTGBA accepts the infinite words that eventually always assert ¯a and ¯b, or in which, when ¯a¯b is seen, the proposition after the next one cannot be ¯ab. The complemented automaton recognizes any word that eventually no longer sees the green bullet, or no longer sees the red bullet, or reads ¯ab after passing through a transition labeled ⊤. Of course a simpler complemented automaton could recognize the same language; e.g., state 1b can be removed, which leads us to the improvement section.

3.2 Improving complementation by using a FAS

Definition 3 of Section 3.1 generalizes the algorithm studied in Kurshan (1987), which complements a deterministic transition-based Büchi automaton (TBA); the difference is that a TBA has a single acceptance set, i.e., all the bullets drawn on the automata are of the same color. The definition of the complementation can be refined to create a more condensed automaton. Indeed, creating a transition 2 → 1b in the automaton of Figure 3.3 would lead to a useless state. Since, in the complemented automaton, the goal is to stay in a cycle of one of the sub-clones, computing a FAS for each sub-clone helps find the useful transitions: those going from the main clone towards an accepting cycle of a sub-clone. Ideally, by using a FAS, only one transition per cycle is connected.

3.3 Implementation considerations

The only focus in this paper is to diminish the number of transitions between the main clone and the sub-clones.
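The per-sub-clone idea can be sketched as follows, assuming the automaton is given as a plain arc list: for each acceptance set Fi, hide its transitions and compute a FAS on what remains. A simple DFS back-edge FAS stands in for GR here, and every name is hypothetical rather than Spot's API.

```python
def dfs_back_edge_fas(n, arcs):
    """A valid (not necessarily small) FAS: the back edges of a DFS."""
    succ = [[] for _ in range(n)]
    for u, v in arcs:
        succ[u].append(v)
    color = [0] * n                 # 0 = white, 1 = on stack, 2 = done
    fas = []

    def visit(u):
        color[u] = 1
        for v in succ[u]:
            if color[v] == 1:       # back edge: closes a cycle
                fas.append((u, v))
            elif color[v] == 0:
                visit(v)
        color[u] = 2

    for s in range(n):
        if color[s] == 0:
            visit(s)
    return fas

def fas_per_subclone(n, arcs, acceptance_sets):
    """One FAS per acceptance set Fi, computed on the masked automaton."""
    return [
        dfs_back_edge_fas(n, [a for a in arcs if a not in fi])
        for fi in acceptance_sets
    ]
```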
G ∖ Acc[•] and G have the same global form: they have the same number of states and share the same transitions, except for those marked with •. Therefore, if a transition labeled with • generates a cycle in G, then G ∖ Acc[•] does not have that cycle, since that transition is ignored by construction. However, when the complemented automaton is being constructed, the sub-clones have not been fully created yet; because of this, a FAS can only be computed on the original automaton. Therefore, to simulate the computation of a FAS in each sub-clone, a different mask, hiding the transitions of the corresponding acceptance set, is applied to the original automaton before computing each FAS.

3.4 Evaluation

For the tests, 744 linear temporal logic (LTL) formulas are translated into DTGBA using the LTL-to-DTGBA translation implemented in Spot. These formulas all have one of the following forms: A ∧ GF(p ⇐⇒ q) or B ∧ GF(p ⇐⇒ Xq), where G stands for "globally", F stands for "in the future", and A and B are LTL subformulas. Most of the formulas A and B were generated by Spot's randltl binary; the rest come from Spot's benchmarking formulas. These forms of LTL formulas were chosen to create as many cycles as possible. The results of Figure 3.4 are based on the number of transitions created by DTGBACOMP. For each automaton, two complementations are performed: one using the FAS, and one using the back edges of a DFS as a FAS, where the state ordering corresponds to the prefix order of the DFS presented in Chapter 2. The number of transitions created by both methods is used to produce a percentage representing the
Figure 3.4 – Results on automata from 744 LTL formulas (horizontal axis: percentage improvement on the number of transitions; vertical axis: number of automata)

improvement. The improvement ranges between 0% and 31%, as shown on the horizontal axis of Figure 3.4; the vertical axis gives the number of automata achieving the same improvement. For instance, the sixth column shows that about 140 automata have 5% fewer transitions than before. Notably, the newly created automata are always smaller than before, or at worst the same size. The reason there are so many automata with a 0% improvement is that many of the created automata were either very small or had almost no cycles. After adjusting GR and taking into account the new definition of the complementation of a TGBA, the computation of a FAS for DTGBACOMP clearly improves the output automaton by diminishing its number of transitions. The average improvement is around 5% to 6%, and one test in particular shows an improvement of 31%; its formula is Ga ∨ Gc ∨ (G(a ∧ GFb) ∧ G(c ∨ GF¬b)) ∧ GF(p ⇐⇒ Xq), and the number of transitions went from 2190 to 1499.

3.5 Discussion

In Spot, a transition is labeled by a Boolean formula over the atomic propositions and may encode several edges, one per satisfying valuation. For example, when the atomic propositions are {a, b}, a transition labeled by a encodes two edges, ab and a¯b. In a cycle composed of three states, each transition counts as one transition but may encode a different number of edges. By considering the edges when computing a FAS, the vertex with the least number of edges is chosen. However, considering edges rather than transitions raises a new issue. Figure 3.5 displays the two different FAS that can be computed: the FAS with the red arc is computed considering transitions, whilst the FAS with green arcs is computed considering edges.
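The difference between the two counts can be illustrated with a small helper. This is purely illustrative: Spot represents labels as BDDs, whereas here a label is a Python predicate over a valuation, and the name is ours.

```python
from itertools import product

def edge_count(label, aps):
    """Number of valuations of `aps` that satisfy the `label` predicate."""
    return sum(
        1
        for bits in product([False, True], repeat=len(aps))
        if label(dict(zip(aps, bits)))
    )
```

With AP = {a, b}, the label a yields 2 edges, a ∧ ¬b yields 1, and a true label yields 4, so weighting arcs by edge count can change which arc GR puts in the FAS.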
There is room for discussion in deciding which method is best. The automata resulting from DTGBACOMP are displayed on Figure 3.6. When using transitions, the state q0 can immediately and non-deterministically jump to an accepting sub-clone, and any valuation allows this. However, the result obtained when considering edges creates more non-deterministic states. It is clearly non-trivial to determine which automaton is best. One might consider determinism to be the main criterion; however, it is challenging to define an
Figure 3.5 – Transitions vs edges: (a) FAS on transitions, (b) FAS on edges
Figure 3.6 – DTGBACOMP results of Figure 3.5: (a) complementing considering transitions, (b) complementing considering edges
accurate method for defining how deterministic an automaton is. Nonetheless, if such a method is found, further studies can determine which technique is better. If the results vary depending on the automaton, then a heuristic should be implemented to decide when to consider edges and when not to.
Chapter 4  Conclusion

Algorithm GR is a very fast heuristic for finding a feedback arc set. It tries to find a good ordering of the vertices by comparing the in- and out-degrees of each vertex: vertices with the highest out-degree are chosen first, whilst vertices with a high in-degree are chosen last. The arcs that go leftward with respect to this ordering form the feedback arc set. After adjusting GR to take the strongly connected components into account, an even better FAS is computed without changing the complexity of the algorithm. There are, however, other heuristics that compute better results. For instance, algorithm FASH spends time choosing the next δ-vertex when there is more than one candidate: by examining each candidate, FASH searches for one whose removal generates a sink or a source. This allows FASH to handle more complex graphs, but its execution is slower than GR's, as it requires θ(mn) time. One must find the best compromise between quality of the result and performance. To avoid slowing down the complementation algorithm, a FAS computed in linear time was chosen. Even if GR does not always find a minimal FAS, the final results are still quite satisfying. It would be interesting to compare our optimized GR algorithm with FASH. For the complementation of a DTGBA, using a FAS makes it possible to connect the cycles of the main clone to its sub-clones. Moreover, by only considering the cycles in the sub-clones, even fewer transitions are necessary to connect the clones. Thanks to these improvements, some automata have up to 31% fewer transitions than before. Moreover, the computed automaton is always more deterministic than before. One reason is that the original method used a DFS ordering, which tends to find a poor FAS.
The other reason is that the cycles were only searched for in the main clone, which meant that cycles from the main clone would sometimes be connected to a sink in a sub-clone. One last point to consider is finding a method that measures how deterministic an automaton is; such a method would allow deciding whether edges or transitions should be considered in a given situation. If the best solution depends on the automaton, yet another heuristic should be designed to choose between edges and transitions. Finally, another algorithm in Spot, the conversion of a deterministic Rabin automaton to a Büchi automaton, can benefit from a FAS: that algorithm uses a construction pattern similar to the one used for the complementation.
Chapter 5  Bibliography

Demetrescu, C. and Finocchi, I. (2003). Combinatorial algorithms for feedback problems in directed graphs. Information Processing Letters, 86(3):129–136.

Duret-Lutz, A. and Poitrenaud, D. (2004). Spot: an extensible model checking library using transition-based generalized Büchi automata. In Proceedings of the 12th IEEE/ACM International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2004), pages 76–83. IEEE.

Eades, P., Lin, X., and Smyth, W. F. (1993). A fast and effective heuristic for the feedback arc set problem. Information Processing Letters, 47:319–323.

Eades, P. and Lin, X. (1995). A Heuristic for the Feedback Arc Set Problem. Centre for Discrete Mathematics and Computing.

Garey, M. R. and Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., San Francisco.

Kurshan, R. P. (1987). Complementing deterministic Büchi automata in polynomial time. Journal of Computer and System Sciences, 35(1):59–71.

Unger, S. (1957). A Study of Asynchronous Logical Feedback Networks. Technical report, Research Laboratory of Electronics, Massachusetts Institute of Technology.

Younger, D. (1963). Minimum feedback arc sets for a directed graph. IEEE Transactions on Circuit Theory, 10(2):238–245.