1. Mariem Harmassi, Daniela Grigori, Khalid
Belhajjame
LAMSADE, Université Paris Dauphine
Mining Workflow Repositories for
Improving Fragments Reuse
2. Workflows
A business process specified using the BPMN notation
A scientific workflow system (Taverna)
A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information (Workflow Management Coalition)
IKC 2015
3. Scientific Workflows
Scientific workflows are increasingly used by scientists as a means for specifying and enacting their experiments.
They tend to be data intensive.
The data sets obtained as a result of their enactment can be stored in public repositories to be queried, analyzed, and used to feed the execution of other workflows.
4. Workflows are difficult to design
The design of scientific workflows, just like that of business processes, can be a difficult task:
Deep knowledge of the domain
Awareness of the resources, e.g., programs and web services, that can enact the steps of the workflow
Publish and share workflows, and promote their reuse:
myExperiment, CrowdLabs, Galaxy, and various other business process repositories
Reuse is still an aim:
There are no capabilities that support the user in identifying the workflows, or fragments thereof, that are relevant for the task at hand.
5. Fragment look-up in the life cycle of
workflow design
(cycle diagram) Design Workflow → Search Fragments → Run Workflow → Publish Workflow → Workflow repositories
6. Workflow Fragments Search
Why is it useful?
The workflow designer knows the steps of the fragment and their dependencies, but does not know the resources (programs or web services) that can be used for their implementation.
The designer may want to know how colleagues and third parties designed the fragment (best practices).
Elements of the solution:
1. Filtering: instead of searching the whole repository, we limit the number of workflows to be examined to those that are relevant to the user.
2. Identify the fragments that are recurrent in the workflows retrieved in (1).
7. 1 - Filtering step
(pipeline diagram) Workflow XML → Workflow graph → List of keywords → List of keywords & synonyms (via WordNet) → Filter over the BP repository
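The filtering pipeline of this slide can be illustrated with a short Python sketch. All names here (`extract_keywords`, the stub `synonyms` table) are illustrative assumptions; the paper itself uses the JAWS API over WordNet for synonym expansion.

```python
# Illustrative sketch of the filtering step. The synonym table is a stub
# standing in for WordNet/JAWS, which the paper uses; all names are assumed.

def extract_keywords(workflow):
    """Collect the individual words used in the workflow's activity labels."""
    words = set()
    for label in workflow["labels"]:
        # A label may concatenate several words with a separator.
        for token in label.replace("_", " ").replace("-", " ").split():
            words.add(token.lower())
    return words

def expand_with_synonyms(keywords, synonyms):
    """Semantically enrich the keyword list (WordNet synsets in the paper)."""
    expanded = set(keywords)
    for word in keywords:
        expanded.update(synonyms.get(word, ()))
    return expanded

def filter_repository(query_workflow, repository, synonyms):
    """Retain only workflows whose labels share a word with the enriched list."""
    enriched = expand_with_synonyms(extract_keywords(query_workflow), synonyms)
    return [wf for wf in repository if extract_keywords(wf) & enriched]
```

With such a stub, a repository workflow labeled "Retrieve Data" would survive a query containing "Fetch_Sequence" as long as the table maps "fetch" to "retrieve".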
8. 2- Identify Recurrent Fragments
We use graph mining algorithms to identify the fragments in the repository that are recurrent; specifically, we use the SUBDUE algorithm.
Which graph representation should be used to represent (workflow) fragments?
We examined a number of workflow representations.
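As a rough illustration of what feeding a workflow to a graph miner involves, the sketch below serializes a workflow graph into a plain vertex/edge list of the kind SUBDUE-style tools consume ("v id label" vertices, "d src dst label" directed edges). The exact line syntax expected by a given SUBDUE build is an assumption here and should be checked against its documentation.

```python
# Serialize a workflow graph into a simple vertex/edge list, the style of
# input consumed by SUBDUE-like graph miners. Exact line syntax is assumed.

def to_graph_file(nodes, edges):
    """nodes: list of labels; edges: list of (src_idx, dst_idx, label) triples."""
    lines = [f"v {i + 1} {label}" for i, label in enumerate(nodes)]
    lines += [f"d {s + 1} {t + 1} {label}" for s, t, label in edges]
    return "\n".join(lines)
```

The choice of what becomes a vertex, an edge, or an edge label is exactly what distinguishes the representation models (A, B, C, D, D1) compared in the experiments.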
12. Experiments
1st experiment: to assess the suitability of the graph representations for mining workflow graphs
Effectiveness: precision/recall
Memory space: disk space, DIV
Execution time
2nd experiment: to assess the impact of the filtering step in narrowing the search to relevant workflow fragments.
13. Experiment 1: Dataset
We created three datasets of workflow specifications, containing respectively 30, 42, and 71 workflows.
9 of these workflows are similar to each other and, as such, contain recurrent structures that should be detected by the mining algorithm.
Despite the small size of the collection, these datasets allowed us to distinguish, to a certain extent, between the different representations.
19. Experiment 1: Summary
Control nodes: recurrent patterns typical of the coding scheme of the model rules → affects recall
Labeling the edges: specializations of the same abstract workflow → affects precision
XOR as a set of alternatives: duplication, loss of information → affects recall and precision
Representation D1 therefore seems to be the one that performs best.
20. Experiment 2
Data sets: all Taverna 1 workflows (498 workflows) from myExperiment
User query: we use a small fragment from a workflow in myExperiment.
21. Conclusion
Methodology for improving the reusability of workflow fragments: representation model D + filter
Improve the filter
Test other similarity measures
Need to assess the usefulness of the techniques presented in practice, and how they can be incorporated into the workflow design life cycle.
In the context of the Contextual and Aggregated Information Retrieval (CAIR) project.
22. Mariem Harmassi, Daniela Grigori, Khalid
Belhajjame
LAMSADE, Université Paris Dauphine
Mining Workflow Repositories for
Improving Fragments Reuse
Editor's Notes
Workflows are increasingly used by scientists as a means for specifying and enacting their experiments. Such workflows are often data intensive [5]. The data sets obtained by their enactment have several applications, e.g., they can be used to understand new phenomena or confirm known facts, and therefore such data sets are worth storing (or preserving) for future analyses.
- Scientific workflows have been used to encode in-silico experiments.
- The design of scientific workflows can be a difficult task. It requires deep knowledge of the domain as well as awareness of the programs and services available for implementing the workflow steps.
- In 2009, De Roure and coauthors pointed out the advantages of sharing and reusing workflows from scientific workflow repositories like myExperiment, CrowdLabs, Galaxy, and others.
- The problem is that the size of these repositories is continuously growing, and many problems relating to the reuse of available workflows have emerged; for example, it becomes difficult to distinguish a special use case from a usage pattern.
- So using mining techniques is a good solution.
Let's discuss the most important contributions in mining workflows.
Filtering
Our system extracts from this graph file (the user's workflow) a set of words: the words occurring in the labels of the activity nodes (note that a label may contain more than one word, concatenated by a separator; we extract the complete list of words).
We then submit this list to the JAWS WordNet API, which returns the list of all synonyms for each word, giving us a semantically enriched list.
We then search using this enriched list: if a workflow contains a word from the list, it is retained.
The concept is simple. First, the user enters their workflow (or sub-workflow) in an XML format; we transform it into graph format and then extract the list of unique words mentioned in all the labels of the workflow. We establish a list of the keywords and their synonyms thanks to WordNet (using the Java API for WordNet Searching (JAWS) to retrieve the synsets of a given label from WordNet).
After that, we select from the repository only the BPs/workflows that match at least one word from this list.
The challenges to be addressed are the following:
– Which mining algorithm to employ for finding frequent patterns in the repository?
– Which graph representation is best suited for formatting workflows for mining frequent fragments?
– How to deal with the heterogeneity of the labels used by different users to model the activities of their workflows within the repository?
We conducted two experiments. The first aims to validate our proposed representation model D/D1 and to show the drawbacks of the other models. The second experiment aims to validate the filter.
We compare the efficiency and effectiveness of the models. On the effectiveness side, we focus on demonstrating the drawback of representation model C when it comes to extracting recurrent fragments that contain the XOR link. So we manually created a synthetic dataset which ensures that the following sub-structure is the most recurrent. As the size of the synthetic dataset is limited (9 BPs), we extended it to three datasets by adding workflows from the Taverna 1 repository, while preserving the property that the most recurrent sub-workflow is the one already presented.
We compared the efficiency and effectiveness of the representation models.
The second experiment assesses the impact of the semantic filter.
Model A is the most expensive in terms of disk space required to encode the dataset in graph format.
Concerning model C, as expected, it required more than twice the number of edges, nodes, and bits required by the models that we propose, namely D and D1; however, this ratio decreases, reaching between a quarter and a tenth with larger datasets. This decrease is due to the content of these datasets, which contain a low percentage of BPs with XOR nodes.
In third position comes model B: it requires between 25% and 40% more than models D and D1 in terms of number of nodes, edges, and bits used.
Models D and D1 require the same number of edges and nodes to encode the input data; however, labeling the edges consumes more bits to encode.
We don't care about correctly classifying negative instances; we just don't want too many of them polluting our results.
Model C: In these experiments, as expected, model C led to the worst qualitative performance. It achieves a recall rate that varies between 0% and 61.54%, with an average recall around 35%. Model C can, at best, discover only one alternative at a time (in our case there are 2 alternatives attached to the XOR node).
Model A: The top extracted substructures are more significant than those of model C, and less significant than those of the other models. However, on larger datasets, the results show a dramatic decline in the quality of its sub-structures, reaching 0% in terms of precision and recall, which means that no extracted substructure matched the user's expectation. This limitation can be explained by the excessive use of control nodes: on large input data, their percentage becomes quite significant, leading the Subdue algorithm to consider them as important sub-structures.
Model B: Model B performs much better than the previous two models, A and C. In fact, it successfully retrieved almost 67% of the BP elements of the target sub-structure: more than twice as many as model C, and between 13% and 66% more than model A.
Comparing model B to model D: on the other hand, models B and D led to very similar accuracy. Although model B was able to discover more relevant BP elements than model D (about 10% more), it also returned more useless or irrelevant BP elements (around 7% more).
Labeling the edges leads to specializations of the same abstract workflow template and consequently affects the quality of the results returned (it decreases recall).
Model D: We can notice a performance common to models D and D1 that distinguishes them from the other models: both led to a good precision rate. This is because these two models do not use control nodes and thereby avoid a negative influence on the results. On large input data, the percentage of control nodes becomes quite significant, leading the Subdue algorithm to consider typical sub-structures of the model's coding scheme as significant (which decreases precision).
The results of the first experiment clearly show that model D1 records the best performance on all levels, without exception.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
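For reference, these are the standard definitions behind the precision, recall, and accuracy figures quoted in these notes (a minimal sketch, not code from the paper):

```python
# Standard retrieval metrics computed from raw true/false positive/negative counts.

def precision(tp, fp):
    """Fraction of retrieved elements that are relevant."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Fraction of relevant elements that were retrieved."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def accuracy(tp, tn, fp, fn):
    """Fraction of all classifications that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)
```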
Model A is the most expensive in terms of execution time: around 25 to 55 times more than models D and D1.
Let us compare the other models. Although, on the qualitative level, model B performs better than model C, model C seems to be far less expensive.
As expected, models D and D1 led to very similar performance, with model D1 performing slightly better.
The results of the second experiment show that the use of the semantic filter reduced the input data size (in bits) by 99%, which dramatically improved the execution time (36 times less).
Decreases disk space
Decreases RAM usage
Decreases execution time
Increases the quality of results