This document proposes a framework that unifies problem-solving knowledge and workflow modeling. It extends the Unified Problem-solving Method description Language (UPML) by incorporating workflow control structures and participants as new knowledge components. The framework represents workflows using a stack of ontologies, including a High-Level Petri Nets ontology to model control flow and a TOVE ontology to model organizational resources. It defines knowledge components for tasks, methods, domains, control models, and resource models, and connects them with bridges. This allows problem-solving knowledge, such as methods and tasks, to be explicitly represented and reused in workflows.
A Framework for Unifying Problem-Solving Knowledge and Workflow Modeling
Juan C. Vidal, Manuel Lama, and Alberto Bugarín
Department of Electronics and Computer Science
Facultad de Física, Edificio Monte da Condesa
15782 Santiago de Compostela, Spain
Abstract
A workflow is usually described in terms of its control structure and its participants, without taking into account the knowledge used to execute it. This paper outlines a new framework for workflow specification that extends the Unified Problem-solving Method description Language and approaches workflow design from a knowledge perspective. Within this framework, the resources and the control flows that define the traditional workflow framework are defined as knowledge components and are enriched with a knowledge dimension that captures the knowledge used in the workflow's modeling, resource assignment, and execution.
Introduction
Workflows are increasingly being used to model processes because they facilitate the communication, coordination and collaboration between the participants of the business process. In recent years, a number of specifications have arisen for describing processes or workflows, such as BPEL4WS (Andrews et al. 2003), BPMN (Obj 2006), or the XPDL standard defined by the Workflow Management Coalition (WfMC) (Wor 2005). Most of these languages specify a workflow on the basis of (i) its control structure, where the execution control of the activities is defined, and (ii) its participants, which specify the agents that execute the workflow activities. However, these approaches do not incorporate the problem-solving knowledge explicitly in the workflow definition: this knowledge is implicitly used in its control structure and in its organizational structure, but as it is not explicitly represented, it cannot be shared or reused. To deal with this drawback, workflows need to be specified as a new component at the knowledge level (Newell 1982).
Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Since the introduction of the knowledge-level concept, several approaches have been proposed to represent knowledge components and, more specifically, problem-solving knowledge components. Among these approaches, the Unified Problem-solving Method description Language (UPML) (Fensel et al. 2003) can be considered the de facto standard. The main drawback of UPML for solving a given problem is that it does not define an operational description for those methods that are not simple and decompose into subtasks: it assumes that this description is given at the symbolic level. Based on UPML
and from the perspective of semantic web services, there
are some approaches, such as IRS-III (Cabral et al. 2006)
and WSMO (de Bruijn et al. 2005), that propose an operational description for the methods. These proposals, however, although based on the UPML specification, do not use workflows to model the coordinated execution of the tasks that compose a method (or a service).
In this paper we describe a framework that captures the problem-solving knowledge used to define and execute a workflow. This framework extends the UPML specification by incorporating both the control structure and the participants of a workflow as two new knowledge components. The structure of these new components is based on two ontologies: the High-Level Petri Nets ontology (Vidal, Lama, & Bugarín 2006a) and the TOVE ontology (Fox & Gruninger 1998) for process representation and organization modeling, respectively. Figure 1 depicts the stack of ontologies that semantically describes the different dimensions of a workflow. In this paper, when we refer to a workflow, we take all these dimensions into consideration.
Workflow Specification Framework
The framework architecture, depicted in Figure 2, is composed of a set of boxes, which represent knowledge components, connected by a set of ellipses, which represent bridges. Specifically, we define six knowledge components that capture each of the aspects of a workflow. The first four components (ontologies, tasks, methods and domain models) are captured by the UPML model and describe the concept of a problem-solving method. The last two components, the resource model and the control model, complement the problem-solving method specification to cover the definition of the workflow participants and the workflow control structure. Bridges define the adapters that connect the knowledge components. For example, bridges are used to relate a method with its control flow, with the participants of a workflow, or with the domain to which the problem-solving method will be applied.

Figure 1: Ontology stack for the representation of workflows (High-Level Petri Net, Hierarchical HLPN, UPML, TOVE and Workflow Description ontologies, together with the BPMN, BPEL4WS and XPDL languages).

Figure 2: Knowledge-based Workflow Framework (ontologies, tasks, methods, domain models, control models and resource models, connected by the Method-Task, Method-Domain, Task-Domain, Control-Method and Resource-Domain bridges, each component with its refiner).
UPML Model
The core of the modeling of problem-solving methods is defined in the UPML model (see Figure 2). Although the UPML model is out of the scope of this work, it describes problem-solving methods by means of the following components:
• Task. A task describes an operation, specifying the required input and output parameters and the pre- and postconditions. We must remark that a task only defines an operation, not the way in which this operation will be solved nor the resources that will participate in its solution.
• Method. A method details the reasoning process to achieve a task. Like tasks, methods specify the required input and output parameters and the pre- and postconditions. However, methods can be split into two types depending on their structure: non-composite and composite methods. On the one hand, non-composite methods define a reasoning step that cannot be decomposed into more primitive steps. The execution of these methods is in charge of the (human and non-human) resources that participate in the workflow. On the other hand, composite methods decompose into subtasks and specify the control structure over those subtasks. The operational description of these methods is the meeting point with the control structure of workflows. Current implementations of the UPML framework represent the control flow in a program-like way (Motta 1998) and are not well suited for reuse. Our framework treats the operational description of the method as a new component that defines the control of the execution coordination of the method subtasks. This solution facilitates the reuse of both the methods and the control models:
– Several control structures can be associated (through a bridge) with the same method, which enriches the selection of the most suitable method configuration for solving a task. Business-level criteria may define which is the best control structure for a given method; these criteria may be based on organization practices, logistics, fault tolerance, and so forth.
– The same control structure can be reused for different methods because it does not depend on the method decomposition. In our framework, the subtasks are mapped by a bridge onto the control structure. Therefore both the method (and its task decomposition) and the control structure (over the tasks) can be defined and reused independently.
• Domain. A domain model describes the knowledge of a domain of discourse. The domain model introduces domain knowledge, namely the formulas that are then used by problem-solving methods and tasks.
• Ontology. An ontology defines a terminology and its
properties, used by resources, control structures, tasks,
problem-solving methods, and domain models. The core
of an ontology specification is its signature definition,
which defines signature elements that hold terms. The on-
tology also provides the axioms that characterize logical
properties of the signature elements.
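The task/method distinction above can be sketched in code. The following is an illustrative rendering only; the class and field names are ours and are not part of UPML, which is a description language rather than an API:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """An operation: what must be solved, not how."""
    name: str
    input_roles: list          # required inputs, e.g. ["sample"]
    output_roles: list         # produced outputs, e.g. ["result"]
    precondition: str = "true"
    postcondition: str = "true"

@dataclass
class Method:
    """How a task is achieved: non-composite (a single reasoning
    step executed by a resource) or composite (decomposed into
    subtasks, whose coordination a control model must define)."""
    name: str
    input_roles: list
    output_roles: list
    subtasks: list = field(default_factory=list)  # empty => non-composite

    @property
    def composite(self) -> bool:
        return len(self.subtasks) > 0

# The assessment example of Figure 3, reduced to these terms:
evaluation = Task("evaluation", ["case"], ["evaluation"])
assessment = Task("assessment", ["evaluations"], ["decision"])
assess = Method("assessment-method", ["sample"], ["result"],
                subtasks=[evaluation, assessment])

assert assess.composite  # composite: a control model governs its subtasks
```

Note that the task carries no hint of how it is solved; only the method (and, for composite methods, its associated control model) does.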
Control Model
The control model deals with the definition of the control flow, specifying the coordination of activities and the conditions they must verify to enable their execution. A control model is a knowledge component of the proposed framework that captures a process-oriented perspective of the workflow. This model provides the execution-level semantics (Sheth & Gomadam 2007; Sivashanmugam et al. 2003) of the control structure and the reusable structures that methods adapt to define their operational description.
Process modeling techniques are wide-ranging and heterogeneous: process algebras, Petri nets, and vendor-specific diagrams are the most representative solutions. At present there is no standard language for workflow specification, but in our opinion a solid theoretical foundation with a graphical semantics is needed to make the definition of processes easier.
In this framework, we use high-level Petri nets (ISO/IEC 15909-1 2002) to design the process structure, because (i) they are a formalism with a graphical representation and (ii) they provide an explicit representation of the states and events of processes. In order to incorporate the process structure as a knowledge component, we have developed a high-level Petri net ontology (Vidal, Lama, & Bugarín 2006a) which captures both the static elements and the dynamic behavior of this kind of net in a declarative way. The ontology describes high-level Petri nets as graphs, which is a common way to describe this kind of net. Based on this representation, this component provides the properties that allow the definition of these graphs. For example, they define:
• The nodes, which capture the vertices of the graph. High-level Petri nets are bipartite directed graphs and thus are defined by two disjoint sets of vertices called places and transitions. As depicted in Figure 3, places are graphically represented as circles while transitions are represented as rectangles.
• The arcs capture the directed edges that connect a source
node to a target node. An arc connects places with tran-
sitions and transitions with places but never nodes of the
same type.
• The signature defines the set of sorts and operators a high-level Petri net can use. These sorts and operators will be used to define the algebraic specification for annotating the places, transitions and arcs of the net. As depicted in Figure 3, a place is annotated with a sort (concept) at the top of the circle (e.g. Case), a transition is annotated with a condition in brackets at the bottom of the rectangle which defines its enabling condition (e.g. [true]), and an arc is annotated with a term that can be a variable (e.g. x) or an operator application (e.g. F(x)).
In our framework, a control structure, as any other knowl-
edge component, uses ontologies to define its properties. In
this case, the ontologies will be used to define the signature
of the net, that is, the sorts and operators used to annotate
the net.
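A minimal interpreter makes the elements listed above (places, transitions, arcs, annotations) concrete. This is a deliberately simplified sketch, not the ontology itself: tokens are plain Python values, arc terms are functions, and guards are predicates over the consumed tokens.

```python
class HLPetriNet:
    """Toy high-level Petri net: places hold tokens, transitions
    consume one token per input place, evaluate a guard, and
    produce tokens computed by arc terms."""

    def __init__(self):
        self.marking = {}      # place name -> list of tokens
        self.transitions = {}  # name -> (inputs, outputs, guard)

    def add_place(self, name, tokens=()):
        self.marking[name] = list(tokens)

    def add_transition(self, name, inputs, outputs, guard=lambda *xs: True):
        # inputs:  place names to consume one token from
        # outputs: (place, term) pairs; term maps consumed tokens
        #          to the produced token
        self.transitions[name] = (inputs, outputs, guard)

    def enabled(self, name):
        inputs, _, guard = self.transitions[name]
        if any(not self.marking[p] for p in inputs):
            return False                      # some input place is empty
        return guard(*(self.marking[p][0] for p in inputs))

    def fire(self, name):
        assert self.enabled(name), name + " is not enabled"
        inputs, outputs, _ = self.transitions[name]
        consumed = [self.marking[p].pop(0) for p in inputs]
        for place, term in outputs:
            self.marking[place].append(term(*consumed))

# A fragment inspired by Figure 3, using its annotations F(x),
# G(x), D(y,z) and the [true] guard: one abstract case is split
# into two evaluations, on which a decision is taken.
net = HLPetriNet()
net.add_place("abstract case", tokens=["case-1"])
net.add_place("evaluation1")
net.add_place("evaluation2")
net.add_place("decision")
net.add_transition("AND-SPLIT", ["abstract case"],
                   [("evaluation1", lambda x: ("F", x)),
                    ("evaluation2", lambda x: ("G", x))])
net.add_transition("take decision", ["evaluation1", "evaluation2"],
                   [("decision", lambda y, z: ("D", y, z))],
                   guard=lambda y, z: True)   # the [true] condition

net.fire("AND-SPLIT")
net.fire("take decision")
print(net.marking["decision"])  # [('D', ('F', 'case-1'), ('G', 'case-1'))]
```

The bipartiteness constraint (arcs only connect places to transitions and vice versa) is enforced here by construction, since transitions reference places directly.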
Resource Model
Although the main attention of workflow researchers has fo-
cused on the process perspective, i.e. the control-flow, the
resource perspective, i.e. the people and machines actually
doing the work, is essential for successful implementation
of workflow technology.
Our framework addresses this issue through the resource model (see Figure 2), whose structure is based on the TOVE ontology (Fox & Gruninger 1998). We created this model as a special class of domain model that must be explicitly defined because it plays an important role both in the definition and in the execution of workflows. In this sense, a resource model cannot be designed like a classic domain model, since (i) each resource of the model needs to define a binary relation with the domain model itself in order to define the knowledge it manages, and (ii) each resource also needs to define the methods it can execute.
The resource model therefore extends the domain model with the properties of the TOVE ontology.

Figure 3: Relation between the problem-solving method and the Petri net that defines the control structure of the workflow. The mappings between the two layers are defined with dotted arcs.

In this context,
a resource is a member of an organization that is capable of doing work. A resource has a specific position in that organization, with specific privileges associated with it. It may also be a member of one or more organizational units, which are permanent groups of resources within the organization, e.g. a product department or a development unit. A resource may also have one or more associated roles. Roles serve as another grouping mechanism for human resources with similar job roles or responsibility levels, e.g. managers or technical staff (Russell et al. 2004).
Our framework defines two types of resources: human and non-human. We restrict non-human resources to external services, whose invocation is in charge of the workflow system; tasks assigned to non-human resources are thus called automatic tasks. In the case of human resources, the system does not take the initiative, and the resources (users) are in charge of retrieving their work lists, performing their work and notifying their results. In any case, resources are in charge of the execution of non-composite methods.
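The human/non-human distinction above can be sketched with the TOVE vocabulary of resources, roles and organizational units. The class layout and all names below are our own illustration, not the ontology's actual structure:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A member of an organization capable of doing work."""
    name: str
    human: bool
    roles: set = field(default_factory=set)   # e.g. {"manager"}
    units: set = field(default_factory=set)   # e.g. {"development unit"}

    def worklist(self, assignments):
        """Human resources retrieve their work list themselves;
        non-human resources (external services) are invoked by the
        workflow system, so they expose no work list: their tasks
        are the 'automatic tasks' of the text."""
        if not self.human:
            return []
        return [task for task, res in assignments if res is self]

alice = Resource("alice", human=True, roles={"manager"},
                 units={"product department"})
service = Resource("pricing-service", human=False)

assignments = [("revise case", alice), ("create order", service)]
print(alice.worklist(assignments))    # ['revise case']
print(service.worklist(assignments))  # [] -> automatic, system-invoked
```

The asymmetry in `worklist` mirrors the text: the system pushes work to services but lets human users pull it.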
Bridges
The reuse of the knowledge components is achieved through adapters, as they are defined in the UPML framework (Fensel et al. 2003). The framework provides two types of adapters to define binary relationships between knowledge components: bridges and refiners. Bridges explicitly model the relationships between two distinct knowledge components, while refiners are used to constrain the definition of a knowledge component. Besides the bridges that are defined in UPML for describing problem-solving methods, our framework adds two bridges:
• Method-Control bridge. This bridge relates a composite method with a high-level Petri net. As depicted in Figure 3, it maps:
– The inputs and outputs of the method with places of the Petri net (e.g. the sample input with the abstract case place).
– The subtasks of the composite method with transitions of the Petri net (e.g. the evaluation task with the make evaluation1 transition).
– The signature used in the definition of the method with that of the Petri net; that is, it provides a different interpretation of the annotations, sorts and operations of the Petri net.

Figure 4: Infrastructure for the execution of knowledge-enriched workflows (workflow description and composition layers, a Workflow Management Server, a message broker for resource coordination, adapters for Enterprise Application Integration and external Web services, and an example organization structure of resources, roles and organizational units).
• Resource-Domain Model bridge. Individual resources may also possess capabilities that further clarify their suitability for executing various kinds of tasks. These may include qualifications and skills as well as other attributes, such as specific responsibilities held or previous work experience. These features could be of interest when allocating tasks (work items).
Finally, we must mention the behavior of the Method-Domain bridge when it relates a resource with a method. In this case, it relates the roles, units and participants of the resource model with a method. When a method is non-composite, this association implies that the resource will perform the method, e.g. a web service. Resources cannot be associated with composite methods, since these methods are decomposed into subtasks and will be solved by other methods.
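A bridge such as the Method-Control bridge of Figure 3 is, at its core, a pair of mappings from method elements to net elements. The dictionaries and the well-formedness check below are our own rendering of that idea, using the names that appear in Figure 3:

```python
# Method-Control bridge, rendered as plain mappings:
method_control_bridge = {
    # input/output roles of the method -> places of the net
    "roles": {"sample": "abstract case", "result": "decision"},
    # subtasks of the composite method -> transitions of the net
    "tasks": {"evaluation": "make evaluation1",
              "assessment": "take decision"},
}

def check_bridge(bridge, places, transitions):
    """A bridge is well-formed only if every mapped element
    actually exists on the control-structure side."""
    ok_roles = all(p in places for p in bridge["roles"].values())
    ok_tasks = all(t in transitions for t in bridge["tasks"].values())
    return ok_roles and ok_tasks

places = {"abstract case", "evaluation1", "evaluation2", "decision"}
transitions = {"AND-SPLIT", "make evaluation1", "make evaluation2",
               "AND-JOIN", "take decision"}
print(check_bridge(method_control_bridge, places, transitions))  # True
```

Because the bridge is a separate object, swapping in a different mapping reuses the same net for another method, which is exactly the reuse argument made for control models earlier in the paper.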
Workflows Execution
Taking the previously described framework into account, Figure 4 depicts the infrastructure for a service-oriented execution of knowledge-enriched workflows. It defines a four-layer model which captures the components that are involved in the execution:
• The first layer of the infrastructure provides access to the description of the workflows that must be executed. This layer creates this description from its knowledge components, that is, from the control structures, organization structures, problem-solving methods, tasks, and domain models. Following the philosophy of UPML, these components are glued together through a set of adapters which define the way these knowledge components can be related and the conditions under which they can be combined. This feature makes the reuse of workflows easier, since reuse only implies the redefinition of the adapters.
• The second layer handles the execution and composition
of the workflows. This layer uses the first layer in order
to obtain the workflow description and thus each one of
the tasks that must be executed. This layer performs the
following operations:
– Execution. The execution of a task implies that the most suitable method and resources must be selected in order to drive its execution. When the selected method is non-composite, this entails the selection of the most suitable (i) resource-method adapters, which relate the task that must be performed with the resources that have permission to perform it, and (ii) resource-domain adapters, which match the knowledge required by the task with that provided by the resource. From this information, the scheduler of the workflow engine assigns the work to the most suitable resource (the intersection between the two adapters). When the work is assigned to a software resource, the workflow engine (i) searches for the service description in its repository (which acts as a service registry) and (ii) calls this service through the third layer of the infrastructure.
Figure 5 depicts the knowledge components that participate in a typical execution of a workflow. Setting aside the resources, a workflow is described as a task that is solved by a problem-solving method whose operational description is defined as a Petri net-based process structure. The first layer of Figure 5 represents this structure, where each of the transitions depicted in the net represents a task to be performed. Therefore, this example describes the structure of a composite problem-solving method composed of four tasks (second layer): abstract, evaluate case, revise case, and create order. In the third layer of Figure 5 a problem-solving method is assigned to each task.
When the method that solves the task is non-composite, such as the method that solves the abstract task, the tree structure defined by the execution has reached a leaf. However, when the method is composite, such as the evaluate case method that solves the CAD assessment task, a new non-leaf node is defined. This new node defines another execution structure similar to the one depicted in Figure 3; that is, the method defines a new task decomposition structured according to a Petri net. In the assessment method scenario, the method is composed of two tasks (Evaluation and Assessment) that are mapped to two transitions of the Petri net that defines its control structure.
– Composition. When the method that solves a task is composite, it is composed of a set of tasks controlled by another Petri net structure. The composition of the different Petri nets implies the definition of a hierarchical net (Gomes & Barro 2005): the transition that represents the task solved by the composite problem-solving method is substituted by the Petri net that defines its control structure.
The assessment method depicted in Figure 5 is subject to such a composition, although it is not detailed in the figure. In this case, the evaluate case transition (task) of the upper Petri net must be linked with the lower Petri net that substitutes the assessment method. That is, the input and output places of the evaluate case transition are fused with those of the substitute net. As a result of this fusion, the behavior of the evaluate case transition is assumed by the lower Petri net.
• The third layer of the infrastructure facilitates the coordination of the resources that participate in the workflow. This layer is defined by a message broker which establishes the logic of the message exchange. Through this solution, the heterogeneity of the resources that participate in the workflows is hidden by the message broker. This architecture also uses the publish/subscribe interaction model.
• Finally, the fourth layer depicted in Figure 4 integrates the systems that will execute the non-composite methods, such as the Enterprise Information Systems (EIS) or Web services published by external providers. This integration is performed through adapters, which participate in the execution like any other resource. This layer enables access to all the systems with the same programming model and data formats.
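The scheduler step described for the second layer (assigning work to the intersection of the two adapters) can be sketched as follows. All the data and the tie-breaking policy are illustrative assumptions:

```python
def assign(task, resource_method, resource_domain):
    """Pick a resource for `task` from the intersection of (i) the
    resources the resource-method adapters authorize for it and
    (ii) the resources whose knowledge, per the resource-domain
    adapters, matches what it requires."""
    authorized = resource_method.get(task, set())
    knowledgeable = resource_domain.get(task, set())
    candidates = authorized & knowledgeable   # intersection of the adapters
    if not candidates:
        raise LookupError("no suitable resource for " + repr(task))
    # Any policy could pick among candidates; alphabetical is a placeholder.
    return sorted(candidates)[0]

# Hypothetical adapter contents:
resource_method = {"revise case": {"alice", "bob"}}
resource_domain = {"revise case": {"alice", "pricing-service"}}
print(assign("revise case", resource_method, resource_domain))  # alice
```

Here "bob" is authorized but lacks the required domain knowledge, and "pricing-service" has the knowledge but no permission, so only "alice" survives the intersection.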
Figure 5: Example of execution of a knowledge-enriched workflow (a control layer, a problem-solving method layer and a task layer, connected by adapters; the CAD Assessment task is solved by a method whose subtasks are mapped onto a Petri net).
Conclusions and Future work
This work describes a new knowledge-based framework for workflow specification. The proposed framework extends the UPML framework for problem-solving method description with (i) a high-level Petri net ontology for describing the operational description of composite methods and (ii) an organization ontology for describing the organizational structure. With these extensions, the definition of a workflow can be viewed as the specification of a problem-solving method that includes the process structure (through a Petri net) as its operational description and, when necessary, the external agents that execute a non-composite method.
Our framework is related to current proposals for defining an ontology-based infrastructure to develop, discover and compose semantic web services (de Bruijn et al. 2005; Battle et al. 2005; Martin et al. 2004). In particular, the WSMO approach (de Bruijn et al. 2005) shares with our framework the fact that it is also based on the UPML specification, and therefore it inherits from UPML the concepts of ontologies, tasks (aka goals), and adapters (aka mediators), which enable the connection between all the components of the framework. The main difference between our approach and WSMO is that WSMO does not introduce the concept of workflow in its specification: it uses abstract state machines to define the structure of the choreography and orchestration of semantic web services. Our approach, however, assumes the use of workflows, and particularly of Petri nets, because they are accepted in industry and academia as a paradigm for process modeling.
From this perspective, the main advantage of the proposed framework lies in its architecture, which facilitates the definition of workflows through a set of knowledge components. Since each of the components is defined independently from the others, this framework facilitates the reuse and composition of workflows: through the bridges, different parts of several workflows (tasks, process structure, organization description, etc.) can be used to compose a new workflow.
Based on this framework, a service-oriented architecture for the definition and execution of workflows has been developed (Vidal, Lama, & Bugarín 2007). This architecture is currently being applied in the domains of (1) the furniture industry, where it is being used to define the business processes associated with the creation of designs and the assembly of the pieces of a given item of furniture (Vidal, Lama, & Bugarín 2006b), and (2) eLearning, for the execution of learning designs which are modeled through workflows, with teachers and students as the participants that execute the learning tasks.
Acknowledgments
The authors wish to thank the Xunta de Galicia and the Ministerio de Educación y Ciencia for their financial support under the projects PGIDIT06SIN20601PR and TSI2007-65677C02-01.
References
Andrews, T.; Curbera, F.; Dholakia, H.; Goland, Y.;
Klein, J.; Leymann, F.; Liu, K.; Roller, D.; Smith,
D.; Thatte, S.; Trickovic, I.; and Weerawarana,
S. 2003. Business Process Execution Language
for Web Services Version 1.1. IBM. Available at
http://download.boulder.ibm.com/ibmdl/pub/software/dw/
specs/ws-bpel/.
Battle, S.; Bernstein, A.; Boley, H.; Grosof, B.;
Gruninger, M.; Hull, R.; Kifer, M.; Martin, D.; McIl-
raith, S.; McGuinness, D.; Su, J.; and Tabet, S. 2005.
Semantic Web Services Framework (SWSF) Overview.
World Wide Web Consortium (W3C). Available at
http://www.w3.org/Submission/SWSF/.
Cabral, L.; Domingue, J.; Galizia, S.; Gugliotta, A.; Tanas-
escu, V.; Pedrinaci, C.; and Norton, B. 2006. IRS-III: A
broker for semantic web services based applications. In
International Semantic Web Conference, 201–214.
de Bruijn, J.; Lara, R.; Arroyo, S.; Gomez, J. M.; Sung-
Kook, H.; and Fensel, D. 2005. A Unified Semantic
Web Services Architecture based on WSMF and UPML.
International Journal on Web Engineering Technology
2(2):148–180.
Fensel, D.; Motta, E.; Benjamins, V. R.; Crubezy, M.;
Decker, S.; Gaspari, M.; Groenboom, R.; Grosso, W.;
Musen, M.; Plaza, E.; Schreiber, G.; Studer, R.; and
Wielinga, B. 2003. The Unified Problem-solving Method
Development Language UPML. Knowledge and Informa-
tion Systems 5(1):83–131.
Fox, M., and Gruninger, M. 1998. Enterprise Modelling.
AI Magazine 109–121.
Gomes, L., and Barro, J. P. 2005. Structuring and Compos-
ability Issues in Petri Nets Modeling. IEEE Transactions
on Industrial Informatics 1(2):112–123.
ISO/IEC 15909-1. 2002. High-Level Petri Nets - Concepts,
Definitions and Graphical Notation.
Martin, D.; Burstein, M.; Hobbs, J.; Lassila, O.; Mc-
Dermott, D.; McIlraith, S.; Narayanan, S.; Paolucci, M.;
Parsia, B.; Payne, T.; Sirin, E.; Srinivasan, N.; and
Sycara, K. 2004. OWL-S: Semantic Markup for Web Ser-
vices. World Wide Web Consortium (W3C). Available at
http://www.w3.org/Submission/OWL-S/.
Motta, E. 1998. An Overview of the OCML Modelling
Language. In Proceedings of KEML’98: 8th Workshop on
Knowledge Engineering Methods and Languages.
Newell, A. 1982. The knowledge level. Artificial Intelli-
gence 8(1):87–127.
Object Management Group (OMG). 2006. Busi-
ness Process Modeling Notation (BPMN) Speci-
fication Final Adopted Specification. Available at
http://www.bpmn.org/Documents/.
Russell, N.; ter Hofstede, A.; Edmond, D.; and van der
Aalst, W. 2004. Workflow Resource Patterns. BETA
Working Paper Series WP 127, Eindhoven University of
Technology.
Sheth, A. P., and Gomadam, K. 2007. The 4 x 4 semantic model: Exploiting data, functional, non-functional and execution semantics across business process, workflow, partner services and middleware services tiers. In 9th International Conference on Enterprise Information Systems (ICEIS 2007), 1–4.
Sivashanmugam, K.; Verma, K.; Sheth, A.; and Miller, J. 2003. Adding semantics to web services standards. In 1st International Conference on Web Services (ICWS’03).
Vidal, J. C.; Lama, M.; and Bugarín, A. 2006a. A High-level Petri Net Ontology Compatible with PNML. Petri Net Newsletter 71:11–23.
Vidal, J. C.; Lama, M.; and Bugarín, A. 2006b. Integrated Knowledge-based System for Product Design in Furniture Estimate. In Integrated Intelligent Systems for Engineering Design, Frontiers in Artificial Intelligence and Applications. IOS Press. 345–361.
Vidal, J. C.; Lama, M.; and Bugarín, A. 2007. Service-oriented architecture for knowledge-enriched workflows modelling and execution. In Abramowicz, W., and Maciaszek, L., eds., Business Process and Services Computing, Lecture Notes in Informatics, 69–77.
Workflow Management Coalition (WfMC). 2005. Process
Definition Interface - XML Process Definition Language.
Available at http://www.wfmc.org/standards/docs/.