The complexity and quantity of Grid workflows are so overwhelming that new performance analysis techniques are required to instrument multilingual workflow-based applications and to select, measure, and analyze various performance metrics at different levels of abstraction. Moreover, Grid workflow performance analysis tools have to support Grids based on service-oriented architecture (SOA) and to address well-known issues of Grid computing such as diversity, dynamics, and scalability.
In this talk, we present methods and techniques being applied to the performance monitoring and analysis of Grid workflows in the K-WfGrid project and the ASKALON toolkit. An ontology is used to describe performance data of Grid workflows, and performance data is associated with multiple levels of abstraction. Various instrumentation techniques are utilized to support multilingual and service-based Grid workflows. A novel overhead classification for Grid workflows is introduced, and performance analysis is conducted in a distributed fashion. Performance monitoring and analysis components are implemented as P2P-based Grid services. All performance data, monitoring requests, and analysis requests are described in XML, which simplifies the interaction and integration between the various instrumentation, monitoring, and analysis services and increases the interoperability of the performance tools.
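As a hedged illustration of the XML-based interaction described above, the sketch below builds a minimal monitoring request document. The element names (`monitoringRequest`, `workflow`, `metric`, `abstractionLevel`) are hypothetical and are not taken from the actual K-WfGrid schemas.

```python
# Sketch only: assembling an XML monitoring request of the kind such
# services might exchange. Element names are illustrative assumptions.
import xml.etree.ElementTree as ET

def build_monitoring_request(workflow_id, metric, level):
    """Serialize a performance-monitoring request as an XML string."""
    req = ET.Element("monitoringRequest")
    ET.SubElement(req, "workflow").text = workflow_id
    ET.SubElement(req, "metric").text = metric
    ET.SubElement(req, "abstractionLevel").text = level
    return ET.tostring(req, encoding="unicode")

# Example: request the ExecutionTime metric at the activity level.
print(build_monitoring_request("wf-42", "ExecutionTime", "activity"))
```

Describing requests this way keeps clients decoupled from any one service implementation, which is the interoperability point the abstract makes.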
Performance Metrics and Ontology for Describing Performance Data of Grid Work... (Hong-Linh Truong)
Many Grid workflow middleware services require knowledge about the performance behavior of Grid applications/services in order to effectively select, compose, and execute workflows in dynamic and complex Grid systems. To provide performance information for building such knowledge, Grid workflow performance tools have to select, measure, and analyze various performance metrics of workflows. However, there is a lack of a comprehensive study of performance metrics that can be used to evaluate the performance of a workflow executed in the Grid. Moreover, given the complexity of both Grid systems and workflows, the semantics of essential performance-related concepts and relationships, and of the associated performance data in Grid workflows, should be well described. In this paper, we analyze performance metrics that performance monitoring and analysis tools should provide during the evaluation of the performance of Grid workflows. Performance metrics are associated with multiple levels of abstraction. We introduce an ontology for describing performance data of Grid workflows and illustrate how the ontology can be utilized for monitoring and analyzing the performance of Grid workflows.
Sharing of cluster resources among multiple Workflow Applications (ijcsit)
Many computational solutions can be expressed as workflows. A cluster of processors is a shared resource among several users, hence the need for a scheduler that deals with multi-user jobs presented as workflows. The scheduler must find the number of processors to be allotted to each workflow and schedule tasks on the allotted processors. In this work, a new method to find the optimal and maximum number of processors that can be allotted to a workflow is proposed. Regression analysis is used to find the best possible way to share the available processors among a suitable number of submitted workflows. An instance of a scheduler is created for each workflow, which schedules tasks on the allotted processors. Towards this end, a new framework to receive online submissions of workflows, allot processors to each workflow, and schedule tasks is proposed and evaluated using a discrete-event-based simulator. This space-sharing of processors among multiple workflows shows better performance than other methods found in the literature. Because of space-sharing, an instance of a scheduler must be used for each workflow within the allotted processors. Since the number of processors for each workflow is known only at runtime, a static schedule cannot be used. Hence a hybrid scheduler that tries to combine the advantages of static and dynamic schedulers is proposed. The proposed framework is thus a promising solution to scheduling multiple workflows on a cluster.
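To make the regression idea concrete, here is a minimal sketch, not the paper's actual method, that fits an Amdahl-style runtime model runtime(p) ≈ a + b/p by ordinary least squares on the transformed variable 1/p, then derives the largest processor count whose marginal gain is still worthwhile. The model form and the `min_gain` threshold are assumptions made for illustration.

```python
# Illustrative sketch: least-squares fit of runtime(p) = a + b/p,
# then pick the largest processor count still worth allotting.

def fit_runtime_model(samples):
    """samples: list of (processors, measured_runtime) pairs.
    Fits runtime = a + b * (1/p) by ordinary least squares; returns (a, b)."""
    xs = [1.0 / p for p, _ in samples]
    ys = [t for _, t in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def max_useful_processors(a, b, limit, min_gain=0.5):
    """Largest p <= limit where adding one more processor still saves
    at least min_gain time units (gain = b/(p-1) - b/p, which shrinks as p grows)."""
    best = 1
    for p in range(2, limit + 1):
        if b / (p - 1) - b / p >= min_gain:
            best = p
    return best

# Example: measurements generated from runtime = 2 + 60/p.
a, b = fit_runtime_model([(1, 62.0), (2, 32.0), (4, 17.0), (8, 9.5)])
print(a, b, max_useful_processors(a, b, limit=32))
```

With the fitted (a, b) in hand, the shared pool can be divided among workflows by capping each at its own maximum useful count, which is the space-sharing decision the framework automates.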
A novel methodology for task distribution (ijesajournal)
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e., Field Programmable Gate Arrays (FPGAs), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel methodology for task distribution called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of FPGAs in order to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of tasks as well as of computing resources as a cost function to distribute the tasks of an application to the HRCS. In this paper, an on-chip HRCS computing platform is configured on a Virtex 5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitters are represented as task graphs, and the tasks are then distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms. The comparison shows that the MLF algorithm outperforms them in terms of efficient utilization of on-chip resources and also speeds up application execution.
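The laxity-based dispatch rule can be sketched as follows; this is an illustrative reconstruction, not the authors' implementation. Laxity is taken as deadline minus current time minus remaining execution time, and the ready task with the smallest laxity is dispatched first.

```python
# Sketch of Minimum Laxity First ordering over a set of ready tasks.
# laxity = deadline - now - remaining_time; smallest laxity goes first.
import heapq

def mlf_order(tasks, now=0):
    """tasks: iterable of (name, remaining_time, deadline) tuples.
    Returns task names in Minimum-Laxity-First dispatch order."""
    heap = [(deadline - now - remaining, name)
            for name, remaining, deadline in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Example: the OFDM task has the least slack, so it is dispatched first.
print(mlf_order([("jpeg", 5, 20), ("ofdm", 8, 10), ("log", 1, 30)]))
```

In a full scheduler this ordering would be recomputed as tasks arrive and laxities shrink, which is what makes MLF a dynamic, list-based policy rather than a static schedule.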
Extending Grids with Cloud Resource Management for Scientific Computing (Bharat Kalia)
Grid computing gained high popularity in the field of scientific computing through the idea of distributed resource sharing among institutions and scientists. Scientific computing is traditionally a high-utilization workload, with production Grids often running at over 80% utilization (generating high and often unpredictable latencies), and with smaller national Grids offering a rather limited amount of high-performance resources. Running large-scale simulations in such overloaded Grid environments often becomes latency bound or suffers from well-known Grid reliability problems. Today, a new research direction, coined by the term Cloud computing, proposes an alternative that is attractive to computational scientists primarily because of four main advantages.
A talk given at the Russian-American business forum "American-Russian Cooperation". Main topics: the energy dialogue, foreign banks in Russia, and cooperation.
Purpose of the talk:
To promote the Russian-American Business Club effectively to its target consumers (the notion of a target consumer is defined below), while earning the trust of the American community, building a desire to cooperate with both the Club and OAO «Северо-Западный НТЦ», and fostering long-term relationships.
K-WfGrid Distributed Monitoring and Performance Analysis Services for Workflo... (Hong-Linh Truong)
Grid workflows for e-science are complex and prone to failures. However, there is a lack of performance monitoring and analysis tools for supporting the user as well as workflow middleware in monitoring and understanding the performance of complex interactions among the Grid applications, middleware, and resources involved in workflow executions. In this paper, we present a novel integrated environment which supports online performance monitoring and analysis of service-oriented workflows. Performance monitoring and analysis of Grid workflows and infrastructure is conducted through a Web portal. Performance overheads of Grid workflows are analyzed in a systematic way, and performance problems can be detected during runtime. Moreover, we present several languages that facilitate the interaction between performance monitoring and analysis services and their clients. Our system has been integrated into the K-WfGrid knowledge-based workflow system. It plays a key role in supporting users and developers in analyzing their workflows and in providing performance knowledge for constructing and executing workflows.
Building resilient scheduling in distributed systems with Spring (Marek Jeszka)
It is common to have jobs running periodically, especially in asynchronous and distributed systems. If the service is scaled horizontally (i.e., there are multiple instances of the same service), you often want only a single node to handle the task.
In this session I will demonstrate how to manually set up Spring with custom logic in the scheduling configuration so that recurring tasks are performed on a single node only. This requires keeping track of the leader and persisting the selection.
The key takeaway of this session is how to implement distributed locking and how simple it is to run a Spring application on top of it. In this talk you will learn how to mitigate the challenges that arise when you use the traditional declarative approach to scheduling, and how to switch to a more flexible programmatic approach.
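Since the talk targets Spring/Java and its code is not reproduced here, the following is only a language-neutral sketch of the underlying idea: a lease-based lock where whichever node holds a valid lease is the leader, and only the leader runs the recurring task. The `LeaseLock` class is a hypothetical single-process stand-in for a shared lock store such as a database row.

```python
# Sketch of lease-based leader election: a node acquires (or renews) a
# time-limited lease; a stale lease can be taken over by another node.
import time

class LeaseLock:
    """Single-process stand-in for a shared lock table (e.g. a DB row)."""
    def __init__(self):
        self.owner = None
        self.expires = 0.0

    def try_acquire(self, node, ttl, now=None):
        """Return True if `node` now holds the lease for `ttl` seconds."""
        now = time.monotonic() if now is None else now
        # Free, already ours (renewal), or expired: take/refresh the lease.
        if self.owner in (None, node) or now >= self.expires:
            self.owner, self.expires = node, now + ttl
            return True
        return False

lock = LeaseLock()
print(lock.try_acquire("node-a", ttl=30, now=0))    # node-a becomes leader
print(lock.try_acquire("node-b", ttl=30, now=10))   # lease still held by node-a
print(lock.try_acquire("node-b", ttl=30, now=40))   # lease expired; node-b takes over
```

In a real deployment the acquire step must be atomic in the shared store (a conditional UPDATE, for instance), and the scheduled method body simply returns early on any node whose `try_acquire` fails.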
German Conference on Bioinformatics 2021
https://gcb2021.de/
FAIR Computational Workflows
Computational workflows capture precise descriptions of the steps and data dependencies needed to carry out computational data pipelines, analysis and simulations in many areas of Science, including the Life Sciences. The use of computational workflows to manage these multi-step computational processes has accelerated in the past few years driven by the need for scalable data processing, the exchange of processing know-how, and the desire for more reproducible (or at least transparent) and quality assured processing methods. The SARS-CoV-2 pandemic has significantly highlighted the value of workflows.
This increased interest in workflows has been matched by the number of workflow management systems available to scientists (Galaxy, Snakemake, Nextflow and 270+ more) and the number of workflow services like registries and monitors. There is also recognition that workflows are first-class, publishable Research Objects, just as data are. They deserve their own FAIR (Findable, Accessible, Interoperable, Reusable) principles and services that cater for their dual roles as explicit method description and software method execution [1]. To promote long-term usability and uptake by the scientific community, workflows (as well as the tools that integrate them) should become FAIR+R(eproducible) and citable, so that authors' credit is attributed fairly and accurately.
The work on improving the FAIRness of workflows has already started and a whole ecosystem of tools, guidelines and best practices has been under development to reduce the time needed to adapt, reuse and extend existing scientific workflows. An example is the EOSC-Life Cluster of 13 European Biomedical Research Infrastructures which is developing a FAIR Workflow Collaboratory based on the ELIXIR Research Infrastructure for Life Science Data Tools ecosystem. While there are many tools for addressing different aspects of FAIR workflows, many challenges remain for describing, annotating, and exposing scientific workflows so that they can be found, understood and reused by other scientists.
This keynote will explore the FAIR principles for computational workflows in the Life Sciences, using the EOSC-Life Workflow Collaboratory as an example.
[1] Carole Goble, Sarah Cohen-Boulakia, Stian Soiland-Reyes, Daniel Garijo, Yolanda Gil, Michael R. Crusoe, Kristian Peters, and Daniel Schober. FAIR Computational Workflows. Data Intelligence 2020, 2:1-2, 108-121. https://doi.org/10.1162/dint_a_00033
A presentation on the status of my PhD, given in 2012 to the ABLE group at Carnegie Mellon.
Years later, from that work appeared
https://github.com/iTransformers/netTransformer
Similar to Performance Analysis of Grid Workflows in K-WfGrid and ASKALON:
Integrated Analytics for IIoT Predictive Maintenance using IoT Big Data Cloud... (Hong-Linh Truong)
For predictive maintenance of equipment with Industrial Internet of Things (IIoT) technologies, existing IoT Cloud systems provide strong monitoring and data analysis capabilities for detecting and predicting the status of equipment. However, we need to support complex interactions among different software components and human activities to provide integrated analytics, as software algorithms alone cannot deal with the complexity and scale of data collection and analysis and the diversity of equipment, due to the difficulties of capturing and modeling uncertainties and domain knowledge in predictive maintenance. In this paper, we describe how we design and augment complex IoT big data cloud systems for integrated analytics of IIoT predictive maintenance. Our approach is to identify various complex interactions for solving system incidents together with relevant critical analytics results about equipment. We incorporate humans into various parts of complex IoT Cloud systems to enable situational data collection, services management, and data analytics. We leverage serverless functions, cloud services, and domain knowledge to support dynamic interactions between humans and software for maintaining equipment. We use a real-world maintenance scenario of Base Transceiver Stations to illustrate our engineering approach, which we have prototyped with state-of-the-art cloud and IoT technologies such as Apache NiFi, Hadoop, Spark, and Google Cloud Functions.
Modeling and Provisioning IoT Cloud Systems for Testing Uncertainties (Hong-Linh Truong)
Modern Cyber-Physical Systems (CPS) and Internet of Things (IoT) systems consist of both loose and tight interactions among various resources in IoT networks, edge servers, and cloud data centers. These elements are being built atop virtualization layers and deployed in both edge and cloud infrastructures. They also deal with large amounts of data through the interconnection of different types of networks and services. Therefore, several new types of uncertainties are emerging, such as data, actuation, and elasticity uncertainties. This triggers several challenges for testing uncertainty in such systems. However, there is a lack of novel ways to model and prepare the right infrastructural elements covering the requirements for testing emerging uncertainties. In this paper, we first present techniques for modeling CPS/IoT systems and their uncertainties to be tested. Second, we introduce techniques for determining and generating deployment configurations for testing in different IoT and cloud infrastructures. We illustrate our work with a real-world use case for monitoring and analysis of Base Transceiver Stations.
Testing Uncertainty of Cyber-Physical Systems in IoT Cloud Infrastructures: C... (Hong-Linh Truong)
Today's cyber-physical systems (CPS) span IoT and cloud-based datacenter infrastructures, which are highly heterogeneous with various types of uncertainty. Thus, testing uncertainties in these CPS is a challenging and multidisciplinary activity. We need several tools for modeling, deployment, control, and analytics to test and evaluate uncertainties for different configurations of the same CPS. In this paper, we explain why using state-of-the-art model-driven engineering (MDE) and model-based testing (MBT) tools is not adequate for testing uncertainties of CPS in IoT Cloud infrastructures. We discuss how to combine them with techniques for elastic execution to dynamically provision both the CPS under test and the testing utilities to perform tests in various IoT Cloud infrastructures.
Towards a Resource Slice Interoperability Hub for IoT (Hong-Linh Truong)
Interoperability for IoT is a challenging problem because it requires us to tackle (i) cross-system interoperability issues at the IoT platform side as well as relevant network functions and clouds in edge systems and data centers, and (ii) cross-layer interoperability, e.g., w.r.t. data formats, communication protocols, data delivery mechanisms, and performance. However, existing solutions are quite static w.r.t. software deployment and provisioning for interoperability. Many middleware, services, and platforms have been built and deployed as interoperability bridges, but they are not dynamically provisioned and reconfigured for interoperability at runtime. Furthermore, they are often not considered together with other services as a whole in application-specific contexts. In this paper, we focus on dynamic aspects by introducing the concept of the Resource Slice Interoperability Hub (rsiHub). Our approach leverages existing software artifacts and services for interoperability to create and provision dynamic resource slices, including IoT, network functions, and clouds, for addressing application-specific interoperability requirements. We present our key concepts, architectures, and examples toward the realization of rsiHub.
On Supporting Contract-aware IoT Dataspace Services (Hong-Linh Truong)
Advances in the Internet of Things (IoT) enable a huge number of connected devices that produce large amounts of data. Such data is increasingly shared among various stakeholders to support advanced (predictive) analytics and precision decision making in different application domains like smart cities and the industrial internet. Currently there are several platforms that facilitate sharing, buying, and selling IoT data. However, these platforms do not support the establishment and monitoring of usage contracts for IoT data. In this paper we address this research issue by introducing a new extensible platform for enabling contract-aware IoT dataspace services, which supports data contract specification and IoT data flow monitoring based on established data contracts. We present a general architecture of contract monitoring services for IoT dataspaces and evaluate our platform through illustrative examples with real-world datasets and through performance analysis.
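A hedged sketch of what a contract check on an IoT data flow might look like; the data model and the terms used here (`allowed_purposes`, `max_rate_per_min`) are hypothetical illustrations, not the platform's actual contract language.

```python
# Sketch: compare one observed data-flow event against an established
# contract and report which (hypothetical) terms it violates.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataContract:
    allowed_purposes: frozenset   # purposes the consumer may use data for
    max_rate_per_min: int         # maximum messages per minute allowed

def violates(contract, purpose, observed_rate):
    """Return the list of violated contract terms for one observation."""
    issues = []
    if purpose not in contract.allowed_purposes:
        issues.append("purpose")
    if observed_rate > contract.max_rate_per_min:
        issues.append("rate")
    return issues

contract = DataContract(frozenset({"analytics"}), max_rate_per_min=100)
print(violates(contract, "resale", 250))    # both terms violated
print(violates(contract, "analytics", 50))  # compliant flow
```

A monitoring service along these lines would evaluate such checks continuously over the observed data flows and raise contract-violation events, which matches the paper's notion of monitoring based on established contracts.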
Towards the Realization of Multi-dimensional Elasticity for Distributed Cloud... (Hong-Linh Truong)
As multiple types of distributed, heterogeneous cloud computing environments have proliferated, cloud software can leverage diverse types of infrastructural, platform, and data resources with different cost and quality models. This introduces a multi-dimensional elasticity perspective for cloud software that would greatly help meet changing demands from the user. However, we argue that current techniques are not enough for dealing with multi-dimensional elasticity in distributed cloud environments. We present our approach to the realization of multi-dimensional elasticity by introducing novel concepts and a roadmap to achieve them.
On Engineering Analytics of Elastic IoT Cloud Systems (Hong-Linh Truong)
Developing IoT cloud platforms is very challenging, as IoT cloud platforms consist of a mix of cloud services and IoT elements, e.g., for sensor management, near-realtime event handling, and data analytics. Developers need several tools for deployment, control, governance, and analytics actions to test and evaluate designs of software components and to optimize the operation of different design configurations. In this paper, we describe the requirements and our techniques for supporting the development and testing of IoT cloud platforms. We present our choices of tools and engineering actions that help the developer to design, test, and evaluate IoT cloud platforms in multi-cloud environments.
HINC – Harmonizing Diverse Resource Information Across IoT, Network Functions...Hong-Linh Truong
Effective resource management in IoT systems must
represent IoT resources, edge-to-cloud network capabilities, and
cloud resources at a high-level, while being able to link to diverse
low-level types of IoT devices, network functions, and cloud
computing infrastructures. Hence resource management in such
a context demands a highly distributed and extensible approach,
which allows us to integrate and provision IoT, network functions,
and cloud resources from various providers. In this paper, we
address this crucial research issue. We first present a high-
level information model for virtualized IoT, network functions
and cloud resource modeling, which also incorporates software-
defined gateways, network slicing and data centers. This model
is used to glue various low-level resource models from different
types of infrastructures in a distributed manner to capture
sets of resources spanning across different sub-networks. We
then develop a set of utilities and a middleware to support
the integration of information about distributed resources from
various sources. We present a proof of concept prototype with
various experiments to illustrate how various tasks in IoT cloud
systems can be simplified as well as to evaluate the performance
of our framework.
SINC – An Information-Centric Approach for End-to-End IoT Cloud Resource Prov...Hong-Linh Truong
We present SINC –
Slicing IoT, Network Functions, and Clouds – which enables designers to dynamically create/update end-to-end slices of the overall IoT network in order to simultaneously meet multiple user needs.
Governing Elastic IoT Cloud Systems under UncertaintiesHong-Linh Truong
we introduce U-GovOps – a novel framework for
dynamic, on-demand governance of elastic IoT cloud systems under
uncertainty. We introduce a declarative policy language to simplify
the development of uncertainty- and elasticity-aware governance
strategies. Based on that we develop runtime mechanisms, which
enable mitigating the uncertainties by monitoring and governing
the IoT cloud systems through specified strategies.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Performance Analysis of Grid Workflows in K-WfGrid and ASKALON
1. Performance Analysis of Grid workflows
in K-WfGrid and ASKALON
Hong-Linh Truong and Thomas Fahringer
Distributed and Parallel Systems Group
Institute of Computer Science, University of Innsbruck
{truong,tf}@dps.uibk.ac.at
http://dps.uibk.ac.at/projects/pma
With contributions from:
Bartosz Balis, Marian Bubak, Peter Brunner, Schahram Dustdar, Jakub
Dziwisz, Andreas Hoheisel, Tilman Linden, Radu Prodan, Francesco
Nerieri, Kuba Rozkwitalski, Robert Samborski
Hong-Linh Truong, Dagstuhl seminar on Automatic Performance Analysis, 15 December 2005.
2. Outline
Motivation
Objectives and approach
Performance metrics and ontologies
Architecture of Grid monitoring and analysis
Conclusions
3. Performance Instrumentation, Monitoring, and Analysis for the Grid
Challenging task because of dynamic nature of
the Grid and its applications
Combination of fine-grained and coarse-grained
models
Multilingual applications
Heterogeneous and dynamic systems
Not just performance but also dependability
The focus of the talk
Our concepts, architecture, interfaces and
integration
4. ASKALON Toolkit
http://dps.uibk.ac.at/projects/askalon/
Programming and execution environment for Grid workflows
Integrates various services for scheduling, executing, monitoring and analyzing Grid workflows
Target applications
Workflows of scientific applications (C/Fortran)
• Material science, flooding, astrophysics, movie
rendering, etc.
5. The K-WfGrid Project
An FP6 EU project
www.kwfgrid.net
Focuses
Automatic construction and reuse of workflows based on knowledge gathered through execution
Target applications
Workflows of Grid/Web services
Grid/Web services may invoke legacy applications
Business (Coordinated Traffic Management, ERP) and scientific (e.g., flood simulation) applications
[Diagram: the K-WfGrid knowledge cycle: construct workflow, execute workflow, monitor environment, analyze information, capture knowledge, reuse knowledge]
6. The K-WfGrid Project
Same project overview as the previous slide, with one callout added: many Grid users and developers we know emphasize interoperability, integration and reliability, not just performance.
7. Objectives, Requirements, Approaches
Performance monitoring and analysis for Grid workflows of
Web/Grid services and multilingual components
Performance and dependability metrics
Monitoring and performance analysis
Service-oriented distributed architecture and peer-to-peer model
Unified monitoring and performance analysis system
covering infrastructure and applications
Standardized data representations for monitoring data
and events
Adaptive and generic sensors, distributed analysis,
performance bottleneck search
8. Hierarchical View of Grid Workflows
A workflow is decomposed into workflow regions, activities, invoked applications, and code regions.
Workflow fragment:
<parallel>
  <activity name="mProject1">
    <executable name="mProject1"/>
  </activity>
  <activity name="mProject2">
    <executable name="mProject2"/>
  </activity>
</parallel>
Invoked application (mProject1Service.java):
public void mProject1() {
  A();
  while (...) {
    ...
  }
}
[Diagram: Workflow > Workflow Region n > Activity m > Invoked Application m > Code Region 1 ... Code Region q]
9. Hierarchical View of Grid Workflows
Same hierarchy as the previous slide, with one callout added: monitoring and analysis at multiple levels of abstraction.
10. Workflow Execution Model (Simplified)
Workflow execution
Spanning multiple Grid sites
Highly inter-organizational, inter-related and dynamic
Multiple levels of job scheduling
At the workflow execution engine (part of the WfMS)
At Grid sites (local scheduler)
11. Performance Metrics of Grid Workflows
Interesting performance metrics are associated with multiple levels of abstraction
Metrics can be used in workflow composition, for comparing different invoked applications of a single activity, for adaptive scheduling, etc.
Five levels of abstraction
Code region, invoked application, activity, workflow region, workflow
Top-down PMA: from a higher level to a lower one
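To make the idea concrete, a metric record tagged with its level of abstraction might look like the sketch below. All names (`LeveledMetric`, `atLevel`, the field names) are invented for illustration; this is not the actual K-WfGrid/ASKALON API. Top-down PMA then becomes filtering from the workflow level down to code regions.

```java
// Hypothetical sketch: a performance metric tagged with the level of
// abstraction it was measured at. Names are illustrative only.
public class LeveledMetric {
    enum Level { CODE_REGION, INVOKED_APPLICATION, ACTIVITY, WORKFLOW_REGION, WORKFLOW }

    final Level level;    // where in the hierarchy the metric applies
    final String subject; // e.g. activity or code-region name
    final String name;    // metric name, e.g. "ElapsedTime"
    final double value;   // measured value

    LeveledMetric(Level level, String subject, String name, double value) {
        this.level = level; this.subject = subject; this.name = name; this.value = value;
    }

    // Top-down PMA: drill from a higher level to a lower one by filtering.
    static java.util.List<LeveledMetric> atLevel(java.util.List<LeveledMetric> all, Level l) {
        java.util.List<LeveledMetric> out = new java.util.ArrayList<>();
        for (LeveledMetric m : all) if (m.level == l) out.add(m);
        return out;
    }

    public static void main(String[] args) {
        java.util.List<LeveledMetric> data = java.util.List.of(
            new LeveledMetric(Level.WORKFLOW, "wf1", "ElapsedTime", 120.0),
            new LeveledMetric(Level.ACTIVITY, "mProject1", "ElapsedTime", 45.0),
            new LeveledMetric(Level.CODE_REGION, "loop1", "ElapsedTime", 30.0));
        System.out.println(atLevel(data, Level.ACTIVITY).size()); // prints 1
    }
}
```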
12. Monitoring and Measuring Performance
Metrics
Performance monitoring and analysis tools
Operate at multiple levels
Correlate performance metrics from multiple levels
Middleware and application instrumentation
Instrument execution engine
• Execution engine can be distributed or centralized
Instrument applications
• Distributed, spanning multiple Grid sites
Challenging problems: the complexity of performance tools and data
Integrate multiple performance monitoring tools executed on multiple
Grid sites
Integrate performance data produced by various tools
We need common concepts for performance data associated with
Grid workflows
13. Utilizing Ontologies for Describing Performance Data of Grid Workflows
Metrics ontology
Specifies which performance metrics a tool can provide
Simplifies the access to performance metrics provided by various tools
Performance data integration
Performance data integration based on common concepts.
High-level search and retrieval of performance data
Knowledge base of performance data of Grid workflows
Utilized by high-level tools such as schedulers, workflow composition
tools, etc.
Used to (re)discover workflow patterns and interactions in workflows, to
check correct execution, etc.
Distributed performance analysis
Performance analysis requests can be built based on ontologies
14. Workflow Performance Ontology
WfPerfOnto (Ontology describing Performance data of Grid
Workflows)
Specifies performance metrics
Basic concepts
• Concepts reflect the hierarchical structure of a workflow
• Static and dynamic workflow performance data
Relationships
• Static and dynamic relationships among concepts
16. Monitoring and Analysis Scenario
17. Components for Grid PMA
Three main parts of a unified performance monitoring,
instrumentation and analysis system
Monitoring and instrumentation services
Performance analysis services
Performance service interfaces and data representations
• We must reuse existing tools and techniques as much as
possible
Integration model
Loosely coupled: among Grid sites/organizations
• Utilizing SOA for performance tools
Tightly coupled: within services deployed in a single Grid
site/organization
• Interfacing to existing (parallel) performance tools
18. K-WfGrid Monitoring and Analysis Architecture
19. Self-Managing Sensor-Based Middleware
Integrating diverse types of sensors into a single system
Event-driven and demand-driven sensors for system and
applications monitoring, rule-based monitoring
Self-managing services
Service-based operations and TCP-based stream data
delivery
Peer-to-peer Grid services for the monitoring middleware
Query and subscription of monitoring data
Data query and subscription
Group-based data query and subscription, and notification
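The two sensor styles on this slide can be sketched as pull vs push interfaces. The interface and class names below are invented for the example, not the actual K-WfGrid middleware API; the threshold rule stands in for the real rule-based monitoring.

```java
// Illustrative sketch of demand-driven (pull) vs event-driven (push) sensors.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class SensorSketch {
    // Demand-driven: produces a measurement only when a consumer asks.
    interface DemandDrivenSensor { double measure(); }

    // Event-driven: pushes data to subscribers when a rule fires.
    static class EventDrivenSensor {
        private final List<Consumer<Double>> subscribers = new ArrayList<>();
        private final double threshold;
        EventDrivenSensor(double threshold) { this.threshold = threshold; }
        void subscribe(Consumer<Double> c) { subscribers.add(c); }
        // Rule-based monitoring: only values below the threshold become events.
        void sample(double value) {
            if (value < threshold) subscribers.forEach(s -> s.accept(value));
        }
    }

    public static void main(String[] args) {
        DemandDrivenSensor cpuLoad = () -> 0.42;                   // pull model
        EventDrivenSensor bandwidth = new EventDrivenSensor(5.0);  // push model
        List<Double> events = new ArrayList<>();
        bandwidth.subscribe(events::add);
        bandwidth.sample(9.0);  // above threshold: no event
        bandwidth.sample(3.5);  // below threshold: event delivered
        System.out.println(cpuLoad.measure() + " " + events);
    }
}
```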
20. Self-Managing Sensor-Based Monitoring Middleware
21. Workflow Instrumentation
Issues to be addressed
Multiple levels of instrumentation
Instrumentation of multilingual applications (C/Java/Fortran)
Must support dynamically enabled instrumentation
ASKALON and K-WfGrid approach
Utilizing existing instrumentation techniques
• OCM-G (Roland Wismueller and Marian Bubak): dynamically enabled instrumentation for C/Fortran
• Dyninst (Barton Miller, Jeff Hollingsworth): for binary code generated from C/Fortran
• Source/dynamic instrumentation of Java (from Java 1.5)
Using the APART standardized intermediate representation (SIR)
XML-based requests for controlling instrumentation
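Whatever the mechanism (OCM-G, Dyninst, or Java instrumentation), what gets injected around a code region amounts to enter/exit probes that record timestamps. A minimal sketch, with invented names, of what such probes do:

```java
// Minimal sketch of source-level probes around a code region. The real
// tools named on this slide inject equivalent probes automatically; the
// names here (enterRegion/exitRegion) are illustrative only.
import java.util.HashMap;
import java.util.Map;

public class ProbeSketch {
    private static final Map<String, Long> start = new HashMap<>();
    private static final Map<String, Long> elapsed = new HashMap<>();

    static void enterRegion(String name) { start.put(name, System.nanoTime()); }

    static void exitRegion(String name) {
        long dt = System.nanoTime() - start.get(name);
        elapsed.merge(name, dt, Long::sum); // accumulate over repeated entries
    }

    static long elapsedNanos(String name) { return elapsed.getOrDefault(name, 0L); }

    public static void main(String[] args) {
        enterRegion("mProject1");   // probe inserted before the region
        double x = 0;
        for (int i = 0; i < 100_000; i++) x += Math.sqrt(i);
        exitRegion("mProject1");    // probe inserted after the region
        System.out.println(elapsedNanos("mProject1") >= 0);
    }
}
```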
22. DIPAS: Distributed Performance Analysis
A Grid analysis agent accepts a WARL request and returns performance metrics described in XML as a tree of metrics.
WARL (workflow analysis request language): based on concepts and properties in WfPerfOnto.
[Diagram: DIPAS portals and gateways connect distributed Grid analysis agents with the Knowledge Builder Agent, GOM (the ontology memory), the Monitoring Service, and GWES (the workflow execution service)]
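Since the talk only says that analysis requests and results are XML, a hedged sketch of what assembling such a request could look like is shown below. The element and attribute names (`warl`, `target`, `metric`) are invented for illustration; the real WARL schema is defined by WfPerfOnto concepts and properties.

```java
// Hypothetical sketch of building a WARL-like XML analysis request.
// Element/attribute names are invented, not the actual WARL schema.
public class WarlSketch {
    static String buildRequest(String concept, String resource, String... metrics) {
        StringBuilder sb = new StringBuilder();
        sb.append("<warl>\n");
        // The target is named via an ontology concept (e.g. Activity) plus an instance name.
        sb.append("  <target concept=\"").append(concept)
          .append("\" name=\"").append(resource).append("\"/>\n");
        for (String m : metrics)
            sb.append("  <metric name=\"").append(m).append("\"/>\n");
        sb.append("</warl>");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildRequest("Activity", "mProject1",
                                        "ElapsedTime", "QueueWaitingTime"));
    }
}
```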
23. Workflow Overhead Classification
Temporal overheads of Grid workflows, classified hierarchically:
middleware
- resource brokerage
- performance prediction
- scheduling: optimization algorithm, rescheduling
- execution management
  - control of parallelism: fork construct, barrier
  - job preparation: archive/compression of data, extract/decompression of data
  - job submission: submission decision policy, GRAM submission, GRAM polling
  - queue: queue waiting, queue residency
  - restart: job failure, job cancel
- service latency: resource broker, performance prediction, scheduler, security, execution management
loss of parallelism
- load imbalance, serialisation, replicated job
data transfer
- file staging (stage in, stage out), access to database, third party data transfer, input from user, data transfer imbalance
activity
- parallel processing overheads, external load
unidentified overhead
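Overheads in this classification are quantified as time differences. A hypothetical sketch of two of them (the formulas are the standard ones for these overhead names; the method names are invented, not the ASKALON implementation):

```java
// Illustrative arithmetic for two overheads from the classification.
public class OverheadSketch {
    // Load imbalance: slowest parallel activity minus the mean across activities.
    static double loadImbalance(double[] activityTimes) {
        double max = 0, sum = 0;
        for (double t : activityTimes) { max = Math.max(max, t); sum += t; }
        return max - sum / activityTimes.length;
    }

    // Temporal overhead: observed workflow time minus the useful computation time.
    static double temporalOverhead(double observed, double useful) {
        return observed - useful;
    }

    public static void main(String[] args) {
        double[] times = {10.0, 12.0, 20.0};       // parallel activities (seconds)
        System.out.println(loadImbalance(times));  // 20 - 14 = 6.0
        System.out.println(temporalOverhead(25.0, 20.0)); // 5.0 s of overhead
    }
}
```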
24. Current status of the implementation
SOA-based
Monitoring, Instrumentation and Analysis services are GT4 based
XML-based for performance data representations and requests
Monitoring and Instrumentation Services
gSOAP, GSI-based dynamic instrumentation service
Java-based dynamic instrumentation
OCM-G
But we need to integrate them into a single framework for workflows,
and that is a non-trivial task
Analysis services
GT4-based with distributed components
Simple language for workflow analysis request (WARL), designed
based on WfPerfOnto
Metrics are described in XML
26. Snapshot: Online System Monitoring
27. Rule-based Monitoring
Sensors use rules to analyze monitoring data
Rules are based on the IBM ABLE toolkit
Example:
A network path in the Austrian Grid, bandwidth < 5 MB/s (based on Iperf)
Define a fuzzy variable with VERY LOW, LOW, MEDIUM, HIGH, VERY HIGH
Fuzzy rules: events are sent when bandwidth is VERY LOW or VERY HIGH
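A crude, non-fuzzy sketch of the rule on this slide: classify bandwidth into the five labels and raise an event only for the extremes. The thresholds above 5 MB/s are invented for illustration; the talk uses the IBM ABLE toolkit for the actual fuzzy rules.

```java
// Crisp approximation of the slide's fuzzy bandwidth rule.
// Only the 5 MB/s trigger comes from the slide; other thresholds are invented.
public class BandwidthRuleSketch {
    enum Label { VERY_LOW, LOW, MEDIUM, HIGH, VERY_HIGH }

    static Label classify(double mbPerSec) {
        if (mbPerSec < 5)   return Label.VERY_LOW;  // slide: < 5 MB/s triggers the rule
        if (mbPerSec < 20)  return Label.LOW;
        if (mbPerSec < 50)  return Label.MEDIUM;
        if (mbPerSec < 100) return Label.HIGH;
        return Label.VERY_HIGH;
    }

    // Events are sent only when bandwidth is VERY LOW or VERY HIGH.
    static boolean sendEvent(double mbPerSec) {
        Label l = classify(mbPerSec);
        return l == Label.VERY_LOW || l == Label.VERY_HIGH;
    }

    public static void main(String[] args) {
        System.out.println(sendEvent(3.0));   // true: VERY_LOW
        System.out.println(sendEvent(30.0));  // false: MEDIUM
    }
}
```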
29. DAG-based Workflow Monitoring and Analysis
Online application profiling
Monitoring data of workflow activity execution
Monitoring data of computational nodes
30. K-WfGrid: Online Workflow Monitoring and Analysis
32. Conclusions
The architecture of monitoring and analysis services must tackle the dynamics and the diversity of the Grid
Service-oriented, peer-to-peer model, adaptive sensors
Integration and reuse are important issues
Loosely coupled and tightly coupled
Do not neglect data representations and service interfaces
Performance metrics and ontology for Grid workflows
Which performance metrics are important and how to measure them
Common and generic concepts and relationships to be monitored and analyzed
Given well-defined service interfaces, data representations, performance metrics and ontology
Simplify the integration among components
Towards automatic, intelligent and distributed performance analysis
UIBK perf. work: http://dps.uibk.ac.at/projects/pma/
Papers: http://dps.uibk.ac.at/index.pl/publications