Managing and running business processes in the Cloud changes how Workflow Management Systems (WfMSs) are deployed. Consequently, when designing such WfMSs, there is a need to determine the sweet spot in the performance vs. resource consumption trade-off. While all Cloud providers agree on the pay-as-you-go resource consumption model, every provider uses a different cost model to gain a competitive edge. In this paper, we present a novel method for estimating the infrastructure costs of running business processes in the Cloud. The method is based on precise measurement of the resources required to run a mix of business processes in the Cloud while meeting expected performance requirements. To showcase the method, we use the BenchFlow framework to run experiments on a widely used open-source WfMS executing a custom workload with a varying number of simulated users. The experiments are necessary to reliably measure the WfMS's performance and resource consumption, which is then used to estimate the infrastructure costs of executing such a workload on four different Cloud providers.
My talk from The International Conference on Business Process Management 2016 (BPM2016). Cite us: http://link.springer.com/chapter/10.1007/978-3-319-45468-9_5
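To illustrate the kind of estimate the method produces, the following Python sketch multiplies measured resource consumption by per-provider unit prices. The numbers and the pricing dimensions are made up for illustration; real providers meter and price resources differently, which is exactly why the paper compares four of them.

```python
# Illustrative sketch (not the paper's actual model): estimate infrastructure
# cost from measured resource consumption under per-provider pricing.

# Hypothetical measured consumption for running the workload for one hour.
measured = {"vcpu_hours": 4.0, "ram_gb_hours": 16.0, "egress_gb": 2.5}

# Hypothetical unit prices in USD; each provider prices dimensions differently.
providers = {
    "provider_a": {"vcpu_hours": 0.040, "ram_gb_hours": 0.005, "egress_gb": 0.09},
    "provider_b": {"vcpu_hours": 0.035, "ram_gb_hours": 0.006, "egress_gb": 0.12},
}

def estimate_cost(usage, prices):
    """Sum the cost over each metered resource dimension."""
    return sum(usage[k] * prices[k] for k in usage)

for name, prices in providers.items():
    print(f"{name}: ${estimate_cost(measured, prices):.4f}/hour")
```

The key point of the method is that `measured` comes from controlled experiments rather than from vendor sizing guides, so the comparison across providers reflects the actual resources the workload needs.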
Maximizing Your CA Datacom® Investment for the New Application Economy (Part 2) - CA Technologies
Make sure your CA Datacom® mainframe database is being used to the fullest, and optimized to maximize your investment. Join us to hear about a number of modernization enhancements that help to improve performance, scalability, platform support, standards compliance and usability. This interactive technical education is for customers who have recently upgraded, are currently upgrading, or are considering upgrading to the most current product releases. We will discuss the most impactful enhancements and best practices so you can immediately begin using these recommended features with confidence to help improve your CA Datacom operations today. For more information, please visit http://cainc.to/Nv2VOe
Service revamp lean six sigma black belt project - Sumit K Jha
Project- Service revamp
Type- Lean Six Sigma Black Belt Project
Outline- To improve the entire process of getting purchase orders, purchasing, manufacturing, warehousing and installation
Tools/Framework- Six Sigma concepts such as SIPOC, fishbone analysis, control charts and hypothesis testing; statistical tools; Microsoft Dynamics AX
Role- Project manager
Outcome- The successful completion of the project yielded cost savings of INR 1.61 crores
Unlike databases, for which established benchmarks have been driving the advancement of the field for a long time, workflow engines still lack a well-accepted benchmark that allows a fair comparison of their performance. In this talk we discuss the reasons and propose how to address the main challenges related to benchmarking these complex middleware systems at the core of business process automation and service composition solutions. In particular, we look at how to generate a representative workload and how to define suitable performance metrics. You will learn how to use our framework to measure the performance and resource consumption of your BPMN engine and compare different configurations to tune its performance in your concrete real-life project. The talk also presents preliminary experimental results obtained while benchmarking popular open source engines.
Workflow Engine Performance Benchmarking with BenchFlow - Vincenzo Ferme
Unlike databases, for which established benchmarks have been driving the advancement of the field for a long time, workflow engines still lack a well-accepted benchmark that allows a fair comparison of their performance. In this talk we discuss how BenchFlow addresses the main challenges related to benchmarking these complex middleware systems at the core of business process automation and service composition solutions. In particular, we look at how to define suitable workloads and representative performance metrics, and how to fully automate the execution of the performance experiments. To automate experiment execution and analysis, we designed and implemented the BenchFlow framework. The framework automates the deployment of WfMSs packaged within Docker containers. This way, the initial configuration and conditions for each experiment can be precisely controlled and reproduced. Moreover, the BenchFlow framework supports heterogeneous WfMS APIs by providing an extensible plugin mechanism. During the benchmark execution it automatically deploys a set of BPMN models and invokes them according to parametric load functions specified declaratively. It automatically collects performance and resource consumption data on both the driver and the servers during the experiment, as well as extracting them from the WfMS database afterwards. In addition to latency, throughput and resource utilisation, we compute multiple performance metrics that characterise the WfMS performance at the engine level, at the process level and at the BPMN construct level. To ensure reliability and improve the usefulness of the obtained results, we automatically compute descriptive statistics and perform statistical tests to assess the homogeneity of results obtained from different repetitions of the same experiment.
The talk will also present experimental results obtained while benchmarking popular open source engines, using workflow patterns as a micro-benchmark.
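The homogeneity check mentioned in the abstract can be illustrated with a simplified sketch. The function names, sample data, and the coefficient-of-variation criterion below are all illustrative stand-ins; BenchFlow applies proper statistical tests to compare repetitions.

```python
# Hedged sketch of checking result homogeneity across experiment repetitions:
# summarize each repetition, then flag the experiment if repetition means
# disagree too much relative to their overall mean.
from statistics import mean, stdev

def repetition_summary(samples):
    """Descriptive statistics for one repetition's latency samples (ms)."""
    return {"mean": mean(samples), "stdev": stdev(samples), "n": len(samples)}

def homogeneous(repetitions, cv_threshold=0.05):
    """Consider repetitions homogeneous if the coefficient of variation of
    their per-repetition means stays below cv_threshold."""
    means = [mean(r) for r in repetitions]
    return stdev(means) / mean(means) < cv_threshold

# Hypothetical latencies (ms) from three repetitions of the same experiment.
reps = [[101, 99, 103, 98], [100, 102, 97, 101], [99, 100, 104, 100]]
print(homogeneous(reps))
```

When repetitions disagree, the run is not aggregated into a single result; instead the disagreement itself signals that the experimental conditions were not reproduced.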
Business Case Calculator for DevOps Initiatives - Leading credit card service... - Capgemini
The 2015 World Quality Report data reveals that 61% of respondents rate time-to-market as very important, which is a key reason for the proliferation of DevOps. The biggest ingredient is speed, based on efficiencies upstream and in operations. Technology leaders now need to wear a business hat and build their strategy based on the cost to achieve desired velocity, as opposed to cost savings.
Join MasterCard and Capgemini to learn about a real, time-to-market-driven DevOps business case calculator with technology, process and tool components.
Presented at HPE Discover Las Vegas 2016.
Understanding the TCO and ROI of Apache Kafka & Confluent - confluent
For a product or service to be cost effective, it must be considered to be good value, where the benefits are worth at least what is paid for them. But how do we measure this, to prove the case? Given that value can be intangible, it can be hard to quantify and may have little relationship to cost. Added to this, the open source nature of Apache Kafka means that many companies skip the requirement to build a business case for it, until it has become mission critical and demands financial and human resources.
In this presentation, Lyndon Hedderly, Team Lead of Business Value Consulting at Confluent, will cover how Confluent works with customers to measure the business value of data streaming.
Product Keynote: Jira Service Desk, Opsgenie, Statuspage - Atlassian
Software has changed the way we work, and no one has had to adapt faster than the IT team. In this keynote, learn about the new developments across Atlassian cloud products targeted at helping IT teams meet the challenges of building a world-class operations, incident management and support organization.
Continuous Performance Testing: The New Standard - TechWell
In the past several years the software development lifecycle has changed significantly with high-speed software releases, shared application services, and platform virtualization. The traditional performance assurance approach of pre-release testing does not address these innovations. To maintain confidence in acceptable performance in production, pre-release testing must be augmented with in-production performance monitoring. Obbie Pet describes three types of monitors—performance, resource, and VM platform—and three critical metrics fundamental to isolating performance problems—response time, transaction rate, and error rate. Obbie reviews techniques to acquire and interpret these metrics, and describes how to develop a continuous performance monitoring process. In conjunction with pre-release testing, this monitoring can be woven into a single integrated process, offering a best bet in assuring performance in today’s development world. Take away this integrated process for consideration in your own shop.
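The three critical metrics named in the abstract can be computed from any request log. The following sketch uses a hypothetical log format (timestamp, response time, HTTP status) purely for illustration; it is not tied to any particular monitoring product.

```python
# Compute response time, transaction rate, and error rate from a
# hypothetical request log of (timestamp_seconds, response_time_ms, status).
from statistics import mean

log = [
    (0.0, 120, 200), (0.4, 95, 200), (1.1, 310, 500),
    (1.9, 105, 200), (2.5, 88, 200), (3.2, 450, 503),
]

window = log[-1][0] - log[0][0]                    # observation window, seconds
avg_response_ms = mean(r for _, r, _ in log)       # metric 1: response time
transaction_rate = len(log) / window               # metric 2: transactions/sec
error_rate = sum(1 for *_, s in log if s >= 500) / len(log)  # metric 3

print(f"avg response: {avg_response_ms:.1f} ms, "
      f"rate: {transaction_rate:.2f} tps, errors: {error_rate:.0%}")
```

Tracking these three together is what allows problems to be isolated: for example, a rising response time at a constant transaction rate points at the system rather than at the load.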
IBM Streams V4.1 and Incremental Checkpointing - lisanl
Fang Zheng is a member of the IBM Streams development team. In his presentation, Fang provides an introduction to the incremental checkpointing feature that is available in IBM Streams V4.1, including how it works and how to use it.
How Lean helped us put quality back at the heart of our Agile Process, by Ren... - Institut Lean France
Learn how BISAM, the leading software vendor for Performance, Attribution & Composites Analytics, decided to refocus on quality after more than 10 years of disciplined Agile practices.
A fascinating Lean IT story presented by Renaud Wilsiud, CTO of BISAM, at the Lean IT Summit 2017.
Discover more Lean IT REX on www.lean-it-summit.com
Driving Innovation with Kanban at Jaguar Land Rover - LeanKit
Find out how Kanban is accelerating product design and development at Jaguar Land Rover.
Watch the recorded webinar here: https://vimeo.com/172780037
Hamish McMinn, Automotive and IT Project Manager, will explain how Kanban is improving time, cost and quality across new vehicle development projects at Jaguar Land Rover.
You'll learn:
-Why new product development provides rich opportunities for continuous process improvement.
-Benefits and challenges of transferring agile software techniques to hardware design and development.
-How to visualize work, focus on flow and increase cross-functional collaboration using LeanKit.
Hamish will share learnings from the initial pilot project, and how Kanban is now being scaled across multiple engineering teams.
Presentation by Céline Deknop of the paper "Advanced Differencing of Legacy Code and Migration Logs" @SATToSE2020 (Virtual event).
A recording of the presentation can be found here: https://www.youtube.com/watch?v=YJxPzWqW9DI&fbclid=IwAR3voPfFsp-ywRUrXOejW4oq8axlFAqbxGidNh2WMEE_VR-pb0diK3Cb05Y (around the 3h mark)
FME makes it possible to solve complex problems, but this often leads to complex models which can be challenging to maintain without introducing errors or unwanted behaviours. Typical scenarios are when changing the model or updating the FME version, transformer versions, or other components used by the model.
Testing is a cornerstone in the modern software development process to assure code quality by examining the software's artifacts and behaviour. Methodologies for testing source code are well developed, mature, and well-described. On the other hand, methods for testing scripts or programs developed with "drag-and-drop" development software like FME are not as well established. Some work on establishing a test framework for FME scripts has been done; for example, the rTest framework developed by the company Veremes and presented at FME IUC 2017.
We present a test method for quality assurance of FME scripts that is not based on defining test cases. Instead, it works by comparing datasets at different measurement points in a script to assure that the data has not changed in an unwanted way after the script has been updated.
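The dataset-comparison idea can be sketched as follows. This is not FME's or rTest's actual implementation; the fingerprinting scheme and the sample features are invented to illustrate how a measurement point can be compared against a stored baseline.

```python
# Sketch of comparing datasets at a measurement point: fingerprint a
# normalized view of the features so a script update is flagged only when
# the data actually differs from the baseline, not when it is reordered.
import hashlib
import json

def dataset_fingerprint(features):
    """Order-insensitive fingerprint of a list of feature dicts."""
    canonical = sorted(json.dumps(f, sort_keys=True) for f in features)
    return hashlib.sha256("\n".join(canonical).encode()).hexdigest()

baseline = [{"id": 1, "area": 10.5}, {"id": 2, "area": 7.25}]
after_update = [{"id": 2, "area": 7.25}, {"id": 1, "area": 10.5}]  # reordered only

# Reordering alone does not change the fingerprint; a value change would.
print(dataset_fingerprint(baseline) == dataset_fingerprint(after_update))
```

Storing one fingerprint per measurement point keeps the baseline small while still detecting any unwanted change introduced between those points.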
Webinar - Order out of Chaos: Avoiding the Migration Migraine - Peak Hosting
When your business has outgrown your current managed hosting provider, the logical thing is to search for something better. Change can be difficult and chaotic, but it doesn’t have to be.
This webinar focuses on best practices for making your migration from the cloud as pain free as possible, including a discussion on what you need to know and ask of your migration provider to ensure it goes smoothly. As an example of this, we will outline Peak Hosting’s migration process, as well as discuss one of our customer migrations and why they chose to undertake it.
This presentation covers the performance testing implemented in the project.
Certain tools had to be used in the POC process:
1) VSTS
2) JMeter
Finally, VSTS was implemented.
I hope this presentation gives the viewer an overview of the tools Accenture works with.
Evolving from Automated to Continuous Testing for Agile and DevOps - Parasoft
As agile development practices mature and DevOps principles begin to infiltrate our corporate cultures, organizations realize the distinct opportunity to accelerate software delivery.
A Declarative Approach for Performance Tests Execution in Continuous Software... - Vincenzo Ferme
Software performance testing is an important activity to ensure quality in continuous software development environments. Current performance testing approaches are mostly based on scripting languages and frameworks where users implement, in a procedural way, the performance tests they want to issue against the system under test. However, existing solutions lack support for explicitly declaring the performance test goals and intents. Thus, while it is possible to express how to execute a performance test, its purpose and applicability context remain implicitly described. In this work, we propose a declarative domain specific language (DSL) for software performance testing and a model-driven framework that can be programmed using this language to drive the end-to-end process of executing performance tests. Users of the DSL and the framework can specify their performance intents by relying on a powerful goal-oriented language, where standard (e.g., load tests) and more advanced (e.g., stability boundary detection and configuration tests) performance tests can be specified starting from templates. The DSL and the framework have been designed to be integrated into a continuous software development process and validated through extensive use cases that illustrate the expressiveness of the goal-oriented language and the powerful control it enables over the end-to-end performance test execution to determine how to reach the declared intent.
My talk from The 9th ACM/SPEC International Conference on Performance Engineering (ICPE 2018). Cite us: https://dl.acm.org/citation.cfm?id=3184417
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An... - Vincenzo Ferme
BenchFlow is an open-source expert system providing a complete platform for automating performance tests and performance analysis. We know that not all developers are performance experts, but in today's agile environments they need to deal with performance testing and performance analysis every day. In BenchFlow, users define objective-driven performance tests using an expressive and SUT-aware DSL implemented in YAML. BenchFlow then automates the end-to-end process of executing the performance tests and providing performance insights: deploying the system under test using Docker technologies, distributing simulated user load across different servers, handling errors, collecting performance data, and computing performance metrics and insights.
My talk for SPEC Research Group DevOps (https://research.spec.org/devopswg) about BenchFlow. Discover BenchFlow: https://github.com/benchflow
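As a purely hypothetical illustration, an objective-driven test definition in such a YAML DSL might look like the fragment below. The field names are invented for illustration and do not reflect BenchFlow's actual schema.

```yaml
# Illustrative only — not BenchFlow's real schema.
test_name: load-test-demo
sut:
  name: my-wfms                 # system under test, deployed via Docker
  deployment: docker-compose.yml
goal:
  type: load                    # e.g. load, stability boundary, configuration
  max_response_time: 500ms      # the declared performance objective
workload:
  users: 100
  ramp_up: 60s
  duration: 10m
```

The point of such a declarative definition is that the objective (here, a response-time bound under a given load) is stated explicitly, so the framework can decide how to run the test and report whether the intent was met.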
Towards Holistic Continuous Software Performance Assessment - Vincenzo Ferme
In agile, fast and continuous development lifecycles, software performance analysis is fundamental to confidently release continuously improved software versions. Researchers and industry practitioners have identified the importance of integrating performance testing in agile development processes in a timely and efficient way. However, existing techniques are fragmented and are not integrated in a way that takes into account the heterogeneous skills of the users developing polyglot distributed software, and their need to automate performance practices as they are integrated into the whole lifecycle without breaking its intrinsic velocity. In this paper we present our vision for holistic continuous software performance assessment, which is being implemented in the BenchFlow tool. BenchFlow enables performance testing and analysis practices to be pervasively integrated in continuous development lifecycle activities. Users can specify performance activities (e.g., standard performance tests) by relying on an expressive Domain Specific Language for objective-driven performance analysis. Collected performance knowledge can thus be reused to speed up performance activities throughout the entire process.
My talk from The International Workshop on Quality-aware DevOps (QUDOS 2017). Cite us: http://dl.acm.org/citation.cfm?id=3053636
Using Docker Containers to Improve Reproducibility in Software and Web Engine... - Vincenzo Ferme
The ability to replicate and reproduce scientific results has become an increasingly important topic for many academic disciplines. In computer science and, more specifically, software and Web engineering, contributions of scientific work rely on developed algorithms, tools and prototypes, quantitative evaluations, and other computational analyses. Published code and data come with many undocumented assumptions, dependencies, and configurations that are internal knowledge and make reproducibility hard to achieve. This tutorial presents how Docker containers can overcome these issues and aid the reproducibility of research artefacts in software engineering and discusses their applications in the field.
Cite us: http://link.springer.com/chapter/10.1007/978-3-319-38791-8_58
A Container-Centric Methodology for Benchmarking Workflow Management Systems - Vincenzo Ferme
Trusted benchmarks should provide reproducible results obtained following a transparent and well-defined process. In this paper, we show how Containers, originally developed to ease the automated deployment of Cloud application components, can be used in the context of a benchmarking methodology. The proposed methodology focuses on Workflow Management Systems (WfMSs), a critical service orchestration middleware, which can be characterized by its architectural complexity, for which Docker Containers offer a highly suitable approach. The contributions of our work are: 1) a new benchmarking approach taking full advantage of containerization technologies; and 2) the formalization of the interaction process with the WfMS vendors described clearly in a written agreement. Thus, we take advantage of emerging Cloud technologies to address technical challenges, ensuring the performance measurements can be trusted. We also make the benchmarking process transparent, automated, and repeatable so that WfMS vendors can join the benchmarking effort.
On the Road to Benchmarking BPMN 2.0 Workflow Engines - Vincenzo Ferme
We design the first benchmark to assess and compare the performance of Workflow Engines that are compliant with the Business Process Model and Notation 2.0 (BPMN 2.0) standard.
My talk from The International Conference on Performance Engineering 2015 (ICPE2015) - http://icpe2015.ipd.kit.edu/icpe2015/
Open Data: what it is, what it is for, and where to find it. An overview of the current state of Open Data, worldwide and in Italy.
To get the most out of the presentation, which includes videos and notes, I recommend downloading it.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
4.–8. Vincenzo Ferme
Deploying WfMSs to the Cloud
sweet spot in the performance vs. resource consumption trade-off
[Figure, shown in incremental builds: a WfMS executes a mix of processes (A, B, C, D) issued by users through a workload, deployed on Cloud IT resources offered as IaaS, PaaS, or BPaaS, each with a price tag ($)]
9.–12. Vincenzo Ferme
Challenges in Deploying WfMSs in the Cloud
evaluate the Cloud cost models, determine the best suitable provider
[Figure, shown in incremental builds: evaluating the Cloud offers to find the best suitable provider means trading off price ($), performance, and resources]
17. Vincenzo Ferme
Goal
select candidate VMs according to workload and wanted performance
[Figure: the WfMS's measured RAM and CPU usage (a CPU + RAM bound profile), throughput, and response time are matched against Cloud VMs at different price points ($ to $$)]
18.–24. Vincenzo Ferme
The Steps of the Method
from performance requirements to cheapest VMs
[Figure, shown in incremental builds, with the seven steps:]
1. Workload Mix (e.g., 80% of models A, B, C and 20% of model D)
2. Load Function (number of users over time)
3. Performance Requirements (Metrics, KPIs)
4. Run Experiments
5. Analyse Results (Resources, Performance)
6. Cloud Offers ($)
7. VMs Selection (Resource Efficiency, Cost)
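The selection at the end of the method (steps 5–7) can be sketched as a small routine: keep the VMs that can host the measured resource demand while the KPIs are met, then rank the survivors by price. Everything below is illustrative; the VM names, prices, requirements, and measured figures are invented placeholders, not the paper's data.

```python
# Sketch of steps 5-7: filter the Cloud offers by the measured resource
# demand and the performance requirements, then rank by hourly price.
# All numbers and VM names are hypothetical placeholders.

measured = {"cpu_cores": 1.6, "ram_gb": 3.2, "throughput_bps": 110.0}
requirements = {"min_throughput_bps": 100.0}

vm_catalogue = [  # (name, cores, ram_gb, usd_per_hour) - invented values
    ("provider1.small",  1, 2.0, 0.05),
    ("provider1.medium", 2, 4.0, 0.10),
    ("provider2.large",  4, 8.0, 0.19),
]

def select_vms(measured, requirements, catalogue):
    # Step 5: the measured configuration must meet the KPIs at all.
    if measured["throughput_bps"] < requirements["min_throughput_bps"]:
        return []
    # Step 6/7: keep VMs with enough cores and RAM, cheapest first.
    suitable = [vm for vm in catalogue
                if vm[1] >= measured["cpu_cores"] and vm[2] >= measured["ram_gb"]]
    return sorted(suitable, key=lambda vm: vm[3])

ranked = select_vms(measured, requirements, vm_catalogue)
```

With the placeholder numbers above, the small VM is ruled out (too few cores) and the remaining two are ranked by price.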
27.–28. Vincenzo Ferme
Assumptions
RAM and CPU, only WfMS no DBMS, workload dependent
[Figure, shown in incremental builds: the observed resources are CPU and RAM (no network, no I/O); only the WfMS is measured, not the DBMS; the results are workload dependent (process mix and load function)]
29. Vincenzo Ferme
An Example of Application
applying the method for the Camunda WfMS
[Figure: the seven steps of the method instantiated for Camunda: 1. workload mix, 2. load function, 3. performance requirements (metrics, KPIs), 4. run experiments, 5. analyse results (resources, performance), 6. Cloud offers ($), 7. VMs selection (resource efficiency, cost)]
30.–34. Vincenzo Ferme
1. The Workload Mix
realistic, mix of 5 models with different complexity
[Figure: BPMN model camunda_additional_approval — an approval loop of empty and decision script tasks ("Change necessary?", "Amend something", "Do some approval", "Approved?"); 21 elements (nodes + edges)]
[Figure: BPMN model camunda_oopSignavio — an order-handling process from order received to order confirmed, rejected, or canceled, with location determination, escalation, and approval branches depending on the order amount; 32 elements (nodes + edges)]
[Figure: BPMN model camunda_invoice_collaboration — invoice scanning, approver assignment, invoice approval, review, bank transfer preparation, and archiving; 39 elements (nodes + edges)]
[Figure: BPMN model activiti_modelSignavio — a loan-application process with in-depth medical analysis, risk assessment, scoring, meeting request, approval, and final evaluation; 41 elements (nodes + edges)]
[Figure: BPMN model COUNTERPARTY — a counterparty onboarding process with tax and compliance checks, override reviews, SSI review, and counterparty publication; 84 elements (nodes + edges)]
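A load driver can reproduce a workload mix like the one above with a weighted random choice over the five models. The weights below are illustrative placeholders (a uniform split); the actual experiments define their own distribution.

```python
import random

# Hypothetical start weights for the five models of the mix; the real
# experiments use their own distribution.
WORKLOAD_MIX = {
    "camunda_additional_approval":   0.20,
    "camunda_oopSignavio":           0.20,
    "camunda_invoice_collaboration": 0.20,
    "activiti_modelSignavio":        0.20,
    "COUNTERPARTY":                  0.20,
}

def next_model(rng=random):
    """Pick the process model to start next, according to the mix."""
    models = list(WORKLOAD_MIX)
    weights = [WORKLOAD_MIX[m] for m in models]
    return rng.choices(models, weights=weights, k=1)[0]
```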
35.–37. Vincenzo Ferme
2. The Load Functions
realistic for different sized companies
• 50, 500, and 1000 users (usr)
• 1 second think time
[Figure: example load function for usr=1000 — users (0 to 1000) over time (min:sec, from 0:00 to 10:00)]
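The load function for usr=1000 can be approximated as a linear ramp-up followed by a steady phase. The ramp-up and steady durations below are illustrative assumptions read off the plot's axis, not figures stated on the slide; only the 1 second think time is taken from the slide.

```python
def active_users(t_sec, total_users=1000, ramp_up_sec=30, steady_sec=600):
    """Number of simulated users active t_sec after the test starts.

    Linear ramp-up to total_users, then a steady phase; both durations
    are illustrative assumptions, not the paper's exact load function.
    Each user waits a 1 second think time between requests.
    """
    if t_sec < 0 or t_sec > ramp_up_sec + steady_sec:
        return 0
    if t_sec < ramp_up_sec:
        return int(total_users * t_sec / ramp_up_sec)
    return total_users
```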
60.–63. Vincenzo Ferme
Limitations and Future Work
measure on the Cloud, validate the approach
[Figure, shown in incremental builds: the method's steps extended with a validation step, 8. Cloud Experiments; planned extensions: add Network and I/O to the observed resources, and consider the DBMS]
64.–67. Highlights
[Recap of earlier slides, shown in incremental builds:]
Deploying WfMSs to the Cloud — sweet spot in the performance vs. resource consumption trade-off (slide 2)
Proposed Method — The Steps of the Method: from performance requirements to cheapest VMs (slide 5)
Application to Camunda — 7. VMs Selection: select the set of suitable VMs, and identify the cheapest ones; Normalised Cores Utilisation vs Price of VMs (slide 23)
Limitations and Future Work — approximation of the actual performance on the Cloud: Performance on the Cloud has more Variance (slide 24)
70. Vincenzo Ferme
Published Work
[BTW '15] C. Pautasso, V. Ferme, D. Roller, F. Leymann, and M. Skouradaki. Towards Workflow Benchmarking: Open Research Challenges. In Proc. of the 16th Conference on Database Systems for Business, Technology, and Web, BTW 2015, pages 331–350, 2015.
[SSP '14] M. Skouradaki, D. H. Roller, F. Leymann, V. Ferme, and C. Pautasso. Technical Open Challenges on Benchmarking Workflow Management Systems. In Proc. of the 2014 Symposium on Software Performance, SSP 2014, pages 105–112, 2014.
[ICPE '15] M. Skouradaki, D. H. Roller, L. Frank, V. Ferme, and C. Pautasso. On the Road to Benchmarking BPMN 2.0 Workflow Engines. In Proc. of the 6th ACM/SPEC International Conference on Performance Engineering, ICPE '15, pages 301–304, 2015.
71. Vincenzo Ferme
Published Work
[CLOSER '15] M. Skouradaki, V. Ferme, F. Leymann, C. Pautasso, and D. H. Roller. "BPELanon": Protect Business Processes on the Cloud. In Proc. of the 5th International Conference on Cloud Computing and Service Science, CLOSER 2015. SciTePress, 2015.
[SOSE '15] M. Skouradaki, K. Goerlach, M. Hahn, and F. Leymann. Application of Sub-Graph Isomorphism to Extract Reoccurring Structures from BPMN 2.0 Process Models. In Proc. of the 9th International IEEE Symposium on Service-Oriented System Engineering, SOSE 2015, 2015.
[BPM '15] V. Ferme, A. Ivanchikj, and C. Pautasso. A Framework for Benchmarking BPMN 2.0 Workflow Management Systems. In Proc. of the 13th International Conference on Business Process Management, BPM '15, pages 251–259, 2015.
72. Vincenzo Ferme
Published Work
[BPMD '15] A. Ivanchikj, V. Ferme, and C. Pautasso. BPMeter: Web Service and Application for Static Analysis of BPMN 2.0 Collections. In Proc. of the 13th International Conference on Business Process Management (Demo), BPM '15, pages 30–34, 2015.
[ICPE '16] V. Ferme and C. Pautasso. Integrating Faban with Docker for Performance Benchmarking. In Proc. of the 7th ACM/SPEC International Conference on Performance Engineering, ICPE '16, 2016.
[CLOSER '16] V. Ferme, A. Ivanchikj, C. Pautasso, M. Skouradaki, and F. Leymann. A Container-centric Methodology for Benchmarking Workflow Management Systems. In Proc. of the 6th International Conference on Cloud Computing and Service Science, CLOSER 2016. SciTePress, 2016.
73. Vincenzo Ferme
Published Work
[CAiSE '16] M. Skouradaki, V. Ferme, C. Pautasso, F. Leymann, and A. van Hoorn. Micro-Benchmarking BPMN 2.0 Workflow Management Systems with Workflow Patterns. In Proc. of the 28th International Conference on Advanced Information Systems Engineering, CAiSE '16, 2016.
[ICWE '16] J. Cito, V. Ferme, and H. C. Gall. Using Docker Containers to Improve Reproducibility in Software and Web Engineering Research. In Proc. of the 16th International Conference on Web Engineering, ICWE '16, 2016.
[ICWS '16] M. Skouradaki, V. Andrikopoulos, and F. Leymann. Representative BPMN 2.0 Process Model Generation from Recurring Structures. In Proc. of the 23rd IEEE International Conference on Web Services, ICWS '16, 2016.
74. Vincenzo Ferme
Published Work
[SummerSOC '16] M. Skouradaki, T. Azad, U. Breitenbücher, O. Kopp, and F. Leymann. A Decision Support System for the Performance Benchmarking of Workflow Management Systems. In Proc. of the 10th Symposium and Summer School On Service-Oriented Computing, SummerSOC '16, 2016.
[OTM '16] M. Skouradaki, V. Andrikopoulos, O. Kopp, and F. Leymann. RoSE: Reoccurring Structures Detection in BPMN 2.0 Process Models Collections. In Proc. of the On the Move to Meaningful Internet Systems Conference, OTM '16, 2016. (to appear)
[BPM Forum '16] V. Ferme, A. Ivanchikj, and C. Pautasso. Estimating the Cost for Executing Business Processes in the Cloud. In Proc. of the 14th International Conference on Business Process Management, BPM Forum '16, 2016.
75. Vincenzo Ferme
Docker Performance
[IBM '14] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio. An Updated Performance Comparison of Virtual Machines and Linux Containers. IBM Research Report, 2014.
"Our results show that containers result in equal or better performance than VMs in almost all cases."
"Although containers themselves have almost no overhead, Docker is not without performance gotchas. Docker volumes have noticeably better performance than files stored in AUFS. Docker's NAT also introduces overhead for workloads with high packet rates. These features represent a tradeoff between ease of management and performance and should be considered on a case-by-case basis."
BenchFlow Configures Docker for Performance by Default
76. Vincenzo Ferme
Call for Collaboration
WfMSs, process models, process logs
Process Models
• We want to characterise the Workload Mix using real-world process models
• Share your executable BPMN 2.0 process models, even anonymised
Execution Logs
• We want to characterise the Load Functions using real-world behaviours
• Share your execution logs, even anonymised
WfMSs
• We want to add more and more WfMSs to the benchmark
• Contact us for collaboration, and BenchFlow framework support
77.–78. Vincenzo Ferme
State of the Art of WfMSs in the Cloud
Cloud WfMSs, business process optimisations, capacity planning
• Cloud WfMSs: an emerging market
• BPaaS: evaluations have just started
• Business process execution optimisations
• Cost-aware WfMSs
We focus on capacity planning with a widely used WfMS
89.–92. Vincenzo Ferme
Server-side Data and Metrics Collection: Monitors
[Figure, shown in incremental builds: the harness (Faban drivers, test execution web service) drives the WfMS and DBMS containers on the servers; CPU and DB monitors observe them]
Monitors' characteristics:
• RESTful services
• Lightweight (written in Go)
• As non-invasive on the SUT as possible
Examples of monitors:
• CPU usage
• Database state
93.–96. Vincenzo Ferme
Server-side Data and Metrics Collection: Collectors
[Figure, shown in incremental builds: Stats and DB collectors gather data from the WfMS and DBMS containers and store it in a Minio instance and an analyses database]
Collectors' characteristics:
• RESTful services
• Lightweight (written in Go)
• Two types: online and offline
• Buffer data locally
Examples of collectors:
• Container's stats (e.g., CPU usage)
• Database dump
• Application logs
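The monitors' shape (small RESTful probes exposing one reading over HTTP) can be sketched in a few lines. BenchFlow's monitors are written in Go; the Python stand-in below only illustrates the interface, and the /status endpoint and cpu_load_1m field are invented for this example.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_cpu_sample():
    # Placeholder probe: a real monitor would read container stats;
    # here we return the host's 1-minute load average (0.0 if unavailable).
    import os
    try:
        return os.getloadavg()[0]
    except OSError:
        return 0.0

class MonitorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"cpu_load_1m": read_cpu_sample()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the monitor quiet on the SUT
        pass

def start_monitor(port=0):
    """Start the monitor on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), MonitorHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```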