DS3 is a technique that decomposes complex system test cases into simpler test cases using log-based program slicing. It performs the following steps:
1. Uses static slicing to initially decompose test cases based on assertions.
2. Analyzes test case execution logs to identify dependencies between statements and global resources.
3. Refines the static slices by adding missing dependencies identified in the logs.
4. Merges slices that share all non-assertion statements to minimize duplication.
An evaluation on two large systems found that DS3 produced well-formed test case slices and significantly reduced test case execution times compared to the original complex test cases and standard static slicing. The sliced test cases also maintained code coverage and fault detection capability similar to the original test cases.
Log-Based Slicing for System-Level Test Cases
1. Log-Based Slicing for
System-Level Test Cases
Salma Messaoudi1, Donghwan Shin1, Annibale Panichella1,2,
Domenico Bianculli1, and Lionel Briand1,3
2. 2
Regression Testing
Introduction
• Arguably one of the most important activities in software testing
• However, its cost-effectiveness can be largely impaired by the cost of individual test cases
Quality assurance technique applied when changes are merged to an existing codebase
Test Case x
Test Case y
Test Case z
2.5 hours
4 hours
2 hours
Even if a perfect test case prioritization technique is applied,
no faults could be detected during the first few hours
3. 3
Observation: Complex System Test Cases
Introduction
• Often very expensive (e.g., 1 hour to run a single test case)
• Why?
• Trigger (and wait for) physical components that take a lot of time to complete
• Poorly designed (e.g., containing multiple test scenarios combined into a single test case)
From a collaborative industrial research project
• The execution time of the decomposed test cases would decrease
• The cost-effectiveness of test case prioritization could improve
• Furthermore, providing finer-grained information about test results would facilitate debugging
activities, such as fault localization
What if we decompose complex system test cases?!
4. 4
Simple Idea: Program Slicing on System Test Cases
Introduction
• Decompose a complex system test case containing multiple test scenarios into separate ones,
each of them with only one test scenario and its corresponding assertions
Static Slicing on line (4)
Static Slicing on line (7)
Decomposed
Test Case 1?
Decomposed
Test Case 2?
def test_example():
(1) fdm = create_fdm_setup()
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
(5) new = run_ic()
(6) diff = FindDiffs(ref, new, 1E-8)
(7) self.assertEqual(len(diff), 0)
Simplified Example System Test Case
5. 5
Limitation of Static Slicing
Introduction
'output.csv' is written by line (1)
and read by lines (2) and (3)
Static Slicing on line (4)
Not executable due to
the missing “hidden” dependency!
def test_example():
(1) fdm = create_fdm_setup()
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
(5) new = run_ic()
(6) diff = FindDiffs(ref, new, 1E-8)
(7) self.assertEqual(len(diff), 0)
Example System Test Case
def test_static_01():
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
Sliced Test Case
6. 6
Other Program Slicing Approaches?
Introduction
• Dynamic slicing*
• Requires instrumenting the source code and collecting coverage information
• Not feasible for a system composed of 3rd-party components whose source code is not available
• Does not address the problem of “hidden” dependencies, as they are not captured by code coverage
• Observation-based slicing**
• Requires running each system test case multiple times
• Not applicable to time-consuming system test cases (e.g., the ones developed by our industrial partner)
* Bogdan Korel and Janusz Laski. 1988. Dynamic program slicing. Information processing letters 29, 3 (1988), 155–163
** David Binkley, Nicolas Gold, Mark Harman, Syed Islam, Jens Krinke, and Shin Yoo. 2014. ORBS: Language-independent program slicing. In
Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, 109–120.
7. 7
New Idea: Leveraging Existing Logs to Refine Static Slices
Approach
• There are test case execution logs, obtained from past regression testing sessions
• The logs include run-time information about the system under test
• There is traceability between test case statements and log messages (if not, test cases can be instrumented)
• The logs contain information on the usage of global resources (if not, a watchdog process can be used)
• We can extract the global resources accessed (e.g., files, network connections) and the actions
performed (e.g., read/write files, open/close connections) upon executing each test statement
...
20210526:10:00:01 INFO [test_example.py:line 1] write 'output.csv'
20210526:10:00:01 INFO [test_example.py:line 1] fdm initialized
20210526:10:00:02 INFO [test_example.py:line 2] read 'output.csv'
20210526:10:00:05 INFO [test_example.py:line 3] read 'output.csv'
...
Example log
8. 8
Our Approach: Decomposing System teSt caSes (DS3)
Approach
DS3 pipeline (inputs: System Test Cases + System Test Case Logs):
1. Static Slicing: perform a static slice on each assertion → Candidate Sliced Test Cases
2. Def-Use Analysis for Global Resources: identify (stmt, entity, def/use) triples → Def-Use Info for Global Resources
3. Log-based Slice Refinement: add missing statements → Refined Sliced Test Cases
4. Slice Minimization → Final Sliced Test Cases
9. 9
Step 1: Backward Static Slicing
Approach
Using standard static slicing approaches
def test_example():
(1) fdm = create_fdm_setup()
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
(5) new = run_ic()
(6) diff = FindDiffs(ref, new, 1E-8)
(7) self.assertEqual(len(diff), 0)
Example System Test Case
def test_static_01():
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
Sliced Test Case 1 (based on line 4)
def test_static_02():
(2) ref = read_csv('output.csv')
(5) new = run_ic()
(6) diff = FindDiffs(ref, new, 1E-8)
(7) self.assertEqual(len(diff), 0)
Sliced Test Case 2 (based on line 7)
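The backward static slice on each assertion can be sketched as follows. The per-statement def/use variable sets (`stmts`) are a hand-made model of the example test case for illustration, and `backward_slice` is a hypothetical minimal slicer, not the authors' actual tooling.

```python
# Minimal backward static slicer over a hand-modeled test case.
# Each statement is (line_no, defined_vars, used_vars); this model is an
# assumption for illustration, not the authors' actual slicing tool.

def backward_slice(stmts, criterion_line):
    """Keep the criterion statement plus, transitively, every earlier
    statement that defines a variable the slice still needs."""
    by_line = {line: (defs, uses) for line, defs, uses in stmts}
    needed = set(by_line[criterion_line][1])  # variables not yet resolved
    kept = {criterion_line}
    for line, defs, uses in sorted(stmts, key=lambda t: t[0], reverse=True):
        if line >= criterion_line:
            continue
        if defs & needed:           # this statement defines a needed variable
            kept.add(line)
            needed -= defs
            needed |= uses          # its own uses must now be resolved too
    return sorted(kept)

# The slide's example test case, as def/use sets per line:
stmts = [
    (1, {"fdm"}, set()),            # fdm = create_fdm_setup()
    (2, {"ref"}, set()),            # ref = read_csv('output.csv')
    (3, {"sim"}, set()),            # sim = deploy_proc('output.csv')
    (4, set(), {"ref", "sim"}),     # assertEqual(ref, sim)
    (5, {"new"}, set()),            # new = run_ic()
    (6, {"diff"}, {"ref", "new"}),  # diff = FindDiffs(ref, new, 1E-8)
    (7, set(), {"diff"}),           # assertEqual(len(diff), 0)
]

print(backward_slice(stmts, 4))  # [2, 3, 4]
print(backward_slice(stmts, 7))  # [2, 5, 6, 7]
```

Note how the slice on line 4 misses line 1: the 'output.csv' dependency is invisible to variable-level analysis, which is exactly the gap the log-based refinement fills.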
10. 10
Step 2: Def-Use Analysis for Global Resources
Approach
Based on the execution logs
...
20210526:10:00:01 INFO [test_example.py:line 1] write 'output.csv'
20210526:10:00:01 INFO [test_example.py:line 1] fdm initialized
20210526:10:00:02 INFO [test_example.py:line 2] read 'output.csv'
20210526:10:00:05 INFO [test_example.py:line 3] read 'output.csv'
...
Example log
(line 1, 'output.csv', def)
(line 2, 'output.csv', use)
(line 3, 'output.csv', use)
Identify (statement, global resources, def/use)
…
Analysis Results
based on keywords
Lines 2 and 3 depend on (→) line 1
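The keyword-based def-use extraction can be sketched as a simple log scan. The regex and the def/use keyword sets below are assumptions matching the example log format; a real deployment would configure its own patterns.

```python
import re

# Assumed log format: "... [<file>:line <N>] <action> '<resource>'".
LOG_RE = re.compile(r"\[[\w.]+:line (?P<line>\d+)\]\s+(?P<msg>.*)")
DEF_KEYWORDS = {"write", "create", "open"}   # actions that define a resource
USE_KEYWORDS = {"read"}                      # actions that use a resource

def defuse_from_log(log_lines):
    """Extract (statement line, global resource, 'def'/'use') triples."""
    triples = []
    for entry in log_lines:
        m = LOG_RE.search(entry)
        if not m:
            continue
        msg = m.group("msg")
        res = re.search(r"'([^']+)'", msg)   # quoted resource name, if any
        if not res:
            continue
        action = msg.split()[0].lower()
        if action in DEF_KEYWORDS:
            triples.append((int(m.group("line")), res.group(1), "def"))
        elif action in USE_KEYWORDS:
            triples.append((int(m.group("line")), res.group(1), "use"))
    return triples

log = [
    "20210526:10:00:01 INFO [test_example.py:line 1] write 'output.csv'",
    "20210526:10:00:01 INFO [test_example.py:line 1] fdm initialized",
    "20210526:10:00:02 INFO [test_example.py:line 2] read 'output.csv'",
    "20210526:10:00:05 INFO [test_example.py:line 3] read 'output.csv'",
]
print(defuse_from_log(log))
# [(1, 'output.csv', 'def'), (2, 'output.csv', 'use'), (3, 'output.csv', 'use')]
```

Messages without a quoted resource (e.g., "fdm initialized") are simply skipped.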
11. 11
Step 3: Log-based Slice Refinement
Approach
To add missing “hidden” dependencies into static slices
def test_static_01():
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
Sliced Test Case 1 (based on line 4)
def test_static_02():
(2) ref = read_csv('output.csv')
(5) new = run_ic()
(6) diff = FindDiffs(ref, new, 1E-8)
(7) self.assertEqual(len(diff), 0)
Sliced Test Case 2 (based on line 7)
def test_static_01():
(1) fdm = create_fdm_setup()
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
Refined Sliced Test Case 1
def test_static_02():
(1) fdm = create_fdm_setup()
(2) ref = read_csv('output.csv')
(5) new = run_ic()
(6) diff = FindDiffs(ref, new, 1E-8)
(7) self.assertEqual(len(diff), 0)
Refined Sliced Test Case 2
Lines 2, 3 → line 1
Line 2 → line 1
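Given the def-use triples, refinement is a fixed-point loop: while some statement in the slice uses a global resource whose defining statement is missing, add the definer. `refine_slice` is a hypothetical sketch of this step.

```python
def refine_slice(slice_lines, defuse):
    """Add to a static slice every statement that defines a global resource
    used by some statement already in the slice (to a fixed point)."""
    refined = set(slice_lines)
    changed = True
    while changed:
        changed = False
        # Resources used by statements currently in the slice:
        used = {res for line, res, kind in defuse
                if line in refined and kind == "use"}
        for line, res, kind in defuse:
            if kind == "def" and res in used and line not in refined:
                refined.add(line)
                changed = True
    return sorted(refined)

# Triples recovered from the example log (Step 2):
defuse = [(1, "output.csv", "def"),
          (2, "output.csv", "use"),
          (3, "output.csv", "use")]

print(refine_slice([2, 3, 4], defuse))     # [1, 2, 3, 4]
print(refine_slice([2, 5, 6, 7], defuse))  # [1, 2, 5, 6, 7]
```

Both static slices gain line 1, the writer of 'output.csv', and become executable.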
12. 12
Step 4: Slice Minimization
Approach
Based on our observations in real-world codebases
Observation: If all non-assertion statements of a slice are
also in another slice, both slices share the same test
scenario, and therefore should be merged
def test_static_01():
(1) fdm = create_fdm_setup()
(2) ref = read_csv('output.csv')
(3) sim = deploy_proc('output.csv')
(4) self.assertEqual(ref, sim)
Refined Sliced Test Case 1
def test_static_02():
(1) fdm = create_fdm_setup()
(2) ref = read_csv('output.csv')
(5) new = run_ic()
(6) diff = FindDiffs(ref, new, 1E-8)
(7) self.assertEqual(len(diff), 0)
Refined Sliced Test Case 2
Not a subset of the other
The slices are final!
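The merge rule can be sketched as a subset check on the non-assertion statements of each pair of slices. `minimize` below is a hypothetical sketch; passing the assertion line numbers explicitly is a simplification.

```python
def minimize(slices, assertion_lines):
    """Merge any slice whose non-assertion statements are a subset of
    another slice's: both then exercise the same test scenario."""
    result = [set(s) for s in slices]
    merged = True
    while merged:
        merged = False
        for i in range(len(result)):
            for j in range(len(result)):
                if i == j:
                    continue
                if result[i] - assertion_lines <= result[j] - assertion_lines:
                    result[j] |= result[i]   # keep all assertions of both
                    del result[i]
                    merged = True
                    break
            if merged:
                break
    return [sorted(s) for s in result]

asserts = {4, 7}

# Slide example: bodies {1,2,3} and {1,2,5,6} are not subsets of each
# other, so the two refined slices stay separate.
print(minimize([[1, 2, 3, 4], [1, 2, 5, 6, 7]], asserts))
# [[1, 2, 3, 4], [1, 2, 5, 6, 7]]

# Two slices sharing the same scenario would be merged:
print(minimize([[1, 2, 4], [1, 2, 7]], asserts))
# [[1, 2, 4, 7]]
```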
13. 13
Evaluation
Evaluation
• Aim to evaluate the following aspects of DS3:
• Slicing effectiveness (compared to vanilla static slicing)
• Execution time of sliced test cases (compared to their originals)
• Test effectiveness of sliced test cases (compared to their originals)
• Evaluation subjects
System   Programming Language   Number of Test Cases
Prop     Python                 30
JSBSim   C++ and Python         81 (76)
Table 1. Evaluation Subjects Summary
14. 14
Research Questions
Evaluation
• RQ1: How effective is DS3 in slicing system test cases compared to standard static slicing?
• RQ2: How efficient are the slices produced by DS3 compared to the original test cases?
• RQ3: What is the code coverage and fault detection capability of the slices produced by DS3
compared to the original test cases?
15. 15
RQ1: Slicing Effectiveness
Evaluation
• The standard static slicer has significantly lower slicing effectiveness than DS3
• Leveraging the global resource usages reported in the logs, DS3 is able to generate well-formed
slices in all cases
System   #TCs   Approach        #Slices (Total / Pass / Fail)   SlicingEff (Pass/Total)
Prop     30     DS3             137 / 137 / 0                   1.00
Prop     30     Static Slicer   166 / 40 / 126                  0.24
JSBSim   76     DS3             84 / 84 / 0                     1.00
JSBSim   76     Static Slicer   169 / 56 / 113                  0.33
Table 2. Slicing Effectiveness Comparison Results
16. 16
RQ2: Execution Time
Evaluation
• On average, the execution time of a sliced test case
(239.6s) is much less than that of the original test
case (3195.9s)
• There are some cases where the total execution
time of all slices for a system test case is greater
than that of the original test case
• Because the same “fixtures” are shared by multiple slices
• For some cases (e.g., TC30), even the total
execution time of all sliced test cases is much less
than that of the original
• Because the original has “expensive” statements that
actually do not have any dependencies on others
System Test Case   #Slices   Execution Time (s)
                             Original   S-Avg   S-Total
TC1                2         1139       520.0   1040
TC2                2         498        342.5   685
TC3                3         245        132.6   398
TC4                1         330        201     201
…                  …         …          …       …
TC29               10        680        99.1    991
TC30               5         23231      200.8   1004
Average            4.9       3195.9     239.6   967.1
Table 3. Execution Time Results
17. 17
RQ3: Test Effectiveness
Evaluation
• There is no significant difference, meaning sliced test cases are as effective as their original test
cases in terms of coverage and fault detection
• The minor difference is because, again, some of the original test cases include statements that actually
do not have any dependencies on others
JSBSim Test Cases   Covered Branches   Covered Functions   Killed Mutants
Original            5736               1831                289
Sliced              5698               1831                288
Table 4. Test Effectiveness Results
18. 18
Conclusion
Conclusion
• DS3 can automatically slice expensive system test cases into far less expensive sliced test cases
• Slicing test cases rarely decreases their effectiveness compared to the original test cases
• Sliced test cases can be further utilized by, for example, test case prioritization and test suite
selection, to increase the cost-effectiveness of regression testing
19. 19
Log-Based Slicing for System-Level Test Cases
@SnT_uni_lu
SnT, Interdisciplinary Centre for
Security, Reliability and Trust
Connect with us
Salma Messaoudi, Donghwan Shin, Annibale Panichella,
Domenico Bianculli, and Lionel Briand
Contact:
Donghwan Shin
Research Scientist
donghwan.shin@uni.lu