
Multi-objective Test Case Selection Through Linkage Learning-based Crossover (SSBSE'21)

Test case selection (TCS) aims to select a subset of the test suite to run for regression testing. The selection is typically based on past coverage and execution cost data. Researchers have successfully used multi-objective evolutionary algorithms (MOEAs), such as NSGA-II and its variants, to solve this problem. These MOEAs use traditional crossover operators to create new candidate solutions during the search. Recent studies in evolutionary computation have shown that more effective recombination is possible with linkage learning. Inspired by these advances, we propose a new variant of NSGA-II, called L2-NSGA, that uses linkage learning to optimize test case selection. In particular, we use an unsupervised clustering algorithm to infer promising patterns among the solutions (sub-test suites). These patterns are then used in the next iterations of L2-NSGA to create solutions that preserve them. Our results show that our customizations make NSGA-II more effective for test case selection. Furthermore, the test suite subsets generated by L2-NSGA are less expensive and more effective (they detect more faults) than those generated by the MOEAs used in the literature for regression testing.

  1. Multi-Objective Test Case Selection Through Linkage Learning-Driven Crossover. Mitchell Olsthoorn, Annibale Panichella
  2. Context: Regression Testing. Definition: re-running tests to ensure that previously developed and tested software still functions the same after a change. Expensive process: running the full test suite can take up to days for large software products. Methods: Test Case Minimization, Test Case Prioritization, Test Case Selection. [Yoo and Harman, STVR, 2012]
  3. Related Work: What Has Been Done So Far? Test case selection is treated as a bi-objective problem: maximize coverage [figure: binary coverage matrix, one row per test and one column per coverable unit B01..B12] and minimize cost [figure: per-test execution costs, e.g. 9477 ms, 7206 ms, ..., 8768 ms]. Solved with Multi-Objective Evolutionary Algorithms (MOEAs), in particular NSGA-II. [Yoo, S., Harman, M.: Pareto efficient multi-objective test case selection. International Symposium on Software Testing and Analysis (2007)]
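
     To make the encoding concrete, here is a minimal sketch (ours, not from the talk) of how a candidate selection, a 0/1 vector over the tests, could be scored against a coverage matrix and per-test costs; the matrix and cost values below are illustrative:

        import numpy as np

        # Rows = tests, columns = coverable units (e.g., branches B01..B12).
        coverage = np.array([
            [0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
            [1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0],
            [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0],
        ], dtype=bool)
        cost_ms = np.array([9477, 7206, 8768])  # execution cost per test

        def objectives(selection):
            """(coverage to maximize, cost to minimize) for a 0/1 selection vector."""
            mask = selection.astype(bool)
            covered = coverage[mask].any(axis=0).sum() / coverage.shape[1]
            return covered, float(cost_ms[mask].sum())

        print(objectives(np.array([1, 0, 1])))  # 11/12 units covered, 18245.0 ms
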
  4. Problem: Classical Recombination (Crossover). Randomized crossover performs poorly on high numbers of problem variables: it breaks up promising patterns. [Figure: two binary parent chromosomes recombined at a random cut point, splitting a block of co-selected tests]
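
     For contrast, a sketch of the kind of randomized recombination being criticized: single-point crossover (one of several classical operators; the "scattered" crossover from slide 16 randomizes per gene instead). A cut point landing inside a co-adapted group of tests copies only part of the group into each child:

        import random

        def single_point_crossover(p1, p2):
            """Cut both parents at one random point and swap the tails."""
            point = random.randint(1, len(p1) - 1)
            return p1[:point] + p2[point:], p2[:point] + p1[point:]

        # T05..T07 (indices 4..6) form a promising pattern in parent 1 only;
        # any cut point falling at 5 or 6 splits that pattern in the children.
        p1 = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
        p2 = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0]
        c1, c2 = single_point_crossover(p1, p2)
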
  5. Our Solution: based on NSGA-II, it uses linkage learning to infer groups of genes (test cases) that should stay together during crossover. [Figure: a linked block of bits transferred as one unit]
  6. L2-NSGA. Pop. = [figure: population matrix, one binary row per candidate sub-suite over the test cases T01..T12]

     Algorithm 1: L2-NSGA
      1  begin
      2    P ← INITIAL-POPULATION()
      3    while not (end condition) do
      4      FOS ← INFER-MODEL(P, 2)
      5      P′ ← ∅
      6      forall i in 1..|P| do
      7        Parent ← TOURNAMENT-SELECTION(P)
      8        Donor ← TOURNAMENT-SELECTION(P)
      9        Child ← L2-CROSSOVER(Parent, Donor, FOS)
     10        Child ← MUTATE(Child)
     11        P′ ← P′ ∪ {Child}
     12      R ← P′ ∪ P
     13      F ← FAST-NONDOMINATED-SORT(R)
     14      P ← ∅
     15      d ← 1
     16      while |P| + |F_d| ≤ M do
     17        CROWDING-DISTANCE-ASSIGNMENT(F_d)
     18        P ← P ∪ F_d
     19        d ← d + 1
     20      SORT-BY-CROWDING-DISTANCE(F_d)
     21      P ← P ∪ F_d[1 : (M - |P|)]
     22    return F_1

     The population is initialized with random subsets of the test suite (line 2) and evolves through subsequent generations to find nearby non-dominated solutions (main loop, lines 3-21). In line 4, the algorithm infers the linkage structures (FOS) from the best individuals in the population P using UPGMA; the structures are only re-inferred every IntLL generations (IntLL = 2 in the experiments, see slide 16).
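
     A compact Python sketch of the generational loop of Algorithm 1 (our paraphrase; all operators are passed in as stubs rather than taken from the authors' implementation):

        def l2_nsga(init_population, infer_model, tournament, l2_crossover,
                    mutate, environmental_selection, done, M=100):
            """Skeleton of Algorithm 1: a standard NSGA-II generational loop,
            except that a linkage model (FOS) is inferred and fed to the
            crossover operator."""
            P = init_population(M)                      # random sub-suites
            while not done():
                fos = infer_model(P, 2)                 # UPGMA-based FOS (line 4)
                offspring = []
                for _ in range(len(P)):
                    parent = tournament(P)
                    donor = tournament(P)
                    child = mutate(l2_crossover(parent, donor, fos))
                    offspring.append(child)
                # survival: non-dominated sorting + crowding distance (lines 12-21)
                P = environmental_selection(P + offspring, M)
            return P
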
  7. L2-NSGA, step 1: calculate the distances between all test cases, based on the population of candidate sub-suites. [Figure: population matrix over T01..T12 with a pairwise distance annotation; Algorithm 1 as on slide 6]
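
     A sketch of one plausible distance for this step; we assume a normalized Hamming distance between the columns of the population matrix (tests that tend to be selected together in good sub-suites end up close), which may differ from the exact metric used in the paper:

        import numpy as np

        def test_case_distances(pop):
            """pop: (n_solutions, n_tests) 0/1 matrix. Distance between two
            test cases = fraction of solutions selecting one but not the other."""
            n = pop.shape[1]
            d = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    d[i, j] = d[j, i] = np.mean(pop[:, i] != pop[:, j])
            return d
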
  8. L2-NSGA, step 2: hierarchical clustering (UPGMA): repeatedly combine the closest clusters. [Figure: dendrogram merging T1..T12 into nested groups such as {T2..T4}, {T1..T4}, {T5..T7}, {T10..T12}, {T5..T12}; Algorithm 1 as on slide 6]
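
     UPGMA is average-linkage hierarchical clustering, so SciPy's linkage with method="average" can stand in for it. A sketch that turns the dendrogram's internal merges into the family of subsets (FOS) used by the crossover; the exact construction is our assumption:

        import numpy as np
        from scipy.cluster.hierarchy import linkage
        from scipy.spatial.distance import squareform

        def build_fos(dist):
            """dist: square test-case distance matrix. Each merge in the
            UPGMA dendrogram contributes one subset of linked test cases."""
            n = dist.shape[0]
            Z = linkage(squareform(dist, checks=False), method="average")
            clusters = {i: {i} for i in range(n)}
            fos = []
            for k, (a, b, _, _) in enumerate(Z):
                merged = clusters[int(a)] | clusters[int(b)]
                clusters[n + k] = merged
                if len(merged) < n:   # skip the root: the full gene set
                    fos.append(merged)
            return fos
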
  9. L2-NSGA, step 3: linkage-based crossover. FOS = [{T08, T09}, {T10, T11, T12}, {T01, T02, T03, T04}, ...]. We randomly sample subsets from the FOS and apply the donor's values for those test cases to the parent, producing the child. [Figure: Parent, Donor, and Child bit strings over T01..T12; Algorithm 1 as on slide 6]
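
     A sketch of the linkage-driven crossover itself; how many FOS subsets get sampled per application is our guess (the slide only says subsets are sampled at random):

        import random

        def l2_crossover(parent, donor, fos, p_c=0.8):
            """Copy the donor's bits into a copy of the parent for randomly
            sampled FOS subsets, so linked test cases move as one unit."""
            child = list(parent)
            if random.random() < p_c:
                for subset in random.sample(fos, k=max(1, len(fos) // 2)):
                    for gene in subset:
                        child[gene] = donor[gene]
            return child
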
  10.-13. L2-NSGA, step 4 (slides 10-13 step through this phase): apply the NSGA-II environmental selection through non-dominated sorting and crowding distance. [Algorithm 1 as on slide 6]
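
     The environmental selection referenced on slides 10-13 is the standard NSGA-II one; a self-contained sketch of its two ingredients (objective tuples are assumed to be minimized):

        def dominates(a, b):
            """a, b: objective tuples to minimize. True if a Pareto-dominates b."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def fast_nondominated_sort(objs):
            """Return fronts as lists of indices into objs (first = best)."""
            n = len(objs)
            dominated, counts = [[] for _ in range(n)], [0] * n
            for i in range(n):
                for j in range(n):
                    if dominates(objs[i], objs[j]):
                        dominated[i].append(j)
                    elif dominates(objs[j], objs[i]):
                        counts[i] += 1
            fronts = [[i for i in range(n) if counts[i] == 0]]
            while fronts[-1]:
                nxt = []
                for i in fronts[-1]:
                    for j in dominated[i]:
                        counts[j] -= 1
                        if counts[j] == 0:
                            nxt.append(j)
                fronts.append(nxt)
            return fronts[:-1]

        def crowding_distance(front, objs):
            """Crowding distance per index; boundary points get infinity."""
            dist = {i: 0.0 for i in front}
            for m in range(len(objs[0])):
                order = sorted(front, key=lambda i: objs[i][m])
                lo, hi = objs[order[0]][m], objs[order[-1]][m]
                dist[order[0]] = dist[order[-1]] = float("inf")
                if hi > lo:
                    for k in range(1, len(order) - 1):
                        dist[order[k]] += (objs[order[k + 1]][m]
                                           - objs[order[k - 1]][m]) / (hi - lo)
            return dist

        # Survival: fill fronts in order; truncate the overflowing front by
        # descending crowding distance, keeping the M best individuals.
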
  14. Evaluation
  15. Benchmark: Software-Artifact Infrastructure Repository (SIR), https://sir.csc.ncsu.edu/
      Extensively used in the literature; these are the largest programs in SIR and contain seeded faults.

      Program | Versions   | LOC             | #Tests
      Bash    | v1, v2, v3 | 44,991 - 46,294 | 1,061
      Flex    | v1, v2, v3 | 9,484 - 10,243  | 567
      Grep    | v1, v2, v3 | 9,400 - 10,066  | 806
      Sed     | v1, v2, v3 | 5,488 - 7,082   | 360
  16. Algorithms: NSGA-II vs. L2-NSGA. Termination criterion: 200 generations. Number of repetitions: 20 runs.

      Algorithm | Pop. size | Mutation operator | Mutation parameter | Crossover operator | Crossover parameters
      NSGA-II   | 100       | Bit-flip          | Pm = 1/n           | Scattered          | Pc = 0.8
      L2-NSGA   | 100       | Bit-flip          | Pm = 1/n           | L2                 | Pc = 0.8, IntLL = 2
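
     The bit-flip mutation from the table, as a short sketch; with Pm = 1/n, one test is toggled in or out of the sub-suite per child on average:

        import random

        def bit_flip(solution, p_m=None):
            """Flip each bit independently with probability p_m (default 1/n)."""
            p = p_m if p_m is not None else 1.0 / len(solution)
            return [bit ^ 1 if random.random() < p else bit for bit in solution]
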
  17. RQ1: To what extent does L2-NSGA produce better Pareto-efficient solutions compared to NSGA-II?
  18. RQ1 performance metrics: Inverted Generational Distance (IGD), which measures proximity to the reference front and diversity, and Hypervolume (HV), which measures the dominated area. [Figure: HV and IGD illustrated on a cost vs. coverage plot]
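
     Hedged sketches of the two quality indicators, assuming both objectives are expressed as minimization (e.g., cost and uncovered fraction) so that lower-left is better; fronts are arrays of objective vectors:

        import numpy as np

        def igd(reference, front):
            """Mean distance from each reference-front point to its nearest
            point on the obtained front (lower is better)."""
            d = np.linalg.norm(reference[:, None, :] - front[None, :, :], axis=2)
            return float(d.min(axis=1).mean())

        def hypervolume_2d(front, ref_point):
            """Area dominated by a bi-objective minimization front, relative
            to a reference point that is worse in both objectives."""
            pts = front[np.argsort(front[:, 0])]   # non-dominated: y decreases
            hv, prev_y = 0.0, ref_point[1]
            for x, y in pts:
                hv += (ref_point[0] - x) * (prev_y - y)
                prev_y = y
            return hv
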
  19.-21. RQ1 results (slides 19-21 highlight the same table). Statistical results: 22 out of 24 configurations with a p-value <= 0.01; 1 out of 24 configurations with 0.01 < p-value <= 0.05; Flex v1: no significant difference. Effect size: 21 Large, 1 Medium, and 2 Small.

                         |       IGD        |        HV
      System   Version   | NSGA-II  L2-NSGA | NSGA-II  L2-NSGA
      Bash     v1        | 0.1987   0.1046  | 0.4165   0.6418
               v2        | 0.2059   0.1136  | 0.6223   0.7710
               v3        | 0.2839   0.1221  | 0.3638   0.6110
      Flex     v1        | 0.0300   0.0265  | 0.9924   0.9937
               v2        | 0.0324   0.0230  | 0.9810   0.9853
               v3        | 0.0519   0.0350  | 0.9808   0.9857
      Grep     v1        | 0.1872   0.0995  | 0.4623   0.6327
               v2        | 0.1702   0.1301  | 0.5246   0.5991
               v3        | 0.1920   0.1428  | 0.4310   0.5540
      Sed      v1        | 0.1123   0.0544  | 0.8863   0.9580
               v2        | 0.0546   0.0158  | 0.9508   0.9900
               v3        | 0.0752   0.0253  | 0.8919   0.9761
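
     The slides report p-values and Large/Medium/Small effect sizes without naming the procedures; a common pairing in this literature, and our assumption here, is the Wilcoxon/Mann-Whitney rank test with the Vargha-Delaney A12 effect size, computed per configuration over the 20 runs:

        from scipy.stats import mannwhitneyu

        def a12(x, y):
            """Vargha-Delaney A12: probability a value from x beats one from y."""
            gt = sum(xi > yi for xi in x for yi in y)
            eq = sum(xi == yi for xi in x for yi in y)
            return (gt + 0.5 * eq) / (len(x) * len(y))

        def compare(l2_runs, nsga_runs):
            """Two-sided rank test p-value plus effect size for one configuration."""
            p = mannwhitneyu(l2_runs, nsga_runs, alternative="two-sided").pvalue
            return p, a12(l2_runs, nsga_runs)
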
  22. RQ1: Pareto fronts for Sed v1 after 100 and 200 generations. [Figure: two cost vs. coverage front plots]
  23. Same Sed v1 plots, with the takeaway: L2-NSGA achieves better results independently of the size of the project and the test suites.
  24. RQ2: What is the cost-effectiveness of the solutions produced by L2-NSGA vs. NSGA-II?
  25. RQ2 results. Statistical results: 10 out of 12 configurations with a p-value <= 0.01; Flex v1 and v2: no significant difference. Effect size: 10 Large and 2 Negligible.

                         |       ICE
      System   Version   | NSGA-II  L2-NSGA
      Bash     v1        | 0.6857   0.8566
               v2        | 0.5711   0.7031
               v3        | 0.6760   0.8559
      Flex     v1        | 0.6718   0.6721
               v2        | 0.5243   0.5244
               v3        | 0.6809   0.6827
      Grep     v1        | 0.3725   0.43031
               v2        | 0.3474   0.4260
               v3        | 0.1370   0.2052
      Sed      v1        | 0.7552   0.7760
               v2        | 0.9275   0.9414
               v3        | 0.9476   0.9894
  26. RQ2 efficiency: [Figure: bar chart of running time in seconds (0-40) per system and version, L2-NSGA vs. NSGA-II]
  27. Concluding
  28. Code available: https://github.com/mitchellolsthoorn/SSBSE-Research-2021-regression-linkage-replication
  29. Future work: experiment with alternative clustering algorithms; apply this technique to other regression testing methods (e.g., test case prioritization, TCP); evaluate on a bigger dataset; include a dataset with languages other than C.
  30. Multi-Objective Test Case Selection Through Linkage Learning-Driven Crossover. Mitchell Olsthoorn, Annibale Panichella
