Judge: Identifying, Understanding, and Evaluating Sources of Unsoundness in Call Graphs

Michael Reif, Florian Kübler, Michael Eichberg, Dominik Helm, and Mira Mezini
Software Technology Group, TU Darmstadt
@Reifmi
Why We Shouldn’t Take Call Graphs for Granted
• Call graphs are a central data structure for numerous static analyses

• Call graphs directly impact a client analysis’ result

• The chosen algorithm predetermines an analysis’ precision and recall

• Programming languages evolve (APIs and features are added), and frameworks might not
State-of-the-Art Call-Graph Generators for Java

• Many different static analysis frameworks are available

• All can compute a different set of call graphs

• All frameworks use different approaches and make unknown trade-offs or implementation choices

• Are they actually comparable?
Judge’s Overview

(Pipeline diagram.) Test fixtures are written per category as ⟨Test Fixtures⟩.md files, each containing Test Case 1 (TC1) through Test Case N (TCN); compiling the test cases yields ⟨Test Case⟩.jar and ⟨Advanced Test Case⟩.jar files (TC1.jar, TC2.jar, …).

For each call graph per supported static analysis framework, a call graph (⟨CG⟩.json) is computed over the test cases; from the call graph and the expected call targets, a ⟨CG Algorithm Profile⟩.tsv is derived.

To compute the prevalence of features in real projects, Hermes is run on each ⟨Project⟩.jar, producing ⟨Features & Locations⟩.json, alongside the project’s own call graph (⟨CG⟩.json).

Finally, the suitability of each call-graph algorithm for a project is computed from the project’s features and the respective CG profile, yielding ⟨Potential Sources of Unsoundness⟩.tsv.
Test Suite

(Pipeline diagram as before, highlighting the test-suite stage: test fixtures compiled into test-case jars.)
• Each category has:

• a description

• multiple test cases

• Each test case has:

• a scenario description

• a unique id

• the test code

• expected calls

• Available annotations:

• CallSite

• IndirectCall
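Concretely, a fixture pairs test code with an annotation naming the expected call target. The sketch below only illustrates the idea: the annotation definition and the fixture class are made up here and are not the suite’s real ones.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class PolymorphicCallTC {

    // Simplified stand-in for the suite's CallSite annotation; the real
    // annotation's fields (line numbers, parameter types, etc.) differ.
    @Retention(RetentionPolicy.RUNTIME)
    @interface CallSite {
        String resolvedTarget();
    }

    interface Shape { String draw(); }

    static class Circle implements Shape {
        public String draw() { return "Circle.draw"; }
    }

    // The expected target is declared right next to the call site; Judge
    // then checks whether the framework's call graph contains this edge.
    @CallSite(resolvedTarget = "Circle.draw")
    public static String entry() {
        Shape s = new Circle();
        return s.draw(); // polymorphic call; must resolve to Circle.draw
    }

    public static void main(String[] args) {
        System.out.println(entry());
    }
}
```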
Test Suite

Language Features:

• Static Initializer

• Polymorphic Calls

• Java 8 Polymorphic Calls

• Lambdas/Method References

• Signature Polymorphic Methods

• Non-Java bytecode

• …

APIs:

• Reflection

• Unsafe

• Serialization

• Method Handles

• Dynamic Proxies

• Classloading

• …
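Reflection is representative of why these APIs are hard: the callee is named by a runtime string, so a call-graph generator that does not model the reflection API sees no edge to the actual target. A minimal illustration (the class and method names are made up for this sketch):

```java
import java.lang.reflect.Method;

public class ReflectiveCall {

    public static String greet() {       // the actual runtime target
        return "hello";
    }

    public static String callViaReflection() throws Exception {
        // The target is named only by strings; a call-graph generator that
        // does not interpret these strings misses the edge to greet().
        Class<?> c = Class.forName("ReflectiveCall");
        Method m = c.getMethod("greet");
        return (String) m.invoke(null);  // static method, no receiver
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callViaReflection());
    }
}
```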
Computing the Algorithms’ Profile

(Pipeline diagram as before, highlighting the profile stage: for each call graph per supported framework, the ⟨CG Algorithm Profile⟩.tsv is computed from the CG and the expected call targets.)
Finding Features in Real Code

(Pipeline diagram as before, highlighting the stage that computes the prevalence of features in real projects: Hermes is run on ⟨Project⟩.jar to produce ⟨Features & Locations⟩.json.)
[1] Reif, Michael et al. Hermes: assessment and creation of effective test corpora. SOAP ’17. ACM, 43–48.
• We used Hermes [1], a static analysis code query
infrastructure

• Each query is an analysis that checks if a specific feature
is found in a given code base

• We developed 15 Hermes queries to derive 107 Hermes
features and map the derived features to the test case ids

• All queries perform a most-conservative intra-procedural
analysis
Potential Sources of Unsoundness

(Diagram: for a project (p), the features computed using feature queries/Hermes over application and library code are mapped, via the test-case extensions mapping, against the call-graph algorithm’s profile. Layout reconstructed from the slide:)

Features (based on test cases)        Supported by CG(a)   Count
BPC2 (Polymorphic Call)               ✓                    3
TR1 (Reflection)                      ✘                    2
Lambda3 (Invokedynamic, Java ≤ 10)    ✓                    1
Lambda8 (Invokedynamic, Scala)        ✘                    0
…                                     …                    …

Methods (application and library code)   Reached by CG(a)
m1 ✓, m2 ✘, m3 ✓, m4 ✓, …, mu ✓, mx ✘, my ✓, mz ✘
• Sources of unsoundness definitely make the call graph unsound

• Conditional sources of unsoundness might introduce unsoundness
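The distinction plays out at individual call sites: the same API can be a definite or only a conditional source of unsoundness depending on how it is used. For example, reflection over a constant class name can often be resolved, while a dynamically computed name usually cannot. The sketch below is illustrative only; the class name and obfuscation trick are made up:

```java
import java.lang.reflect.Method;

public class ConditionalUnsoundness {

    public static String run(boolean obfuscated) throws Exception {
        // Constant class name: a reflection-aware call-graph generator can
        // often resolve this, so the site only *might* cause unsoundness.
        String name = "java.lang.String";
        if (obfuscated) {
            // Computed name: statically unknown, so frameworks that give up
            // here definitely make the call graph unsound for this site.
            name = new StringBuilder("gnirtS.gnal.avaj").reverse().toString();
        }
        Class<?> c = Class.forName(name);
        Method m = c.getMethod("valueOf", int.class);
        return (String) m.invoke(null, 42);  // String.valueOf(42)
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(false));
        System.out.println(run(true));
    }
}
```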
Research Questions
• RQ1: How prevalent are the language and API features?

• RQ2: How do the frameworks compare to each other?

• RQ3: Which framework is best suited for which kind of
code base?

• RQ4: How much effort is necessary to get a sound call
graph?
Prevalent Language
Features and APIs (RQ1)
• All the API and language features supported by
Java up to version 7 are used widely across all
code bases 

• Support for Java 8 is a must, unless analyzing
Android or Clojure code

• Supporting classical Reflection and Serialization
is strongly recommended, independent of the
source code’s age

• Support for many features is only required in
specific scenarios
The Call Graphs’ Feature Support (RQ2)

(Results figure; observations highlighted on the slide:)

• Standard Java features are well-supported

• Java 8 features are partially supported

• The JVM is not fully covered

• The Reflection API is partially supported

• Some APIs and language features are unsupported
Performance Results (RQ2)

(Results figure; observations highlighted on the slide:)

• Average runtimes differ largely

• The number of reachable methods varies by more than 20x, even between implementations of the same algorithm
RTA-Example

void program(boolean condition) {
    Collection c1 = new LinkedList();
    Collection c2;
    if (condition) {
        c2 = new ArrayList();
    } else {
        c2 = new Vector();
    }
    c2.add(null);
    Collection c3 = new HashSet();
}

• RTA [2] depends on the program’s instantiated types

• Soot, WALA, and OPAL behave completely differently; the slide’s builds show three different instantiated-type sets being used for the call c2.add(null):

{ LinkedList, ArrayList, Vector, HashSet }
{ LinkedList, ArrayList, Vector }
{ ArrayList, Vector }

[2] D. Bacon and P. Sweeney. Fast static analysis of C++ virtual function calls. OOPSLA ’96. ACM, 324-341.
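The divergence comes down to which instantiated-type set each implementation feeds into dispatch resolution. The toy resolution step below is not any framework’s actual code; the hand-written subtype map and method names are assumptions for illustration:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class RtaSketch {

    // Subtype relation for the example's collection classes, hand-written
    // here; a real framework derives this from the class hierarchy.
    static final Map<String, Set<String>> SUBTYPES = Map.of(
        "Collection", Set.of("LinkedList", "ArrayList", "Vector", "HashSet"));

    // RTA resolves a virtual call by intersecting the declared receiver
    // type's possible subtypes with the set of instantiated types.
    static Set<String> resolve(String declaredType, Set<String> instantiated) {
        Set<String> targets = new TreeSet<>(SUBTYPES.get(declaredType));
        targets.retainAll(instantiated);
        return targets;
    }

    public static void main(String[] args) {
        // Whole-program instantiated types (one implementation's choice)...
        System.out.println(resolve("Collection",
            Set.of("LinkedList", "ArrayList", "Vector", "HashSet")));
        // ...versus only the types instantiated before the call site:
        // fewer instantiated types means fewer resolved call targets.
        System.out.println(resolve("Collection",
            Set.of("LinkedList", "ArrayList", "Vector")));
    }
}
```

The smaller the instantiated-type set an implementation uses at the call site, the fewer edges it adds, which explains why nominally identical RTA implementations produce very different call graphs.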
Project-specific Evaluation (RQ3)

(Results figure; observations highlighted on the slide:)

• Soot supports CSR, but it’s expensive

• OPAL supports most features but has the smallest call graph

• OPAL covers only 47 methods from Xalan (~0.3%)

• Very few call sites have a huge impact
Is it worth doing the work manually? (RQ4)

• GOAL: Get a reasonably sound call graph

• JVM profiling and TamiFlex [3] as ground truth

(Workflow: Apply Judge → Inspect Results → Add Entry Points, repeated.)

• Analyzed 10 reflective call sites

• Added 50 entry points

• Manual analysis took roughly 90 minutes

• The call graph then covered 91% of all methods contained in the profile, and 121 of the 198 methods reported by TamiFlex

[3] Bodden, Eric, et al. Taming Reflection: Static Analysis in the Presence of Reflection and Custom Class Loaders (2010).