The document discusses approaches to developing correct programs: maintainable, efficient code validated through assertions, random testing, symbolic execution, and types. Specific tools mentioned for random testing include QuickCheck, FsCheck, and ScalaCheck. It also covers using constraint solvers such as Z3 and dynamic symbolic execution tools such as Pex to generate more comprehensive test cases, and notes limitations around nondeterministic code, concurrency, and constraint solver capabilities.
7. WELL, NOT THAT FAST
Dear <…>
…
We see XYZException!
(n times, where n = enough for you to feel miserable)
8. OK, LET’S TEST IT
1. Choose your favourite library
(chances are it’s .*Unit)
2. Write some tests
and…
3. The XYZException is gone
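The .*Unit workflow above can be sketched with any xUnit-style framework; here is a minimal example using Python's built-in unittest as a stand-in (the function `parse_age` and its inputs are hypothetical, invented for illustration):

```python
import unittest

def parse_age(s):
    """Parse a non-negative age from a string; raise ValueError otherwise."""
    n = int(s)  # raises ValueError on non-numeric input
    if n < 0:
        raise ValueError("age must be non-negative")
    return n

class ParseAgeTests(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_age("42"), 42)

    def test_negative_input_raises(self):
        # the "XYZException" case: check the error path explicitly
        with self.assertRaises(ValueError):
            parse_age("-1")

# run with: python -m unittest <this_file>
```

The point of the slide, though, is that these cases are still hand-picked.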
9. THE PROBLEM, PART 1
But…
- tests only check what you think the program should do, not what it actually should do
- ...and only on what you think are all the possible inputs
10. RANDOMIZE IT
Throw thousands of inputs at your program with
- QuickCheck (Haskell)
- FsCheck (F#)
- ScalaCheck (Scala)
- or something else
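The core loop of these QuickCheck-family tools can be sketched in a few lines; a minimal hand-rolled version in Python (the property, generator, and `my_reverse` function are hypothetical examples, and real tools add input shrinking on failure):

```python
import random

def my_reverse(xs):
    # implementation under test
    return xs[::-1]

def check_property(prop, gen, trials=1000):
    """QuickCheck-style loop: generate random inputs, report a counterexample."""
    for _ in range(trials):
        xs = gen()
        if not prop(xs):
            return xs  # shrinking omitted; real tools minimize this input
    return None  # no counterexample found in `trials` attempts

def gen_list():
    # random lists of random ints, including the empty list
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]

# property: reversing twice yields the original list
counterexample = check_property(lambda xs: my_reverse(my_reverse(xs)) == xs, gen_list)
assert counterexample is None
```

Note that passing 1000 random inputs still only samples the input space; it proves nothing about the inputs that were never generated.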
15. I WILL BUILD MY OWN
TEST GENERATOR
let f x y =
    if x < 10 then x
    else if x = 42 then failwith "42"
    else 1 / y

Branch conditions per input:
x < 10 or x >= 10
x = 42 or x <> 42
y = 0 or y <> 0
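A hand-built generator for `f` would pick one representative input per feasible branch combination. A sketch, assuming a Python port of the F# function above (integer division stands in for F#'s `/` on ints):

```python
def f(x, y):
    # Python port of the F# function from the slide
    if x < 10:
        return x
    elif x == 42:
        raise Exception("42")
    else:
        return 1 // y  # integer division, like F#'s 1 / y on ints

# one representative input per reachable path:
cases = [(5, 1),    # x < 10            -> returns x
         (42, 1),   # x = 42            -> raises "42"
         (11, 0),   # x >= 10, y = 0    -> division by zero
         (11, 7)]   # x >= 10, y <> 0   -> returns 1 // y

for x, y in cases:
    try:
        print((x, y), "->", f(x, y))
    except Exception as e:
        print((x, y), "-> raised", e)
```

Even on this toy function, picking the representatives means solving the branch conditions by hand, which is exactly the problem the next slide names.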
17. THE PROBLEM, PART 3
But it’s too complicated to do by hand:
- too many cases,
- lots of overlapping paths,
- difficult to solve when the number
of variables grows,
- not everything is a quotation…
18. IDEA: USE A
CONSTRAINT SOLVER
- Check out Z3 homepage
- Try Z3 in your browser
- LINQ to Z3 (ch9 video and a post by
Bart de Smet)
Especially for F# users:
- Z3Fs on github
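The branch conditions from the earlier `f` example can be handed to Z3 directly. A minimal SMT-LIB sketch (pasteable into "Try Z3 in your browser") asking for an input that reaches the `1 / y` branch with `y = 0`; Z3 answers `sat` and produces some model for `x` and `y`:

```
(declare-const x Int)
(declare-const y Int)
; path: not (x < 10), not (x = 42), y = 0  -> triggers 1 / y
(assert (>= x 10))
(assert (not (= x 42)))
(assert (= y 0))
(check-sat)
(get-model)
```

Replacing `(= y 0)` with `(not (= y 0))` enumerates the other branch the same way.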
19. THERE’S A TOOL FOR
THAT – MEET PEX
- Dynamic symbolic execution
- Analyses .NET instructions
- Uses a constraint solver to find the inputs
20. PEX REFERENCES
- Project homepage
- Pex for fun in your browser
- Code Digger, add-in for VS
- Code Hunt website
- Documentation, videos
and more
22. TEST GENERATION
Static:
- Conditional statements
- Check the formulas’ satisfiability

Dynamic:
- Collect information during program execution
- Unknown environments
- Enhanced value generation
- Better performance
23. DYNAMIC EXECUTION
EXAMPLE
int obscure(int x, int y) {
    if (x == hash(y)) return -1; // error
    return 0;
}
“Compositional Dynamic Test Generation”,
Patrice Godefroid (paper)
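The point of the `obscure` example is that a static solver cannot invert `hash`, but dynamic execution sidesteps the problem: run once with any concrete `y`, observe the concrete value of `hash(y)`, and feed it back as `x`. A Python sketch of that feedback loop, where `hash_` is a hypothetical stand-in for an opaque function:

```python
def hash_(y):
    # stand-in for an opaque/native hash the solver cannot reason about
    return (y * 2654435761) % 2**32

def obscure(x, y):
    if x == hash_(y):
        return -1  # error branch
    return 0

# step 1: concrete run with an arbitrary input; record the observed hash value
y0 = 123
observed = hash_(y0)   # dynamic execution gives us this concrete value

# step 2: the constraint x == hash(y0) is now trivial: just reuse the value
x0 = observed

assert obscure(x0, y0) == -1  # error branch reached without inverting hash_
```

This is the essence of dynamic (concolic) test generation: concrete execution supplies values for the parts of the path condition the solver cannot model.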
24. THE PROBLEM, PART 4
Limitations:
- Nondeterministic cases (e.g. native
code)
- Concurrency
- Constraint solver limitations
25. MORE AND LESS
TYPES WITH F*
F* - an ML-like verification-oriented
language
- F* project homepage
- GitHub repo
- Try F* in your browser
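The kind of type-level requirement F* supports can be sketched with a refinement type. A hypothetical safe-division signature, written in the style of the F* tutorial (a sketch, not checked against a specific F* release): the refinement `y:int{y <> 0}` makes division by zero a type error at every call site, and `Tot` asserts the function is total.

```
module SafeDiv

val safe_div : x:int -> y:int{y <> 0} -> Tot int
let safe_div x y = x / y
```

The requirement lives in the type, so the checker enforces it everywhere, with no test inputs needed.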
27. THE PROBLEM, PART 5
- currently under development
- issues on Mono
- in more complex cases, the errors become quite cryptic
- you still need to come up with a way to define the requirements at the type level