Automating good coding practices

  • The price of clean code is eternal vigilance. Everyone wants to work with clean code, but no one wants to be the enforcer. In this session we'll look at how kaChing integrates style checkers and static analysis tools into the build process to keep errors out without getting in the way of developers. Many of the tools discussed are specific to Java or Scala, but the techniques are generally applicable. Room 8338, 1:15pm Sunday.
  • If you can't produce working code, you aren't going to be working for very long as an engineer. But whether you produce good code or just working code depends almost as much on the environment as on your own abilities. The business guy says "hey, I need a chart that shows X". It's easy to see whether he has a chart, and that it looks like it's showing X. When I say "solves tomorrow's problem" I don't mean over-engineering; I believe in YAGNI. I mean the code is extensible and understandable.
  • The programmer isn't sufficiently motivated to go beyond working code. Something external needs to push him that way. Nobody wants to be the asshole. The computer doesn't mind being the asshole.
  • We make no distinction between unit, functional, and integration tests: these all test that your code does what you intended it to do. Style checks and static analysis test that your code does it in the right way: is the code likely to have unforeseen bugs, and will it be hard for the next guy to understand? Automated monitoring keeps testing your code after it goes out to production. If it throws an exception, that needs to get back to the engineer, and this can happen automatically.
  • kaChing lets you invest with professional money managers who normally only take on wealthy clients. If you know investing, we simplify separately managed accounts and provide a transparent marketplace; if not, it works like a personal mutual fund. This is a different tolerance for errors than petswearingfunnyhats.com. Continuous deployment means the automation is all that stands between your editor window and the customer. Compiler technologies are our secret sauce for testing the untestable.
  • I'm here today to talk about automation, but automation is only part of why we are successful. Some key items I'm not addressing are the quality of our engineers, our culture of test-worship, and real buy-in from the engineers that everything that can be automated should be automated. We also use fix-it days, when we go back and fix a problem that has crept into our code. For example, we may have a fix-it day soon to eliminate all our outstanding PMD warnings.
  • I'm going to assume you know what JUnit is. It might surprise you that we don't just test Java with JUnit, but also Scala; no one has gotten around to setting up something like specs. We run it in Eclipse. jMock is such a powerful tool that you owe it to yourself to learn how to use it, unless you already have a favorite mock framework. If a unit test fails, you cannot deploy. You can't even check in unless the commit is tagged #rollback or #buildfix.
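As a concrete illustration of the jMock 2 style (a minimal sketch; PriceSource is a hypothetical collaborator, not a kaChing class):

    import static org.junit.Assert.assertEquals;
    import java.math.BigDecimal;
    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class PriceSourceTest {
      public interface PriceSource { BigDecimal latestPrice(String symbol); }

      private final Mockery context = new Mockery();

      @Test
      public void expectationsAreDeclaredThenVerified() {
        final PriceSource prices = context.mock(PriceSource.class);
        context.checking(new Expectations() {{
          oneOf(prices).latestPrice("AAPL");           // exactly one call expected
          will(returnValue(new BigDecimal("300.00")));
        }});

        // The code under test would normally make this call; we call it directly here.
        assertEquals(0, new BigDecimal("300.00").compareTo(prices.latestPrice("AAPL")));
        context.assertIsSatisfied();  // fails the test if an expected call never happened
      }
    }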
  • Java's BigDecimal says 1.0 and 1.00 are different; we usually consider them the same. The same sort of thing applies to XML and JSON.
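The BigDecimal behavior in question, plus the kind of scale-insensitive assert a domain-specific helper might provide (assertBigDecimalEquals is a hypothetical name):

    import java.math.BigDecimal;

    public class BigDecimalScale {
      public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");
        System.out.println(a.equals(b));          // false: equals() also compares scale
        System.out.println(a.compareTo(b) == 0);  // true: compareTo() ignores scale
      }

      // Hypothetical helper: equality by numeric value rather than by scale.
      static void assertBigDecimalEquals(BigDecimal expected, BigDecimal actual) {
        if (expected.compareTo(actual) != 0)
          throw new AssertionError("expected " + expected + " but was " + actual);
      }
    }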
  • The first test ensures that Guice can find bindings for everything that gets injected. The second test ensures that the queries follow rules about not performing logic in constructors, which is needed for Pascal's instantiator framework.
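A minimal sketch of the Guice half (the module is a stand-in): building the injector in Stage.PRODUCTION instantiates eagerly, so a missing binding fails the test rather than a production request.

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Stage;
    import org.junit.Test;

    public class InjectionTest {
      static class ServerModule extends AbstractModule {   // stand-in module
        @Override protected void configure() { /* real bindings go here */ }
      }

      @Test
      public void allBindingsResolve() {
        // Throws CreationException if anything injected has no binding.
        Guice.createInjector(Stage.PRODUCTION, new ServerModule());
      }
    }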
  • We'd really like something more expressive than package-private or public. The dependency test and the visibility test, using the @VisibleForTesting annotation, let us express the difference between "this is public, anyone can use it" and "this is public because I need to access it from another package".
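A sketch of the annotation in use (PositionCalculator is hypothetical; Guava ships a @VisibleForTesting annotation, or you can declare your own marker):

    import com.google.common.annotations.VisibleForTesting;
    import java.math.BigDecimal;

    public class PositionCalculator {
      // Genuinely public: part of the intended API.
      public BigDecimal dailyPnl() {
        return netExposure().subtract(BigDecimal.ONE);  // placeholder arithmetic
      }

      @VisibleForTesting  // public only so tests in another package can call it
      public BigDecimal netExposure() {
        return BigDecimal.TEN;  // placeholder value
      }
    }

A visibility test can then flag any public member that is neither used from another package's production code nor annotated.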
  • This is minor, but I just wanted to include it as a point of expressing conventions as code.
  • This isn't as ready to go out of the box as PMD or FindBugs, but you can use it without much effort. The declarative test runner lets you set up a bunch of rules that will be matched in turn against a set of instances, and any failures will be reported. Iteration goes the wrong way if you try to express this as individual tests.
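The shape of the idea, in a minimal hand-rolled form (not Kawala's actual API; the toy rule and subject list are illustrative):

    import static org.junit.Assert.assertTrue;
    import java.lang.reflect.Modifier;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;

    public class GlobalRulesTest {
      interface Rule { String check(Class<?> subject); }  // null means "satisfied"

      private final List<Rule> rules = Arrays.<Rule>asList(new Rule() {
        public String check(Class<?> subject) {           // toy rule for illustration
          return Modifier.isPublic(subject.getModifiers())
              ? null : subject.getName() + " must be public";
        }
      });
      private final List<Class<?>> subjects =
          Arrays.<Class<?>>asList(String.class, Integer.class);

      @Test
      public void allSubjectsSatisfyAllRules() {
        List<String> failures = new ArrayList<String>();
        for (Rule rule : rules)
          for (Class<?> subject : subjects) {
            String failure = rule.check(subject);
            if (failure != null) failures.add(failure);
          }
        // One test reports every violation at once, instead of one method per pair.
        assertTrue(failures.toString(), failures.isEmpty());
      }
    }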
  • Show precommit tests: more or less a collection of all the tests that have a tendency to fail. People run the tests for the class they are working on, but don't want to spend the 3 minutes or so to run the full test suite.
  • We're considering making this part of the main build. There's some question whether that means we need to run the tool before checkin, since we can't really know in advance whether it will find a problem.
  • This isn't some crazy bureaucratic stuff; it comes from someone who really gets test-driven development. We reject it, though.
  • Engineers don't need to be watched like prisoners. They just need a jolt: "hey, are you sure you mean to do that?"
  • We discussed this for QueriesAreTestedTest. If I annotate the class, then when I go in there and modify it I know it's not tested, so be careful. But if you annotate the class, then I can't easily go to a centralized place to find all the exceptions and see which "rules" need to be worked on. Java leans towards in-place, which I guess is a good argument against it.
  • Engineers don't need incentives from above when they have personal incentives to make sure the code is good. It's all about aligning interests: try to internalize all the relevant competing interests. That said, I'm not on the ops rotation yet, but I believe someone just obligated me to be up at 6:30am when the markets open in New York.
  • Queries are the high-level building blocks. David recently added a test that they are tested (very loosely: a test must instantiate the query). There are still hundreds of exceptions, but new code is generally covered; if something doesn't need to be tested, you add an exception. Checking in a new class without a new test and without #notestneeded results in an email. No further action is needed, but you should either add a test or reply explaining why not, e.g. "covered by existing tests". Coverage was very useful at NexTag for adding tests after the fact, but pretending coverage was a useful metric led to gaming at another company. A test that just instantiates the class is worse than nothing, because you think the class is covered.
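A hedged sketch of what such a check can look like; findAllQueryClasses() and isInstantiatedBySomeTest() stand in for internal classpath-scanning helpers, and the grandfathered class name is made up:

    import static org.junit.Assert.assertTrue;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import org.junit.Test;

    public class QueriesAreTestedTestSketch {
      private static final Set<String> EXCEPTIONS = new HashSet<String>(Arrays.asList(
          "com.example.LegacyBalanceQuery"));  // hypothetical grandfathered query

      @Test
      public void everyQueryIsInstantiatedBySomeTest() {
        List<String> untested = new ArrayList<String>();
        for (Class<?> query : findAllQueryClasses())       // hypothetical helper
          if (!EXCEPTIONS.contains(query.getName())
              && !isInstantiatedBySomeTest(query))         // hypothetical helper
            untested.add(query.getName());
        assertTrue("queries without tests: " + untested, untested.isEmpty());
      }
    }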
  • I'm not being strict about the meanings here, but as an example, one integration test starts up two servers which talk to each other over FIX, a brokerage protocol. It's slow and fragile, so we don't run it as part of the build; it's for initial validation and for testing new functionality. We rely on lower-level unit tests to catch regressions.
  • This might get into the politics of things. No one likes to tell people their code stinks; maybe it isn't how you would write it, but you can't point to anything really wrong about it. Because of automation, formal reviews lose a lot of their value: "whoops, you can't use BigDecimal.divide like that" is already caught.
  • I somewhat disagree with this. I think demanding comments answering "what the hell is this object?" is entirely reasonable. Enforcing a rule of "every public class has a comment or a @SelfExplanatory annotation" is reasonable.
  • Engineers like finishing things. I'm done, I'll send it to QA; a few days later, QA sends it back. Mentally, these are new tasks: I already finished the feature, and now I get to accomplish more by doing these tasks QA has brought me.
  • There are arguments on both sides. We recently turned on "no commits while the build is broken" to put pressure on people to fix it. We're concerned that a build queue will slow down pushing things to production.
  • Hazy. I'm mentioning it.
  • We do continuous deployment, so we don't really want to stage things for QA to test, but we need an environment for more experimental things, and to see whether some approach will work.
  • I've seen people act like it's heresy to check in your IDE settings. That might be the case for a big distributed open source project, but it's not any more effort to save the project-specific code formatting settings than it is to throw up a wiki page saying "2 spaces, no tabs".
  • I'd love to hear your feedback on this presentation, or on technologies you have found useful for testing. If you found this interesting, check out our blog, where you'll find lots more on a similar theme. If you are interested in an alternative to mutual funds, I suggest you check out kaching.com.
  • Transcript of "Automating good coding practices"

    1. Automating Good Coding Practices
       Silicon Valley Code Camp 2010
       Kevin Peterson, @kdpeterson, kaChing.com
    2. Working Code vs. Good Code
       • Looks good to me
       • Solves today's problem
       • Easiest to measure from the outside
       • Looks good to you
       • Solves tomorrow's problem
       • Evaluating it is just as hard as writing it
       • Falls to the tragedy of the commons
    3. Policies must encourage good code
    4. Types of Automation
       • Unit tests
       • Integration tests
       • Style and illegal calls checks
       • Static analysis
       • Automated monitoring
    5. Some context: kaChing
       • Financial services
       • Continuous deployment
       • Trunk stable
       • DevOps – no QA, no Ops
       • Swiss compiler geeks
    6. Types of non-automation
       • Culture
       • Buy-in
       • Fix-it days
    7. Unit Tests
       • JUnit
       • jMock
       • DBUnit
       • Domain-specific helpers
       • Framework-specific helpers
    8. Helpers
    9. Helpers
    10. Global Tests
       • Using Kawala declarative test runner
       • Style
       • Dangerous methods
       • Minimal coverage tests
       • Dependencies and visibility
       • Easy to add exceptions
       • Any failure breaks the build
    11. Dependency Test
    12. Java Bad Code Snippets Test
    13. Forbidden Calls Test
    14. Lib Dir Well Formed Test
    15. Open Source
       • code.google.com/p/kawala
       • AbstractDeclarativeTestRunner
       • BadCodeSnippetsRunner
       • DependencyTestRunner
       • Assert, helpers, test suite builders
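Kawala's BadCodeSnippetsRunner covers the "bad code snippets" check declaratively; its exact API isn't shown in the deck, so here is a hand-rolled equivalent of the idea in plain JUnit (the source root and forbidden patterns are illustrative):

    import static org.junit.Assert.assertTrue;
    import java.io.File;
    import java.nio.file.Files;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    public class BadCodeSnippetsTest {
      // Snippets we never want checked in; both are common real-world bans.
      private static final String[] FORBIDDEN = { "System.out.println(", ".printStackTrace(" };

      @Test
      public void sourceTreeHasNoForbiddenSnippets() throws Exception {
        List<String> hits = new ArrayList<String>();
        scan(new File("src/main/java"), hits);  // hypothetical source root
        assertTrue(hits.toString(), hits.isEmpty());
      }

      private void scan(File file, List<String> hits) throws Exception {
        if (file.isDirectory()) {
          File[] children = file.listFiles();
          if (children != null) for (File child : children) scan(child, hits);
        } else if (file.getName().endsWith(".java")) {
          String source = new String(Files.readAllBytes(file.toPath()), "UTF-8");
          for (String snippet : FORBIDDEN)
            if (source.contains(snippet)) hits.add(file + " contains " + snippet);
        }
      }
    }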
    16. Precommit Tests
       • Dependencies
       • Forbidden Calls
       • Visibility
       • Java Code Snippets
       • Scala Code Snippets
       • Json Entities
       • Queries Are Tested
       • Front end calls
       • Injection
       • Username-keyed tests
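One way to bundle checks like these is a plain JUnit 4 suite; this is a sketch, and the member class names simply mirror the slide rather than kaChing's real classes:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        DependencyTest.class,
        ForbiddenCallsTest.class,
        VisibilityTest.class,
        JavaCodeSnippetsTest.class,
        QueriesAreTestedTest.class
    })
    public class PrecommitTests {}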
    17. Static Analysis
       • FindBugs – all clear
       • PMD – 175 warnings right now
       • Easy to add exceptions
       • Runs as a separate build (we're working on it)
       • Adding a warning breaks the build (Hudson)
    18. Hudson Analysis Plugin
    19. FindBugs Example
    20. PMD Example
    21. PMD Fix
    22. How to add exceptions
       Step two: give each member of the team two cards. Go over the list of rules with the team and have them vote on them. Voting is done using the cards, where:
       • No cards: I think the rule is stupid and we should filter it out in the findbugsExclude.xml
       • One card: the rule is important but not critical
       • Two cards: the rule is super important and we should fix it right away
       – David from testdriven.com, via Eishay
    23. How to add exceptions II
       • Add the exceptions you want
       • No oversight, no questions asked
       • Lets you be strict with rules
       • Makes you consider whether adding the exception is right
    24. Where do exceptions go?
       • Annotate in-place with SuppressWarnings: PMD, VisibilityTest
       • All in one place: most of our tests, FindBugs
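A sketch of the in-place option; @SuppressWarnings("PMD.RuleName") is real PMD support, while OrderBook and the chosen rule are illustrative:

    public class OrderBook {  // hypothetical class
      @SuppressWarnings("PMD.CyclomaticComplexity")  // the exception lives at the violation
      void match() {
        // ... branching matching logic that trips the PMD rule ...
      }
    }

The all-in-one-place alternative (a FindBugs exclude filter, or an exception list inside the test itself) trades that locality for a single file where every outstanding exception can be reviewed at once.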
    25. Monitoring
       • It doesn't end with the build
       • Self tests on startup
       • Nagios
       • ESP
       • Daily report signoff – business rules
    26. Monitoring Example: ESP
    27. Dev-Ops
       An engineer with a pager is an engineer passionate about code quality.
    28. Summary
       • Unit Tests: Does my code do what I intended it to do?
       • Global Tests: Is my code likely to break?
       • Monitoring: Is my code actually working right now?
    29. What we don't do (much)
    30. Code Coverage
       • QueriesAreTestedTest
       • No-tests email
       • No Emma or Cobertura in the build process
       • Ad hoc use of Emma or MoreUnit
       • Are coverage numbers useful?
    31. Integration Tests
       • Mostly manual at this time
       • If we can't run it every commit, does it have value?
    32. Formal Code Reviews
       • Are humans any better than automation?
       • Enough better that it's worth the time cost?
       • Post-commit, pre-deployment SQL review
       • Informal "hey, this is hairy" reviews
       • Pair on difficult components
    33. Writing Comments
       "We do not write a lot of comments. Since they are not executable, they tend to get out of date. Well-written tests are live specs and explain what, how and why. Get used to reading tests like you read English, and you'll be just fine. There are two major exceptions to the sparse comments world:
       • open-sourced code
       • algorithmic details"
       – kaChing Java Coding Style
    34. What we don't do (and never will)
    35. QA
       • No QA department
       • "Throw it over the wall" leads to a false sense of accomplishment
       • Fixing QA-found bugs seems productive
       • But it's actually a sign you screwed up
    36. What we might do soon
    37. Build Queue
       • Commit, see if tests pass
       • Require #rollback or #buildfix to commit
       • Strong social pressure to not break the build
       • Holding up other engineers vs. considering moving to a build queue
    38. By-request Code Reviews
       • Probably depends on the build queue
       • Gerrit?
       • Dependent on a switch to git?
    39. Better staging environment
       • Hard for us to test data migration issues
       • Hard to test inter-service calls
       • Errors don't occur on dev boxes
       • Front end has it, with Selenium testing
    40. One more thing
    41. Standardize tools
       • Check in your Eclipse project
       • Configure code formatting
       • Share your templates
       • Organize imports on save
       • Keeps your history clean
    42. Kevin Peterson, kaChing.com, @kdpeterson, http://eng.kaching.com