Automating good coding practices
  • The price of clean code is eternal vigilance. Everyone wants to work with clean code, but no one wants to be the enforcer. In this session we'll look at how kaChing integrates style checkers and static analysis tools into the build process to keep errors out without getting in the way of developers. Many of the tools discussed are specific to Java or Scala, but the techniques are generally applicable. Room 8338, 1:15pm Sunday.
  • If you can't produce working code, you aren't going to be working for very long as an engineer. But whether you produce good code or just working code depends almost as much on the environment as on your own abilities. The business guy says “hey, I need a chart that shows X.” It's easy to see that he has a chart and that it looks like it's showing X. When I say “solves tomorrow's problem” I don't mean over-engineering. I believe in YAGNI. I mean the code is extensible and understandable.
  • The programmer isn't sufficiently motivated to go beyond working code. Something external needs to push him that way. Nobody wants to be the asshole. The computer doesn't mind being the asshole.
  • No distinction between unit, functional and integration tests: these are all tests that your code does what you intended it to do. Style and static analysis test that your code does it in the right way: is the code likely to have unforeseen bugs, is it going to be hard for the next guy to understand? Automated monitoring keeps testing your code after it goes out to production. If it throws an exception, that needs to get back to the engineer, and this can happen automatically.
  • kaChing lets you invest with professional money managers who normally only take on wealthy clients. If you know investing, we simplify separately managed accounts and provide a transparent marketplace; if not, it's like a personal mutual fund. That's a different tolerance for errors than petswearingfunnyhats.com. Continuous deployment means the automation is all that stands between your editor window and the customer. Compiler technologies are our secret sauce for testing the untestable.
  • I'm here today to talk about automation. Automation is only part of why we are successful. Some key items that I'm not addressing are the quality of our engineers, our culture of test-worship, and real buy-in from the engineers that everything that can be automated should be automated. We also use fix-it days, when we go back and try to fix a problem that has crept into our code. For example, we may have a fix-it day soon to eliminate all our outstanding PMD warnings.
  • I'm going to assume you know what JUnit is. It might surprise you that we don't just test Java with JUnit, but also Scala; no one has gotten around to setting up something like specs. We run it in Eclipse. jMock is such a powerful tool that you owe it to yourself to learn how to use it, unless you already have a favorite mock system (a sketch follows below). If a unit test fails, you cannot deploy. You can't even check in unless it's #rollback or #buildfix.
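    A minimal jMock 2 sketch of the kind of unit test this describes. The PriceSource and Holding types are invented for illustration; only the Mockery/Expectations usage is the point.

      import static org.junit.Assert.assertEquals;

      import org.jmock.Expectations;
      import org.jmock.Mockery;
      import org.junit.Test;

      public class HoldingValueTest {
        // Hypothetical collaborator: quotes the current price of a symbol.
        interface PriceSource {
          double priceOf(String symbol);
        }

        // Hypothetical class under test: a single holding valued via a PriceSource.
        static class Holding {
          private final String symbol;
          private final int shares;
          Holding(String symbol, int shares) { this.symbol = symbol; this.shares = shares; }
          double value(PriceSource prices) { return shares * prices.priceOf(symbol); }
        }

        private final Mockery context = new Mockery();

        @Test
        public void valueMultipliesSharesByTheQuotedPrice() {
          final PriceSource prices = context.mock(PriceSource.class);
          context.checking(new Expectations() {{
            oneOf(prices).priceOf("AAPL"); will(returnValue(250.0));
          }});
          assertEquals(500.0, new Holding("AAPL", 2).value(prices), 0.0);
          context.assertIsSatisfied();
        }
      }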
  • Java's BigDecimal says 1.0 and 1.00 are different; we usually consider them the same. The same sort of thing comes up for XML and JSON (sketch below).
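    A sketch of the BigDecimal pitfall and of the kind of domain-specific assert it suggests; assertBigDecimalEquals is a hypothetical helper name, not an actual kaChing API.

      import static org.junit.Assert.assertEquals;
      import static org.junit.Assert.assertFalse;
      import static org.junit.Assert.assertTrue;

      import java.math.BigDecimal;
      import org.junit.Test;

      public class BigDecimalEqualityTest {
        @Test
        public void equalsIsScaleSensitiveButCompareToIsNot() {
          BigDecimal a = new BigDecimal("1.0");
          BigDecimal b = new BigDecimal("1.00");
          assertFalse(a.equals(b));        // different scale, so not equals()
          assertTrue(a.compareTo(b) == 0); // numerically the same
        }

        // Hypothetical domain-specific helper: compare numerically, ignoring scale.
        static void assertBigDecimalEquals(BigDecimal expected, BigDecimal actual) {
          assertEquals("expected " + expected + " but was " + actual,
              0, expected.compareTo(actual));
        }
      }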
  • The first test ensures that Guice can find bindings for everything that gets injected (sketch below). The second test ensures that the queries follow the rule about not performing logic in constructors, which Pascal's instantiator framework needs.
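    A minimal sketch of an injection test, with invented stand-in types; the real test would build the injector from the application's production modules and ask it for the root classes that get injected.

      import com.google.inject.AbstractModule;
      import com.google.inject.Guice;
      import com.google.inject.Inject;
      import com.google.inject.Injector;
      import org.junit.Test;

      public class InjectionTest {
        interface Clock { long now(); }

        static class SystemClock implements Clock {
          public long now() { return System.currentTimeMillis(); }
        }

        static class ReportingService {
          final Clock clock;
          @Inject ReportingService(Clock clock) { this.clock = clock; }
        }

        static class AppModule extends AbstractModule {
          @Override protected void configure() {
            bind(Clock.class).to(SystemClock.class);
          }
        }

        @Test
        public void guiceCanSatisfyEveryInjectedDependency() {
          Injector injector = Guice.createInjector(new AppModule());
          // getInstance fails fast with a ConfigurationException if any
          // binding in the object graph is missing.
          injector.getInstance(ReportingService.class);
        }
      }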
  • We'd really like something more expressive than package-private or public. The dependency test and the visibility test, using the @VisibleForTesting annotation, let us express the difference between “this is public, anyone can use it” and “this is public only because I need to access it from another package” (sketch below).
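    An illustration of the annotation's intent, using Guava's @VisibleForTesting; the class and method are invented, and the enforcement lives in the visibility test rather than in this snippet.

      import com.google.common.annotations.VisibleForTesting;

      public class PortfolioRebalancer {
        public void rebalance() {
          // ... calls computeDrift internally ...
        }

        // Widened from private so a test in another package can exercise it
        // directly; the visibility test treats any other outside caller as an error.
        @VisibleForTesting
        static double computeDrift(double target, double actual) {
          return Math.abs(target - actual) / target;
        }
      }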
  • This is minor, but I just wanted to include it as a point of expressing conventions as code.
  • This isn't as ready to go out of the box as PMD or FindBugs, but you can use it without much effort. DeclarativeTestRunner lets you set up a bunch of rules that are matched in turn against a set of instances, and any failures are reported. The iteration goes the wrong way if you try to express this as individual tests (see the sketch below).
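    This is not the Kawala API, just a sketch of the iteration structure a declarative runner gives you: every rule is checked against every instance inside one test, and all violations are reported together. The jar-naming rules and file names are invented.

      import static org.junit.Assert.fail;

      import java.util.ArrayList;
      import java.util.Arrays;
      import java.util.List;
      import org.junit.Test;

      public class DeclarativeRulesSketchTest {
        // A rule returns an error message for an offending instance, or null if it passes.
        interface Rule<T> {
          String check(T instance);
        }

        private final List<String> libJars =
            Arrays.asList("guava-r07.jar", "jmock-2.5.1.jar");

        private final List<Rule<String>> rules = Arrays.asList(
            jar -> jar.endsWith(".jar") ? null : jar + " is not a jar",
            jar -> jar.matches(".*-[\\d.]+(-\\w+)?\\.jar|.*-r\\d+\\.jar")
                ? null : jar + " has no version in its name");

        @Test
        public void everyInstanceSatisfiesEveryRule() {
          List<String> failures = new ArrayList<>();
          for (Rule<String> rule : rules) {
            for (String jar : libJars) {
              String error = rule.check(jar);
              if (error != null) {
                failures.add(error);
              }
            }
          }
          // Report every violation at once instead of stopping at the first one.
          if (!failures.isEmpty()) {
            fail("Rule violations:\n" + String.join("\n", failures));
          }
        }
      }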
  • Show the precommit tests. They're more or less a collection of all the tests that have a tendency to fail. People run the tests for the class they are working on, but don't want to spend the three minutes or so it takes to run the full test suite (sketch below).
  • We're considering making this part of the main build. There's some question about whether that means we'd need to run the tool before check-in, since we can't really know in advance whether it will find a problem.
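    One way to bundle the tests that tend to fail into a single precommit run is a JUnit 4 suite; the nested classes here are trivial stand-ins for the real entries (DependencyTest, ForbiddenCallsTest, VisibilityTest, and so on).

      import static org.junit.Assert.assertTrue;

      import org.junit.Test;
      import org.junit.runner.RunWith;
      import org.junit.runners.Suite;
      import org.junit.runners.Suite.SuiteClasses;

      @RunWith(Suite.class)
      @SuiteClasses({PrecommitTests.StyleChecks.class, PrecommitTests.WiringChecks.class})
      public class PrecommitTests {
        public static class StyleChecks {
          @Test public void passes() { assertTrue(true); }
        }
        public static class WiringChecks {
          @Test public void passes() { assertTrue(true); }
        }
      }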
  • We
  • This isn't some crazy bureaucratic stuff. It comes from someone who really gets test-driven development. We reject it, though.
  • Engineers don't need to be watched like prisoners. They just need a jolt: “hey, are you sure you mean to do that?”
  • We discussed this for QueriesAreTestedTest. If I annotate the class, then when I go in and modify it I know it's not tested, so be careful. But if you annotate the class, I can't easily go to one centralized place to find all the exceptions, where it's easy to see which “rules” still need to be worked on. Java leans towards in-place, which I guess is a good argument against it (sketch of both styles below).
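    A sketch of the two styles this note contrasts; the @UntestedQuery annotation, the class names and the exception list are all invented for illustration.

      // Style 1: annotate in place. Whoever opens the file next sees the warning.
      @interface UntestedQuery {
        String reason();
      }

      @UntestedQuery(reason = "thin wrapper around a stored procedure")
      class LegacyPositionsQuery {
      }

      // Style 2: one central list inside the global test, which makes it easy to
      // see how much work a rule still needs.
      class QueriesAreTestedExceptions {
        static final java.util.Set<String> UNTESTED_QUERIES =
            new java.util.HashSet<>(java.util.Arrays.asList(
                "LegacyPositionsQuery",
                "LegacyOrdersQuery"));
      }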
  • Engineers don't need incentives from above when they have personal incentives to make sure the code is good. It's all about aligning interests. Try to internalize all the relevant competing interests. That said, I'm not on the ops rotation yet, but I believe someone just obligated me to be up at 6:30am when the markets open in NY.
  • Queries are the high-level building blocks. David recently added a test that they are tested (very loosely – a test must instantiate the query). There are still hundreds of exceptions, but new code is generally covered; if something doesn't need to be tested, you add an exception (sketch below). Checking in a new class without a new test and without #notestneeded results in an email. No further action is needed, but you should either add a test or reply explaining why not, e.g. “covered by existing tests”. Coverage was very useful at NexTag for adding tests after the fact. Pretending coverage was a useful metric led to gaming at another company: a test that just instantiates the class is worse than nothing, because you think the class is covered.
  • I'm not being strict about the meanings here, but as an example, one test starts up two servers which talk to each other over FIX, a brokerage protocol. It's slow and fragile, so we don't run it as part of the build. It's for initial validation and for testing new functionality; we rely on lower-level unit tests to catch regressions.
  • This might get into the politics of things. No one likes to tell people their code stinks. I mean, maybe it isn't how you would write it, but you can't point to anything really wrong with it. Because of automation, these reviews lose a lot of their value: “whoops, you can't use BigDecimal.divide like that” is already caught.
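    A sketch of a QueriesAreTested-style check under assumed details: the query names stand in for classpath discovery, the src/test/java path is a guess, and “tested” is read as loosely as the note describes, meaning some test instantiates the class. The exceptions set is the no-questions-asked escape hatch.

      import static org.junit.Assert.assertTrue;

      import java.io.IOException;
      import java.io.UncheckedIOException;
      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.util.Arrays;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;
      import java.util.stream.Stream;
      import org.junit.Test;

      public class QueriesAreTestedTest {
        // Stand-in for discovering every *Query class on the classpath.
        private static final List<String> QUERIES =
            Arrays.asList("GetPositionsQuery", "PlaceOrderQuery", "LegacyReportQuery");

        // Adding a name here is always allowed, no questions asked.
        private static final Set<String> EXCEPTIONS =
            new HashSet<>(Arrays.asList("LegacyReportQuery"));

        // Assumed test-source layout.
        private static final Path TEST_SOURCES = Paths.get("src/test/java");

        @Test
        public void everyQueryIsInstantiatedBySomeTest() throws IOException {
          try (Stream<Path> files = Files.walk(TEST_SOURCES)) {
            String allTestSource = files
                .filter(p -> p.toString().endsWith(".java"))
                .map(QueriesAreTestedTest::read)
                .reduce("", String::concat);
            for (String query : QUERIES) {
              if (EXCEPTIONS.contains(query)) {
                continue;
              }
              // "Tested" is deliberately loose: some test news the query up.
              assertTrue(query + " has no test (add one, or add an exception)",
                  allTestSource.contains("new " + query + "("));
            }
          }
        }

        private static String read(Path p) {
          try {
            return new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
          } catch (IOException e) {
            throw new UncheckedIOException(e);
          }
        }
      }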
  • I somewhat disagree with this. I think demanding comments on “what the hell is this object” is entirely reasonable. Enforcing a rule of “public has a comment or @SelfExplanatory annotation” is reasonable.
  • Engineers like finishing things. I'm done, I'll send it to QA; a few days later, QA sends it back. Mentally, those are new tasks: I already finished the feature, and now I get to accomplish even more by doing these tasks QA has brought to me.
  • There are arguments on both sides. We recently turned on “no commits while the build is broken” to put pressure on people to fix it. We're concerned that a build queue would slow down pushing things to production.
  • Hazy. I'm mentioning it
  • We do continuous deployment, so we don't really want to stage things for QA to test, but we need an environment for more experimental things, and to see whether some approach will work.
  • I've seen people act like it's heresy to check in your IDE settings. That might be the case for a big distributed open-source project, but it's no more effort to save the project-specific code formatting settings than it is to throw up a page on a wiki saying “2 spaces, no tabs”.
  • I'd love to hear your feedback on this presentation or on technologies you have found useful for testing. If you found this interesting, you should check out our blog where you'll find lots more on a similar theme. If you are interested in an alternative to mutual funds, I suggest you check out kaching.com.

Automating good coding practices Presentation Transcript

  • 1. Automating Good Coding Practices Silicon Valley Code Camp 2010 Kevin Peterson @kdpeterson kaChing.com
  • 2. Working Code vs. Good Code
    • Working code: looks good to me, solves today's problem, easiest to measure from the outside
    • Good code: looks good to you, solves tomorrow's problem, evaluating it is just as hard as writing it, falls to the tragedy of the commons
  • 8. Policies must encourage good code
  • 9. Types of Automation
    • Unit tests
    • 10. Integration tests
    • 11. Style and illegal calls checks
    • 12. Static analysis
    • 13. Automated monitoring
  • 14. Some context: kaChing
    • Financial services
    • 15. Continuous deployment
    • 16. Trunk stable
    • 17. DevOps – no QA, no Ops
    • 18. Swiss compiler geeks
  • 19. Types of non-automation
    • Culture
    • 20. Buy-in
    • 21. Fix-it days
  • 22. Unit Tests
    • JUnit
    • 23. jMock
    • 24. DBUnit
    • 25. Domain-specific helpers
    • 26. Framework-specific helpers
  • 27. Helpers
  • 28. Helpers
  • 29. Global Tests
    • Using Kawala declarative test runner
    • 30. Style
    • 31. Dangerous methods
    • 32. Minimal coverage tests
    • 33. Dependencies and Visibility
    • 34. Easy to add exceptions
    • 35. Any failure breaks the build
  • 36. Dependency Test
  • 37. Java Bad Code Snippets Test
  • 38. Forbidden Calls Test
  • 39. Lib Dir Well Formed Test
  • 40. Open Source
    • code.google.com/p/kawala
    • 41. AbstractDeclarativeTestRunner
    • 42. BadCodeSnippetsRunner
    • 43. DependencyTestRunner
    • 44. Assert, helpers, test suite builders
  • 45. Precommit Tests
    • Dependencies
    • 46. Forbidden Calls
    • 47. Visibility
    • 48. Java Code Snippets
    • 49. Scala Code Snippets
    • Json Entities
    • 50. Queries Are Tested
    • 51. Front end calls
    • 52. Injection
    • 53. Username-keyed tests
  • 54. Static Analysis
    • FindBugs – all clear
    • 55. PMD – 175 warnings right now
    • 56. Easy to add exceptions
    • 57. Runs as separate build (we're working on it)
    • 58. Adding a warning breaks the build (Hudson)
  • 59. Hudson Analysis Plugin
  • 60. FindBugs Example
  • 61. PMD Example
  • 62. PMD Fix
  • 63. How to add exceptions Step two: Give each member of the team two cards. Go over the list of rules with the team and have them vote on them. Voting is done using the cards, where:
      • No cards: I think the rule is stupid and we should filter it out in the findbugsExclude.xml
      • 64. One card: The rule is important but not critical.
      • 65. Two cards: The rule is super important and we should fix it right away.
    David from testdriven.com via Eishay
  • 66. How to add exceptions II
    • Add the exceptions you want
    • 67. No oversight, no questions asked
    • 68. Lets you be strict with rules
    • 69. Makes you consider whether adding the exception is right
  • 70. Where do exceptions go?
    • Annotate in-place with SuppressWarnings: PMD, VisibilityTest
    • All in one place: most of our tests, FindBugs
  • 75. Monitoring
    • It doesn't end with the build
    • 76. Self tests on startup
    • 77. Nagios
    • 78. ESP
    • 79. Daily report signoff – business rules
  • 80. Monitoring Example: ESP
  • 81. Dev-Ops An engineer with a pager is an engineer passionate about code quality.
  • 82. Summary
    • Unit Tests: Does my code do what I intended it to do?
    • Global Tests: Is my code likely to break?
    • Monitoring: Is my code actually working right now?
  • 83. What we don't do (much)
  • 84. Code Coverage
    • QueriesAreTestedTest
    • 85. No tests email
    • 86. No Emma or Cobertura in build process
    • 87. Ad hoc use of Emma or MoreUnit
    • 88. Are coverage numbers useful?
  • 89. Integration Tests
    • Mostly manual at this time
    • 90. If we can't run it every commit, does it have value?
  • 91. Formal Code Reviews
    • Are humans any better than automation?
    • 92. Enough better that it's worth the time cost?
    • 93. Post-commit, pre-deployment SQL review
    • 94. Informal “hey, this is hairy” reviews
    • 95. Pair on difficult components
  • 96. Writing Comments “We do not write a lot of comments. Since they are not executable, they tend to get out of date. Well-written tests are live specs and explain what, how and why. Get used to reading tests like you read English, and you'll be just fine. There are two major exceptions to the sparse comments world:
    • open-sourced code
    • 97. algorithmic details”
    – kaChing Java Coding Style
  • 98. What we don't do (and never will)
  • 99. QA
    • No QA department
    • 100. “Throw it over the wall” leads to false sense of accomplishment
    • 101. Fixing QA-found bugs seems productive
    • 102. But it's actually a sign you screwed up
  • 103. What we might do soon
  • 104. Build Queue
    • Commit, see if tests pass
    • 105. Require #rollback or #buildfix to commit
    Strong social pressure to not break the build vs. holding up other engineers
    • Considering moving to build queue
  • 106. By-request Code Reviews
    • Probably depends on build queue
    • 107. Gerrit?
    • 108. Dependent on switch to git?
  • 109. Better staging environment
    • Hard for us to test data migration issues
    • 110. Hard to test inter-service calls
    • 111. Errors don't occur on dev boxes
    • 112. Front end has it, with Selenium testing
  • 113. One more thing
  • 114. Standardize tools
    • Check in your Eclipse project
    • 115. Configure code formatting
    • 116. Share your templates
    • 117. Organize imports on save
    • 118. Keeps your history clean
  • 119. Kevin Peterson kaChing.com @kdpeterson http://eng.kaching.com