Mountains to Molehills: A Story of QA

Dave Haeffner had just changed jobs within his company, completely shifting his career from Systems Administration to Quality Assurance. If only he had known the challenges that faced him in the year ahead, he might have been excited; or maybe he would have thought twice.

He knew very little about his new role, but that didn’t worry him. It was the mountainous challenges facing QA that kept him up at night.

Join him as he retraces his steps, sharing lessons learned, how he helped change QA from the bottom up, and how he turned those mountainous challenges into molehills.



  1. Mountains to Molehills: A Story of QA<br />By Dave Haeffner<br />
  2. Hello, my name is Dave Haeffner and I work at The Motley Fool, an online Financial Investment Community (Fool.com).<br />I used to work in IT Operations. That’s all I knew. My undergrad was a Bachelor of Technology, and I’ve held roughly every job that has to do with that field. But a year and a half ago I transitioned into the role of Quality Assurance Analyst, Tester, Quanalyst… let’s just call it “QA”. <br />The change was interesting. In Operations I had a reactionary perspective. I would see (or be notified of) things when they were broken and have to fix them.<br />But going into QA I thought, “Oh, terrific! I get to find things that are broken and not have to fix them!” I soon came to realize that QA was less about finding things that are broken, and more about helping to build things that aren’t broken in the first place.<br />
  3. Everything I need to know I learned in Kindergarten… and at Agile 2009.<br />I learned a lot at Agile 2009. A lot of best practices and good ideas. <br />But this talk focuses on the top 3 things I learned from Agile 2009 and how they have guided me and my work over the last 12 months.<br />
  4. Chris McMahon gave a talk titled “History of a Large Test Automation Project using Selenium” in which he discussed how SocialText approached testing.<br />He had 4 killer take-away points that helped paint a picture of what it takes to have a large automated web testing suite that works well.<br />
  5. Create/maintain fixtures (DSL)<br /> Feature coverage, like a web<br /> Fast/routine reporting of failures<br /> Quick response/analysis of failures<br />Chris McMahon<br />“History of a Large Test Automation Project Using Selenium”<br />
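Chris McMahon's first point, a fixture/DSL layer, can be sketched in a few lines of Ruby. Everything below is hypothetical (the class names, the fake driver, and the `log_in_as` action are illustrations, not SocialText's or Testerfield's actual code): domain-language methods wrap raw driver calls, so tests read like plain English and only the fixture changes when the UI does.

```ruby
# Hypothetical sketch of a DSL fixture: one domain action hides
# several low-level driver steps. A real suite would pass in a
# Selenium client instead of the fake driver below.
class LoginFixture
  def initialize(driver)
    @driver = driver
  end

  # One readable domain action wrapping several raw steps
  def log_in_as(username, password)
    @driver.open "/login"
    @driver.type "username", username
    @driver.type "password", password
    @driver.click "submit"
  end

  def logged_in?
    @driver.text? "My Account"
  end
end

# A fake driver stands in for Selenium so the sketch runs anywhere
class FakeDriver
  def initialize; @typed = {}; end
  def open(path); @path = path; end
  def type(field, value); @typed[field] = value; end
  def click(locator); @clicked = locator; end
  def text?(_text); @typed["username"] == "fool"; end
end

fixture = LoginFixture.new(FakeDriver.new)
fixture.log_in_as("fool", "secret")
puts fixture.logged_in?
```

The payoff is the second bullet: because tests call `log_in_as` rather than raw locators, feature coverage can grow like a web while UI churn stays contained in one place.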
  6. I had a chance to participate in a lightning talk with some of the major minds in the Agile community, titled “Slow and Brittle: Replacing End-to-End Testing”. <br />During this talk an idea started to take root: why is testing so custom and seemingly hard? Why doesn’t some kind of universal web testing harness exist? Why can’t testing be as easy as drinking a cup of coffee?<br />An idea can be a dangerous thing.<br />
  7. Arlo Belshee, James Shore<br />“Slow and Brittle: Replacing End-to-End Testing”<br />
  8. On the last day of the conference I had the opportunity to attend an open jam put on by Adam Goucher (with special guest Jason Huggins).<br />Jason Huggins, creator of Selenium, co-founder of Sauce Labs<br />Adam Goucher, contributor to the Selenium open-source community (aka maintainer of Selenium IDE), testing evangelist<br />We chatted about the intended use of Selenium IDE and the power of exporting to a programming language.<br />Selenium IDE: Likened to a flight simulator<br />Selenium RC: How you can effectively fly the plane<br />
  9. Adam Goucher<br />Jason Huggins<br />“Selenium Open Space”<br />
  10. I left the conference with a new perspective. <br />I felt like this QA stuff was starting to make sense and that I would be able to really make a difference when I got back to work.<br />
  11. When I got back? I saw nothing but mountains. <br />
  12. 20/20<br />
  13. It turns out that we were flying the plane with a flight simulator. All automated tests were built using Selenium IDE and grew into a massive set of Smoke and Regression suites that were brittle, slow to run, and provided very poor feedback.<br />20/20: Our Smoke suite took 20 minutes to run and roughly 20 minutes to interpret. Once the errors were understood, this information would be placed into an e-mail and shipped off to the appropriate Development team. Unfortunately, this information was viewed as a distraction by the Developers since it was out of band with their workflow.<br />
  14. Much like the man with two brains, QA has two minds: technical and analytical. Unfortunately, a majority of the QAs were more analytical than technical.<br />
  15. Limited resources, both funding and human.<br />There was roughly 1 QA for every 6 Developers, and the training budget was fairly lean (especially given the economy). <br />
  16. What story wouldn’t be complete without spaghetti code? 17 years of a growing code base can have that effect. <br />And as a result, there were often discrepancies between our production and pre-Live environments. <br />This presented some interesting challenges when issues were found on the live website that had somehow breezed right by our testing environments.<br />
  17. There was a bit of aversion to change within Tech and the QA Department. Because when you mess with someone’s spaghetti, it can get messy. <br />
  18. The QA Department was viewed as Outsiders, and as a result there was a significant communication gap between Developers and QA. <br />There was also a bit of a throw-it-over-the-wall mentality. When an issue was found, Devs would often say “works on my machine”.<br />
  19. I started to question my transition from IT Operations and felt like I was going through the 4 stages of grieving. But after much soul searching, I had a thought…<br />
  20. What would Chuck Norris do <br />(if he were in QA)?<br />
  21. “<br />He would subdue The Motley Fool’s use of Selenium IDE with a round-house kick to the face and build something in its place. <br />Perhaps something Ruby-flavored that leveraged open-source libraries and could be used by everyone: Business, Development, QA. <br />Thus bridging the gap between what QA is perceived to test and what is actually tested. And he would call it “Testerfield”!<br />”<br />
  22. + + = Testerfield<br />
  23. The solution we built was an assembly of innovation: cobbling together a bunch of different tools and concepts into the combination we needed and wanted. <br />And we gave it a name in an attempt to shake the “Selenium” nomenclature, since it was a buzzword with some negative connotations. <br />That, and when you name something, you give it an identity. You make it your own. You care for it.<br />A fun side effect of this tool was that we started to gain support and respect from the Devs, since with it we were able to write some tests that saved them time.<br />
  24. Too bad our first go-round with Testerfield was a failure… Perhaps Chuck Norris’s business acumen needs some work. This is what you get when you try to solve a problem as a technologist rather than from a business perspective.<br />There was a bit of a signal-to-noise problem with Testerfield:<br /> Signal: We could write stable and robust tests<br /> Noise: This was a slightly different approach to testing than QA was used to. The thought of code seemed scary to most of QA, and the resulting output was difficult to interpret<br />
  25. Failure posed a significant problem for this movement. Testerfield was developed through unofficial channels (aka me and an intern). <br />There was no management mandate for this, no sound from on high, not even an angry mob or surly swear words at the computer screen from QAs when writing tests, just a vision in my head of how things could be. <br />This meant no management decision would be made until a better solution was presented that captured value.<br />
  26. Testerfield AND the old Selenium IDE suites needed to be maintained simultaneously (read: double work).<br />Pushback started to appear (aka I was encouraged to write new tests ONLY in Selenium IDE)… But this was before they saw Testerfield 2.0 :-) <br />I reveled in failure, listened for feedback, adapted the solution, and looked to re-tool my pitch.<br />
  27. E = Q * A<br />
  28. I thought that if you can’t measure it, you can’t improve it. <br />Failure led me to learn about the effectiveness formula (E = Q * A), which offered a good framework for viewing the task at hand.<br />Effectiveness = Quality * Acceptance. A low-quality idea that is widely accepted is only as effective as a high-quality idea that is poorly adopted. <br />Here’s a good write-up on it: http://www.prescientdigital.com/articles/best-practices/change-management-strategies-to-support-intranet-adoption/<br />I felt that our use of Selenium IDE was a low-quality solution that was poorly accepted, and that Testerfield was a high-quality idea. It just needed to be well received.<br />
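The formula is easy to make concrete. The scores below are invented purely for illustration, on an arbitrary 0–10 scale:

```ruby
# Effectiveness = Quality * Acceptance (E = Q * A).
# All scores here are made-up illustrations on a 0..10 scale.
def effectiveness(quality, acceptance)
  quality * acceptance
end

selenium_ide   = effectiveness(3, 4) # low-quality solution, poorly accepted
testerfield_v1 = effectiveness(9, 2) # high-quality idea, poorly adopted
testerfield_v2 = effectiveness(9, 8) # same idea, well received

puts selenium_ide    # 12
puts testerfield_v1  # 18
puts testerfield_v2  # 72
```

The point of the toy numbers: raising quality alone barely moved the needle; the big win came from raising acceptance of an already high-quality idea.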
  29. So I started to wonder about what drove me to action, which led me to the Human Action Model (re: Ludwig von Mises – Human Action: A Treatise on Economics). <br />It basically states that someone suffers from discomfort, has a vision for a better world, and takes action.<br />If I noticed discomfort, had a vision, and took action, how could I get others to follow? How could I get through to people? How could I have them see the value in this intrinsically?<br />
  30. Enter ‘The Golden Circle’. It is a simple but powerful model for inspirational leadership that all starts with the question “Why?”.<br />Simon Sinek – author of ‘Start With Why’. He offers some historic examples such as Apple, Martin Luther King, and the Wright brothers, as well as a counterpoint example: TiVo. There’s a good TED Talk about this: http://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action.html<br />So I thought about the messaging that I wanted for Testerfield, and here’s what I came up with:<br />We believe that in order to create the World’s Greatest Online Financial Investment Community we need to craft quality software. We plan to do this by providing quick, reliable, and robust feedback to Developers and the Business. We just happen to have a new tool that provides this. Want to take a look?<br />After re-tooling I was ready. Oh, and… propaganda helps.<br />
  31. 20 to 20<br />
  32. Testerfield 2.0 (now with propaganda!) <br />The new error reporting could be read by anyone. It was now possible for people to understand what the test suite did (plain English), if it passed, if it failed, and why. The feedback loop (after receiving the information) was cut from 15–20 minutes down to 15–20 seconds. The communication gap between Dev and QA was beginning to narrow. The respect thermometer was rising. <br />We started to gain some real traction and picked up a couple of champions along the way. A movement started to take hold.<br />As a result, an opportunity to present to the entire Tech Department appeared, and things started to change (read: funding started to appear).<br />
  33. QA received in-house training for Ruby through a company called Jumpstart Lab.<br />As a result, the more analytical QAs are now writing new tests and converting old ones in Testerfield.<br />
  34. But what about the speed of the tests? The smoke suite takes 20 minutes to run! And just because you have good reporting doesn't mean it's not a distraction to the Devs who receive it. And it may be old news by the time they get around to reading it. The process is still out of cycle with the development workflow. <br />What about parallelization? <br />Parallelization was something we thought would be challenging but a problem that had already been solved (within our type of setup). But we were wrong; it was going to be much harder. The options we found either didn't work as we wanted, worked exactly as we wanted but were no longer maintained (and wouldn't work with our version of Ruby), or looked promising but required a rework of our platform architecture and test design. That is, of course, until we found the answer.<br />
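The basic fan-out shape of parallelizing a suite can be sketched with plain `Process.fork`. This is a toy illustration, not our actual Sauce Labs integration: the test names are made up, and each child simply exits successfully where a real runner would execute a test against a remote Selenium grid.

```ruby
# Sketch of parallel test fan-out: fork one process per test file,
# wait for all of them, and fail the run if any child fails.
test_files = ["smoke_a", "smoke_b", "smoke_c"] # hypothetical names

pids = test_files.map do |name|
  Process.fork do
    # A real runner would exec the test here,
    # e.g. system("ruby #{name}.rb"), pointed at the remote grid.
    exit 0 # pretend the test passed
  end
end

# Process.wait2 returns [pid, Process::Status]; keep the status
statuses = pids.map { |pid| Process.wait2(pid).last }
all_passed = statuses.all?(&:success?)
puts all_passed ? "suite passed" : "suite failed"
```

With the heavy browser work offloaded to a grid, wall-clock time approaches the duration of the slowest single test rather than the sum of all of them.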
  35. 20 to 2<br />
  36. Enter Sauce Labs, a provider of cloud testing goodness. <br />I like to call this performance gain “20 to 2” (actually it's more like 3-1/2 minutes, but 20 to 2 is catchier). We were able to keep our existing reporting, offload the heavy lifting to their grid, and get the added benefit of video capture, with minimal changes on our end (some minor additions/changes to our code base, standing up a single Linux box at our data center to fork and send the test processes, and configuring a secure tunnel). <br />If you’re not using them, you should. <br />
  37. What’s the score?<br />
  38. I would like to claim this molehill in the name of QA.<br />
  39. QA<br />
  40. Check out Testerfield.com<br />Dave Haeffner<br />Twitter: @TourDeDave<br />E-mail: dhaeffner@fool.com<br />
  41. Here is an example output from one of our Selenium IDE tests<br />Title name (can you easily tell what this test does?)<br />The whole test looks fairly gnarly; it’s got teeth<br />What can you deduce from the error? Not much, right?<br />
  42. Testerfield error output (leveraging the Ruby selenium-client gem http://github.com/ph7/selenium-client)<br />Useful category heading and test name – you can tell what the test is and what it does<br />Error – same as previous test, BUT<br />Shows you the pertinent parts of the test - the step that failed AND the steps before and after<br />And more importantly, the screenshot<br />
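The style of output described above can be sketched as a small reporting helper. This is a hypothetical reconstruction, not Testerfield's real code: given a test's steps and the index of the one that failed, it prints a category heading, the test name, and the failing step with its neighbors for context.

```ruby
# Hypothetical sketch of Testerfield-style failure reporting:
# show the category, test name, and the failed step plus the
# steps immediately before and after it.
def report_failure(category, test_name, steps, failed_index)
  lines = ["#{category} :: #{test_name}"]
  from = [failed_index - 1, 0].max
  to   = [failed_index + 1, steps.length - 1].min
  (from..to).each do |i|
    marker = i == failed_index ? ">> FAILED" : "   passed"
    lines << "#{marker}  #{i + 1}. #{steps[i]}"
  end
  lines.join("\n")
end

steps = [
  "open the home page",
  "log in as a member",
  "open the portfolio page",
  "verify the stock ticker is shown"
]
puts report_failure("Portfolio", "ticker is visible", steps, 2)
```

Because the output names the test in plain English and pinpoints the failing step, anyone (Business, Dev, or QA) can read it without digging through a raw Selenium log; a screenshot alongside it closes the loop.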
