SVCC 2011 - 0 - 60: QA Automation @ Box

Co-authored and presented at Silicon Valley Code Camp '11. Describes the organization, culture, and tools whose development I led as Director of Quality Engineering @ Box.

  1. 0 – 60: QA Automation @ Box
     Peter White and Dave Duke
     Silicon Valley Code Camp 2011
  2. Box Biz
     • Cloud Content Management and Collaboration
     • 7M users
     • 100K businesses
     • Adoption in 77% of the Fortune 500
     • 150M file accesses / month
     • 270 employees, up from 125 at previous year-end
     • 15-20 main-branch commits per day, and steadily growing
  3. Box Tech
     • LAMP Stack Web App
     • Native and HTML5 (m.box.net) Mobile Apps
     • Sync for Mac and Windows
     • MS Office & Outlook Plugins on Windows
     • Open Platform - REST, SOAP, and XML-RPC APIs
     • Front-end Presentation
     • Back-end Services
  4. Agenda
     • Overview
       – Business Decisions
       – Technical Decisions
     • Demos
     • Lots of Code
     • Lessons Learned
     • Immediate and Future Plans…
     • Q&A
     Not necessarily all in that order of precedence…
  5. Disclaimer
     We'll be focusing primarily on browser-based functional acceptance testing for this presentation. The test automation landscape is rapidly changing. Please keep this in mind when you decide to implement a test automation strategy, to make sure that you make the best decision for your given situation.
  6. Why automate testing?
  7. Because of this guy…
     Disclaimer: "The Most Interesting Man in the World" DOES NOT work at Box, although I've been known to occasionally drink and code – that's (partially) what tests are for!
  8. Before Automated Testing
     • Black box and exploratory manual testing
       – Separate QA staff
       – Off-shore commercial testing partner
     • Insufficient coverage
     • Slow turn-around / delayed feedback
     • Little engineering involvement
     • Developer code reviews were a huge but imperfect safety net
  9. With Automated Testing
     • Developers "own" the tests
       – Tests are authored, maintained, and executed on demand on the developer's workstation and/or VM
     • QA "owns" the test suites
       – Approves automated test cases
       – Determines which tests belong in which suites
       – Executes tests against multiple target environments and verifies results
     • QA no longer spends time manually testing functional areas with approved test coverage
     • Coverage is better and steadily improving
     • Still doing code reviews, but now we have a safety net
  10. Our Initial Goals
     • Prevent crazy late-night deployments by completely automating our "acceptance suite"
     • Ease of test authoring, easily maintainable tests
     • Fast, scalable platform – provide feedback ASAP
  11. Humble beginnings
  12. (Some think) Automation is Easy…
     Selenium IDE screencast available after 10/28. The screencast demonstrates using Selenium IDE to record a test case that creates a folder and navigates to it through the Box web UI.
  13. Yeah, right…
     Selenium IDE screencast available after 10/28. The screencast demonstrates executing the previously recorded test – fail!
  14. And for the non-believers…
     Selenium IDE screencast available after 10/28. The screencast demonstrates executing the previously recorded test. This time we execute the test at the slowest possible speed, since it failed due to timing issues the first time, but it still fails for other reasons – record-and-playback tools tend to struggle with modern Web 2.0+ websites.
  15. Automation is Hard…
     • All marketers are liars – things rarely work 100% as advertised…
     • Easily maintainable, high-value automation requires software development skills and a fair amount of effort.
  16. But worth it!
     [Chart: QA Engineer minutes to perform the Smoke and Sanity test suites, QA Engineers vs. TAF, on a 0-600 minute scale]
     We also replaced 1 year of API testing with 2 weeks of development effort
  17. Signup_Lite Selenium Version
  18. Signup_Lite TAF Version
  19. Defining Your High-Level Requirements
     Build vs. Buy
     • Time: "I love building tools" vs. "I need it yesterday"
     • Money: "You can use whatever you want as long as it's free" vs. "I'm going to lose my budget if I don't spend it!"
     • Resources: "We have skilled developers who are interested in and capable of building, maintaining, and supporting test automation" vs. "I want to automate as much of my QA tasks as possible but I don't have any development support"
     Framework vs. Platform
     • Execution Environment: "I want to develop, execute, and monitor all of my tests on my local machine" vs. "I want to parallelize my tests as much as possible and don't want to tie up my machine while executing them"
  20. Commercial vs. Open Source
     Commercial
     • Pros: Vendor Support; Documentation/Training Materials; Quick Win; Trained Candidate/Employee Pool
     • Cons: High Cost; Long-term rigidity; Dependent on vendor for fixes
     Open Source
     • Pros: Low initial cost; "Use the Source"
     • Cons: Community Support +/-; Lack of vendor support/docs
     You may also want to consider what kind of 3rd-party support exists in terms of books, blogs, tools, forums, etc…
  21. High-Level Weekly Release Process
     "Full Push": commits C1 … C20 are validated in a single test cycle
     Incremental Pushes, aka "Continuous Delivery": C1, C2, C3, … C20, with a test cycle (TC) after each commit
  22. Key Design Decisions
     • JVM-compatible language
     • Optimize for throughput
       – Massive parallelism (all tests are idempotent)
     • Ease of test development
     • Flexibility, Adaptability, Maintainability, Scalability
  23. Choosing a Framework and Platform
     [Matrix comparing frameworks (Selenium 1, Selenium 2, and others) by supported languages and browsers – not legible in this transcript]
     Platforms
     • Selenium Grid – Pros: It's your grid / Cons: It's your grid…
     • Sauce Labs – Pros: Grid is set up and maintained for you / Cons: Ability to customize the solution may be limited or non-existent
  24. The path we chose @ Box
     • Functional Acceptance Testing
       – Scala
       – Started with Selenium 1, recently migrated to Selenium 2*
       – ScalaTest/TestNG
     • API Testing
       – Jump-started full API coverage with a home-grown Ruby framework
     • Unit testing (adopted MUCH later…)
       – PHPUnit
       – specs2
       – PyUnit
     • Static analysis tools (JSLint)
  25. Developer Configuration
  26. Continuous Test/Grid Configuration
     [Diagram: EC2 grid connected to Box over a VPN]
  27. High-level Architecture
     • Test cases: BaseTestCase → BoxTestCase → MyTest
     • Drivers: Driver → SeleniumDriver, WebDriverDriver
     • Page objects: PageObject → BoxPageObject → MyPageObject
     • Supporting layers: Shared API, Component API, Element Locators, Actions
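The slide shows these layers as a diagram. As a minimal Scala sketch of how they can relate – the names come from the slide, but the signatures and bodies are our assumptions, not Box's actual code:

```scala
// Driver abstracts the browser-automation backend (Selenium 1 or 2).
trait Driver {
  def open(url: String): Unit
  def click(locator: String): Unit
  def typeText(locator: String, text: String): Unit
  def textOf(locator: String): String
}

// Test-case layers: generic framework -> Box-specific -> concrete test.
abstract class BaseTestCase(val driver: Driver) {
  def execute(): Unit               // overridden by each concrete test
}
abstract class BoxTestCase(driver: Driver) extends BaseTestCase(driver)
// class MyTest(driver: Driver) extends BoxTestCase(driver) { ... }

// Page-object layers mirror the same pattern.
abstract class PageObject(val driver: Driver)
abstract class BoxPageObject(driver: Driver) extends PageObject(driver)
// class MyPageObject(driver: Driver) extends BoxPageObject(driver) { ... }
```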
  28. High-level Execution Flow
     Execute Test or Suite → Initialize Test → Execute Test Body → Handle Failures / Cleanup
  29. Two demos
     • HelloName test
       – Illustrates the typical workflow
       – Simple and artificial
     • Sample Box.net test
       – Includes more advanced functionality
       – Tests the real site
  30. HelloName test – Basic structure
     • HelloSuite – Lists and describes tests
     • TestGreeting – High-level test definition
     • WelcomePageObject – Reusable page-specific functions
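The demo code itself isn't in the transcript; a hypothetical sketch of the three pieces, reusing the Driver/PageObject/BoxTestCase stubs sketched under slide 27 (locators, URL, and method names are invented for illustration):

```scala
// WelcomePageObject: reusable page-specific functions.
class WelcomePageObject(driver: Driver) extends PageObject(driver) {
  def enterName(name: String): Unit = {
    driver.typeText("css=#name-input", name)   // illustrative locator
    driver.click("css=#submit")
  }
  def greeting: String = driver.textOf("css=#greeting")
}

// TestGreeting: high-level test definition.
class TestGreeting(driver: Driver) extends BoxTestCase(driver) {
  override def execute(): Unit = {
    driver.open("http://localhost:8080/welcome")  // assumed demo app URL
    val page = new WelcomePageObject(driver)
    page.enterName("SVCC")
    assert(page.greeting == "Hello SVCC", "unexpected greeting")
  }
}

// HelloSuite would then list and describe TestGreeting (and siblings)
// so the suite builder can emit TestNG XML for them.
```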
  31. Compilation and Execution
     • sbt build – Compiles Scala code; copies resources
     • sbt build-suites – Scans for test annotations; produces TestNG epr/xml files
     • runsuite / runtest / runfailed – Runs TestNG on a suite; invokes @BeforeSuite; records results
  32. Compilation and Execution
     • Initialize – Instantiate testcase; instantiate driver
     • test – Invoke executeAndCapture(); set up execution trace; invoke testcase's execute()
     • Cleanup – Take screenshot or save HTML on failure (if configured to do so); close browser on failure (if configured to do so)
  33. Two demos
     • HelloName test
       – Illustrates the typical workflow
       – Simple and artificial
     • Sample Box.net test
       – Includes more advanced functionality
       – Tests the real site
  34. Logging Tools
     "describes" function
     • Groups and categorizes test steps
     • Can be nested
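A minimal sketch of what a nestable "describes" helper could look like – the actual Box implementation is not shown in the deck, so everything here is illustrative:

```scala
object StepLog {
  private var depth = 0

  // Logs the step, then runs the body at increased nesting depth,
  // so nested describes print as an indented tree of test steps.
  def describes[T](step: String)(body: => T): T = {
    println(("  " * depth) + "- " + step)
    depth += 1
    try body finally depth -= 1
  }
}

object DescribeDemo {
  import StepLog.describes
  def main(args: Array[String]): Unit =
    describes("Create a folder") {
      describes("Open the New menu") { /* driver calls */ }
      describes("Submit the folder name") { /* driver calls */ }
    }
}
```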
  35. Logging Tools
     Execution trace
     • Prints a tree of described steps
     • Expands nodes based on the user's config
  36. Framework – Tests and Supporting Classes
     • BaseTestCase → BoxTestCase → MyTest
     • PageObject → BoxPageObject → MyPageObject
     • Actions
  37. Selenium 2: A Variety of Modes
     Compatibility Mode
     • Supports Selenium 1 features
     • Does not provide advanced WebDriver functionality
     WebDriver – Native Events
     • Simulates a mouse cursor moving and clicking
     • Windows only, CPU intensive, slow, unreliable
     WebDriver – Default Mode
     • Not available for IE
     • Produces the most reliable results
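For reference, a sketch of how the three modes are instantiated with Selenium 2-era APIs; Firefox and the base URL are examples, not Box's actual configuration:

```scala
import org.openqa.selenium.WebDriverBackedSelenium
import org.openqa.selenium.firefox.{FirefoxDriver, FirefoxProfile}

object SeleniumModes {
  def main(args: Array[String]): Unit = {
    // WebDriver, default mode: synthetic (JavaScript-driven) events.
    val driver = new FirefoxDriver()

    // WebDriver with native OS-level events enabled via the profile.
    val profile = new FirefoxProfile()
    profile.setEnableNativeEvents(true)
    val nativeDriver = new FirefoxDriver(profile)

    // Compatibility mode: the Selenium 1 API backed by a WebDriver.
    val selenium = new WebDriverBackedSelenium(driver, "https://www.box.net")
    selenium.open("/login")
    selenium.click("id=login-button")   // illustrative locator
  }
}
```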
  38. Cross-browser and cross-platform testing
     • How do we get the same tests to work on different browsers/platforms when mixing Selenium modes?
     • How do we provide a consistent API for test writers while allowing for flexibility in how the Selenium calls are made?
       – Major changes to Selenium are not unheard of
  39. Driver Structure (During Migration)
     Driver → SeleniumDriver, WebDriverDriver
  40. Driver Structure (Migration Complete)
     Driver → SeleniumDriver, WebDriverDriver
  41. Driver Structure w/App-Specific Support
     Driver → SeleniumDriver → BoxSeleniumDriver
     Driver → WebDriverDriver → BoxWebDriverDriver
  42. Implementation of "click"
     • Driver
     • SeleniumDriver
     • WebDriverDriver
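The slide walks through the on-screen code; a hedged sketch of how one click() call can fan out to the two backends. The locator-translation helper (toBy) is our invention for illustration:

```scala
import com.thoughtworks.selenium.Selenium
import org.openqa.selenium.{By, WebDriver}

trait Driver {
  def click(locator: String): Unit
}

// Selenium 1 (RC) accepts locator strings like "id=..." directly.
class SeleniumDriver(selenium: Selenium) extends Driver {
  def click(locator: String): Unit = selenium.click(locator)
}

// WebDriver resolves the element first, then clicks it.
class WebDriverDriver(webDriver: WebDriver) extends Driver {
  def click(locator: String): Unit =
    webDriver.findElement(toBy(locator)).click()

  private def toBy(locator: String): By = locator.split("=", 2) match {
    case Array("id", value)  => By.id(value)
    case Array("css", value) => By.cssSelector(value)
    case _                   => By.cssSelector(locator)
  }
}
```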
  43. Modifications and Extensions
     • Sizzle (jQuery) selectors (see the sketch below)
       – Efficient and powerful
       – We recommend test-writers find elements by ID or through a Sizzle selector
     • Flex extensions
       – Interact with flash/flex elements in the page
       – Necessary for testing Box's file preview functionality
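One way to resolve a Sizzle selector is to delegate to JavaScript, since WebDriver maps a DOM element returned from executeScript back to a WebElement. This sketch assumes jQuery (which bundles Sizzle) is already loaded on the page; findBySizzle is our name, not Box's actual API:

```scala
import org.openqa.selenium.{JavascriptExecutor, WebDriver, WebElement}

object SizzleLocator {
  def findBySizzle(driver: WebDriver, selector: String): WebElement = {
    val js = driver.asInstanceOf[JavascriptExecutor]
    // The first match of the Sizzle selector comes back as a WebElement.
    js.executeScript("return window.jQuery(arguments[0]).get(0);", selector)
      .asInstanceOf[WebElement]
  }
}
// e.g. SizzleLocator.findBySizzle(driver, "a:contains('My Folder')")
```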
  44. Good practices
     • Perform setup actions through your app's API
       – Bypasses the web UI
       – Improves test speed
       – Reduces spurious failures
     • Parameterize tests (see the sketch below)
       – Many tests will share identical or similar steps
       – Reuse code by parameterizing tests
       – Add descriptions to each test call to differentiate them
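A sketch of the parameterization idea: one test body, many labelled invocations. The names (SignupParams, signupTest) are illustrative, not Box's API:

```scala
object ParameterizedSignup {
  case class SignupParams(description: String, email: String, plan: String)

  def signupTest(p: SignupParams): Unit = {
    // The description differentiates otherwise-similar runs in reports.
    println(s"signup test: ${p.description}")
    // Setup (account, folders, ...) would go through the app's API here,
    // keeping only the flow under test in the browser.
  }

  def main(args: Array[String]): Unit =
    Seq(
      SignupParams("free plan, plain email",   "a@example.com",   "free"),
      SignupParams("business plan, plus-sign", "a+b@example.com", "business")
    ).foreach(signupTest)
}
```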
  45. Bad Practices
     • Avoid using XPath
       – Incredibly slow in IE versions < 9
       – Extremely brittle
     • Avoid Thread.sleep() or equivalent – poll instead (see the sketch below)
       – Timing issues are the main cause of spurious test failures
       – Pausing test execution is rarely the correct solution
       – Try waiting for a specific element to be visible
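A minimal sketch of polling for visibility instead of sleeping, using Selenium 2's WebDriverWait; the helper name and timeout are ours:

```scala
import org.openqa.selenium.{By, WebDriver}
import org.openqa.selenium.support.ui.{ExpectedConditions, WebDriverWait}

object Waits {
  // Polls until the element is visible or the timeout elapses,
  // rather than pausing the whole test for a fixed interval.
  def waitForVisible(driver: WebDriver, id: String, timeoutSecs: Long = 10): Unit =
    new WebDriverWait(driver, timeoutSecs)
      .until(ExpectedConditions.visibilityOfElementLocated(By.id(id)))
}
```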
  46. Adoption
     • Know your target end-user
       – Framework design
       – Diagnostic/Analysis tools
       – Docs, Training, Mentoring, Support
     • Eliminate FUD
       – Automation is NOT going to replace your QA organization. Instead, it's going to transform their work in a more rewarding and highly leveraged manner.
       – Expect a learning curve and a bit of a rough start
       – Progressive adoption policies
     • Provide continuous feedback…
       – Metrics
       – Apply social pressure if/when necessary
  47. [Screenshot: per-commit test report showing Git hash, author, Firefox tests – passed!]
  48. Status Matrix Screenshot
  49. Lessons Learned
     • Automation is Rewarding!
     • But sometimes frustrating…
     • The cultural challenges are much greater than the technical challenges – we're having some difficulties with establishing a "stop the line" mentality
     • Broken tests, or tests that don't run, don't count – we've recently begun to file "Blocker" bugs for all broken tests
     • Functional testing can be flaky, especially when external dependencies are involved
     • Few companies do this well (because it's hard work, real engineering, and often underestimated)
  50. Our Current Goals
     • Continually and rapidly expand the coverage of our full-regression suite
       – Identify gaps in test coverage
       – Make test authoring easier and less intrusive
       – Make test execution faster
       – Expand browser support
     • Provide better feedback mechanisms
       – Notification mechanisms and dashboards
     • Continuous Delivery
       – Improve test throughput by scaling the test environment
  51. Outstanding Questions
     • Granularity of timeout values
     • When will "Native Mode" work acceptably with IE?
     • Ajax timing issues for extremely dynamic content
  52. If we had to start all over…
     • Unit testing before functional acceptance
     • Watir vs. Selenium if you only care about IE and don't need grid*
     • Iron out the kinks before rolling it out – we got buried in training/support and weren't able to resolve issues quickly enough as a result
     • Dedicated hardware for the complete AUT and platform (RCs being an acceptable exception)
     • Actionable test failure notifications – don't spam everyone
  53. Box is Hiring!
     • Roughly 25 open engineering & operations positions
     • Quality Engineering positions
       – Software Engineer – Tools and Frameworks
       – QA Engineer
     • Visit www.box.net/jobs/ for more info
     • Send resumes to referrals@box.net and mention you saw us at Code Camp!
  54. Box SVCC 2011 Presentations
     • 0 – 60: QA Automation at Box (Sat 11:15 AM – Room 5501). Speakers: Peter White, Dave Duke
     • Where is my data? Consistency, availability, security of cloud file storage at Box (Sat 1:45 PM – Room 1401). Speaker: Antoine Boulanger
     • Achieving Cloud-Scale Test Automation at Box (Sat 3:30 PM – Room 5501). Speakers: Randall Schulz, Jordan Sterling, David Wake
     • DRY CSS & Images (Sat 5:00 PM – Room 1500). Speaker: Kimber Lockhart
     • HTML5 Uploading and Beyond (Sun 10:45 AM – Room 8403). Speaker: Ben Trombley
  55. Thank you!
     Resources
     • https://www.box.net/developers
     • http://seleniumhq.org
     • http://watir.com
     • http://sahi.co.in
     • http://www.getwindmill.com
     • http://testng.org
     • http://code.google.com/p/flash-selenium/
     Contact Info
     • Peter White (peter@box.net), Director of Quality Engineering
     • Dave Duke (dduke@box.net), Software Engineer
     Questions?
