Testing Plug-in Architectures
Invited presentation for the 6th Brazilian Workshop on Systematic and Automated Software Testing (SAST), Natal, September 2012.

Transcript

  • 1. Testing Plug-in Architectures. Arie van Deursen (@avandeursen). Joint work with Michaela Greiler and Margaret-Anne Storey.
  • 2. The TU Delft Software Engineering Research Group. Education: programming, software engineering; MSc and BSc projects. Research: software architecture, software testing, repository mining, collaboration, services, model-driven engineering, end-user programming.
  • 3. Crawljax: Automated Testing of Ajax Applications. Ali Mesbah, Arie van Deursen, Danny Roest: Invariant-Based Automatic Testing of Modern Web Applications. IEEE Trans. Software Eng. 38(1): 35-53 (2012).
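A rough illustration of the invariant-based idea behind this slide: after every crawled state transition, a set of invariants is checked against the resulting DOM. This is not Crawljax's actual API; the `DomState`, `Invariant`, and the two example invariants below are hypothetical.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of invariant-based checking over crawled UI states.
// Crawljax's real API differs; this only illustrates the idea from slide 3.
public class InvariantCheckSketch {

    /** A crawled DOM state, reduced to the bits we assert on (hypothetical). */
    record DomState(String url, String html) {}

    /** An invariant that must hold in every reachable state. */
    record Invariant(String name, Predicate<DomState> check) {}

    public static void main(String[] args) {
        List<Invariant> invariants = List.of(
            new Invariant("no stack traces leaked",
                s -> !s.html().contains("Exception in thread")),
            new Invariant("at most one <title> element",
                s -> s.html().split("<title>", -1).length <= 2));

        // In Crawljax these states come from automatically crawling an Ajax
        // application; here two example states are hard-coded.
        List<DomState> crawledStates = List.of(
            new DomState("/", "<html><title>Home</title></html>"),
            new DomState("/edit", "<html><title>Edit</title></html>"));

        for (DomState state : crawledStates) {
            for (Invariant inv : invariants) {
                if (!inv.check().test(state)) {
                    throw new AssertionError(inv.name() + " violated in " + state.url());
                }
            }
        }
        System.out.println("All invariants hold in all crawled states.");
    }
}
```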
  • 4. Plug-in Architectures: create a series of tailored products by combining, configuring, and extending plug-ins.
  • 5. (image slide, no text)
  • 6. WordPress: “The number one reason people give us for not upgrading to the latest version of WordPress is fear that their plugins won’t be compatible.” http://wordpress.org/news/2009/10/plugin-compatibility-beta/
  • 7. http://www.eclipsesource.com/blogs/author/irbull/
  • 8. Eclipse-based Software: the Eclipse Plug-in Architecture.
  • 9. Underneath: OSGi. Routers, modems, gateways, control panels, phones, cars, trains, trucks, healthcare devices, …
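For readers unfamiliar with OSGi, a minimal bundle (plug-in) looks roughly like the activator below: the framework calls start() and stop() as the bundle comes and goes, and the bundle can contribute services to the shared registry. The Runnable service and the class name are placeholders, not something from the talk.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Minimal OSGi bundle activator: the framework calls start()/stop() when the
// bundle is activated or shut down. The registered Runnable is just a stand-in
// for whatever functionality the plug-in contributes.
public class ExampleActivator implements BundleActivator {

    private ServiceRegistration<?> registration;

    @Override
    public void start(BundleContext context) throws Exception {
        Runnable service = () -> System.out.println("plug-in is alive");
        // Register under the service interface name; no service properties.
        registration = context.registerService(Runnable.class.getName(), service, null);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        if (registration != null) {
            registration.unregister();
        }
    }
}
```

The bundle's MANIFEST.MF additionally declares metadata such as the activator class and the packages it imports and exports; those declarations are what the later slides on version ranges refer to.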
  • 10. One Product = Many Plug-ins; Set of Plug-ins = Many Products. What are the test implications? How should we test plug-in architectures?
  • 11. Plug-in Testing Issues to Consider. Fault model: interacting plug-ins, plug-in configurations, plug-in versions, plug-in extensions, resource usage, … Test approaches: combinatorial, multi-version, as part of product line engineering, search-based through the interaction space, … See the related work (slide 5) of the #sast2012 paper “Teste de Linha de Produto de Software Baseado em Mutação de Variabilidades”, Marcos Quinaia (UNICENTRO), Johnny Ferreira (UFPR), Silvia Vergilio (UFPR).
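To make the combinatorial approach concrete, here is a hedged sketch using JUnit 4's Parameterized runner: the same check is run for every combination of a plug-in version and a platform version. The plug-in names, versions, and the ProductUnderTest helper are illustrative, not from the talk.

```java
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Sketch of combinatorial plug-in testing: exercise every combination of
// (editor plug-in version, platform version). All names are illustrative.
@RunWith(Parameterized.class)
public class PluginCombinationTest {

    @Parameters(name = "editor={0}, platform={1}")
    public static Collection<Object[]> combinations() {
        List<Object[]> combos = new ArrayList<>();
        for (String editor : new String[] {"editor-1.0", "editor-2.0"}) {
            for (String platform : new String[] {"platform-3.7", "platform-4.2"}) {
                combos.add(new Object[] {editor, platform});
            }
        }
        return combos;
    }

    private final String editor;
    private final String platform;

    public PluginCombinationTest(String editor, String platform) {
        this.editor = editor;
        this.platform = platform;
    }

    @Test
    public void pluginsResolveTogether() {
        // In a real setup this would provision an OSGi runtime with both
        // bundles and assert that they resolve and start; here it is a stub.
        assertTrue(ProductUnderTest.resolves(editor, platform));
    }

    /** Hypothetical helper standing in for real provisioning logic. */
    static class ProductUnderTest {
        static boolean resolves(String editor, String platform) {
            return true; // placeholder
        }
    }
}
```

Exhaustive enumeration explodes quickly as the number of plug-ins grows, which is why the slide also mentions search-based and product-line techniques for pruning the interaction space.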
  • 12. What do Eclipsers Think about Testing?
  • 13. Research Questions: 1. What testing practices are prevalent in the Eclipse community? 2. Does the plug-in nature of Eclipse have an impact on software testing? 3. Why are certain practices adopted, and why are others not adopted? 4. Are there additional compensation strategies used to support testing of plug-ins?
  • 14. 25 Interviews
  • 15. Grounded Theory: a systematic procedure to discover theory from (qualitative) data. Key elements: coding (code, concept, category), memoing, theoretical sensitivity, theoretical sampling, constant comparison, saturation. S. Adolph, W. Hall, Ph. Kruchten. Using Grounded Theory to study the experience of software development. Emp. Sw. Eng., Jan. 2011. B. Glaser and J. Holton. Remodeling grounded theory. Forum Qualitative Res., 2004.
  • 16. What’s a Theory? “A set of well-developed categories (e.g. themes, concepts) that are systematically inter-related through statements of relationships to form a framework that explains some relevant social, psychological, educational, or other phenomenon.” Corbin & Strauss, Basics of Qualitative Research, 1998.
  • 17. Coding example for unit testing (P = participant):
    P      | Code                                  | Comment
    P1, P4 | The best                              | Unit tests are the best
    P1     | Legacy code                           | Legacy code can be problematic to test
    P1     | Fast, easy to execute                 | They are fast, and easy to execute
    P1     | Misused as integration test           | Unit tests grow into PDE tests
    P1     | Refactoring easy                      | Refactoring is easier when you have a well-designed unit test
    P4     | More important than integration test  | Are more important
    P4     | Limited applicability                 | Leave parts out that cannot be tested by unit tests; remote calls
    P4     | High amounts                          | You can have many
    P4     | Frequent execution                    | Can be executed often
    P8     | Only complex stuff                    | Only for complex stuff like state machines
    P13    | Comfort for refactoring               | It gives you a certain level of comfort to know that when you make a change and you break something, that would be apparent in your test case
    P16    | Limited applicability                 | Concurrency
    P20    | Limited applicability                 | For code within browser
  • 18. Resulting Theory. The theory comprises four main categories: 1. Testing practices used by Eclipsers; 2. Impact of the plug-in characteristic; 3. Factors affecting test practice adoption; 4. The role of the community. Also 12 concepts, 100 codes, and 100 memos. Full codes & quotes: Technical Report TUD-SERG-2011-011.
  • 19. Triangulation: 1. Resonance @ EclipseCon; 2. Survey among 151 developers.
  • 20. Practices: unit testing is popular. “Unit testing is where you find the most bugs.” “Ultimately, unit test are our best friends.” “At least 70% of our test effort is spent on unit testing.” (P3, P18, P14)
  • 21. Other forms of testing are less popular. “The Capture and Replay nature of QF-tests was too rigid when the system was evolving.” “We think that with a high test coverage through unit tests, integration tests are not necessary.” “We haven’t been 100% satisfied with capture-replay: too much is captured.” (P18, P14, P20)
  • 22. Findings 1: Practices. It is common practice to have no separate test teams; Eclipsers are proud of their unit testing; Eclipsers tend to dislike system, integration, UI, and acceptance testing, with substantially less automation at those levels.
  • 23. Automated or Manual?
  • 24. Cross plug-in testing is optional. “We do bug-driven cross plug-in testing.” “We have no automated tests for cross plug-in testing, but we do manual testing.” (P18, P19)
  • 25. Version testing is minimal. “A lot of people put version ranges in their bundle dependencies, and they say they can run with 3.3 up to version 4.0 of the platform. But I’m willing to bet that 99% of the people do not test that their stuff works.” (P13)
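The quote refers to OSGi-style version ranges such as [3.3,4.0) on bundle dependencies. As a small sketch, the test below checks such a declared range using org.osgi.framework.Version and VersionRange (the latter is available in more recent OSGi framework releases); the range and versions come from the quote, the test class itself is hypothetical.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.osgi.framework.Version;
import org.osgi.framework.VersionRange;

// Sketch: the declared range "[3.3,4.0)" claims "runs on platform 3.3 up to,
// but not including, 4.0" -- the claim the quote says is rarely tested.
public class DeclaredVersionRangeTest {

    private final VersionRange declared = new VersionRange("[3.3,4.0)");

    @Test
    public void rangeCoversSupportedPlatforms() {
        assertTrue(declared.includes(new Version("3.3.0")));
        assertTrue(declared.includes(new Version("3.8.2")));
        assertFalse(declared.includes(new Version("4.0.0")));
    }
}
```

Note that this only validates the metadata; actually backing up the claim means running the plug-in's test suite against each platform version in the range, which is exactly the effort the slide says is usually skipped.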
  • 26. Findings 2: Plug-ins. Testing is deferred to ‘application engineering’: no special effort during ‘product line engineering’. Integration testing happens on demand, when a bug occurs in the field. No test effort is aimed at integration faults per se (versions, configurations, interactions, …).
  • 27. Testing combinations or versions? 43% don’t test integration of different products; only 3% test this thoroughly. 55% don’t test for platform versions; only 4% test this thoroughly. 63% don’t test for dependency versions; only 10% test this thoroughly.
  • 28. Barriers. “And you never know, once you write a good test, then it will become obsolete with the next build. Another framework? I didn’t want to take the trouble.” “It’s complicated to integrate Junit with the version of Eclipse.” “Especially for plug-ins, we would need some best practices.” (P4, P7, P21)
  • 29. Findings 3: Barriers. Responsibility for integration unclear; requirements for the composite unclear; lack of ownership; insufficient plug-in knowledge; set-up of test infrastructure too complicated; test execution too long; poor testability of the platform.
  • 30. Community Testing (I). “Testing is done by the user community. […] We have more than 10,000 installations per month. If there should be a bug it gets reported immediately.” “The community helps to test the system for different operating systems, and versions. They are very active with that.” (P12, P17)
  • 31. Community Testing (II). “I would say the majority of the bug reports come from the community. […] We have accepted more than 800 patches.” “We make all infrastructure available, […], so that somebody who writes a patch has the opportunity to run the same tests […]” (P20, P19)
  • 32. Downstream Testing. “We’re a framework. If the user downloads a new version and lets his application run with it, then this is already like a test.” “They have extensive unit tests, and so I am quite sure that when I break something, somebody downstream very rapidly notices and reports the problem.” (P20, P13)
  • 33. Findings 4: “Compensation Strategies”. The community plays a key role in finding and reporting issues; downstream testing (manual and automatic) provides additional tests of the upstream framework; an open test infrastructure facilitates patching.
  • 34. Summary: Findings. 1. (Automated) unit testing is widely adopted; integration, system, UI, and acceptance testing are much less automated. 2. The plug-in nature has little direct impact on test practices. 3. Barriers to adopting techniques include unclear ownership, accountability, and test effort & execution time. 4. Limited integration testing is compensated by the community.
  • 35. Scope. Beyond the participants: results challenged in a survey among 150 Eclipsers. Beyond Eclipse: open source, developer centric, plug-in architectures, services, … Beyond the people: repository mining, code analysis.
  • 36. (Eclipse) Implications. 1. Community tolerance for failures determines (integration) test effort. 2. Need to strengthen the community. 3. Need to strengthen the plug-in architecture with “self testing” capabilities. 4. Test innovations must address adoption barriers.
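The slide does not spell out what “self testing” capabilities would look like; one possible reading is that each plug-in contributes a small post-installation check that the installed product can discover and run in its own configuration. The sketch below is purely hypothetical and is not an existing Eclipse or OSGi API.

```java
import java.util.List;

// Hypothetical "self test" hook each plug-in could contribute (for example via
// an Eclipse extension point), so a deployed product can check its own
// configuration after installation. Not an existing platform API.
public interface PluginSelfTest {

    /** Short description shown to the user or written to the log. */
    String description();

    /** Runs a cheap check in the installed product; true means healthy. */
    boolean run();

    /** Runs all contributed self tests and returns the ones that failed. */
    static List<PluginSelfTest> failures(List<PluginSelfTest> contributed) {
        return contributed.stream().filter(t -> !t.run()).toList();
    }
}
```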
  • 37. (image slide, no text)
  • 38. Threats to validity (“Ameaças à validade”):
    Issue of concern               | Positivist View                                | Interpretive View
    Representativeness of findings | Objectivity: free from researcher bias         | Confirmability: conclusions depend on subjects, not on the researcher
    Reproducibility                | Reliability: findings can be replicated        | Auditability: process is consistent & stable over time
    Rigor of method                | Internal validity: statistically significant   | Credibility: findings relevant and credible to the people we study
    Generalizability of findings   | External validity: domain of generalizability  | Transferability: how far can findings be transferred to other contexts?
  • 39. Impact for Your (SAST 2012) Paper?
    1. Empirical Studies in Software Testing
    2. Experience in Organizing Test Teams
    3. Teste de Linha de Produto de Software Baseado em Mutação de Variabilidades
    4. Execução Determinística de Programas Concorrentes Durante o Teste de Mutação
    5. Uso de análise de mutantes e testes baseados em modelos: um estudo exploratório
    6. Framework para Teste de Software Automático nas Nuvens
    7. Usando o SilkTest para automatizar testes: um Relato de Experiência
    8. Automating Test Case Creation and Execution for Embedded Real-time Systems
    9. Controlando a Diversidade e a Quantidade de Casos de Teste na Geração Automática a partir de Modelos com Loop
    10. Geração Aleatória de Dados para Programas Orientados a Objetos
  • 40. Conclusions (1): Increasing Dynamism. We must accept that many deployed compositions are in fact untested. As the level of dynamism grows, we need to move from a priori testing to “in vivo” testing: rethink what your test approach might mean in a run-time setting.
  • 41. Conclusions (2): In Vivo / Run Time / Online Testing. Continuous assertion & health checking; active stimuli upon configuration change; learning relevant stimuli from past behavior. First steps in research, little adoption so far. [DevOps is just the beginning.]
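A minimal sketch of what “continuous assertion & health checking” could look like inside a running product, under the assumption that invariants are evaluated periodically and violations are reported to monitoring rather than failing a build. All class and check names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch of in-vivo health checking: invariants run on a schedule in the
// deployed system, and violations are logged instead of thrown.
public class HealthChecker {

    private final Map<String, BooleanSupplier> checks = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void register(String name, BooleanSupplier invariant) {
        checks.put(name, invariant);
    }

    public void start(long periodSeconds) {
        scheduler.scheduleAtFixedRate(this::runOnce, periodSeconds,
                periodSeconds, TimeUnit.SECONDS);
    }

    void runOnce() {
        checks.forEach((name, invariant) -> {
            if (!invariant.getAsBoolean()) {
                // In a real product this would go to monitoring/telemetry.
                System.err.println("health check failed: " + name);
            }
        });
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

Usage would be along the lines of registering a check such as "configuration resolves" or "cache size below limit" at plug-in start-up and calling start() once, so the composition keeps checking itself after deployment.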
  • 42. Conclusions (3): Involving the Tester in Your Research. To understand the problem to begin with (a qualitative study to start the research); to evaluate your results. More painful, … and more rewarding!
  • 43. Summary. A Grounded Theory study is a great way to understand what people are struggling with. Integration testing in plug-in architectures is hard and possibly costly. A cooperative community is invaluable. We must and can strengthen post-deployment testing and reporting.
  • 44. Further Reading. Michaela Greiler, Arie van Deursen & Margaret-Anne Storey. Test Confessions: A Study of Testing Practices for Plug-in Systems. Proceedings of the International Conference on Software Engineering (ICSE 2012), IEEE, 2012. Full report: TUD-SERG-2011-010.