QA Best Practices
1. QA Best Practices
QA folk: how to interview them, work with them, and be one of them.
Lessons learned over years of coding, testing, and coding tests.
2. About Robert Yates
1. CS degree from UMBC
2. Certified Scrum Master
3. 9+ years testing
4. Started in .Net, moved to Java & OSS
5. Interested in everything computery
6. System Architecture
1. Know it better than anyone except the architect
2. Know the points of failure
3. Determine which component a bug is in
4. First thing to do when coming to a new system
5. Full stack QA
6. Great interview question for QA
7. Think from all viewpoints
1. Developer
2. Customer
3. Product Owner
4. Designer (and a little customer again)
5. IT/NOC & Release Engineer
6. Jack of all trades, king of one
8. Embrace and understand your CI
1. Tests should run in CI
2. Build/Understand the jobs
3. It's just more automation!
4. Rube Goldberg machines are fun!
11. Make it easy to test across a variety of platforms
1. The most popular platforms
2. Then the most problematic platforms
3. If possible, run auto tests cross platform
4. Have platforms available to you easily and quickly
12. Review every commit you test
1. Find bugs before you build
2. Prevent future bugs by keeping quality high
3. Prevent FUD
4. See which parts changed
5. Standard reasons for code review
6. Have test code reviewed too!
13. There's a difference between assuring quality and being obstructionist
1. Your view vs. others
2. Convey severity and risks → others’ decision
3. Minor issues can pass temporarily so progress continues
4. Don't let the perfect be the enemy of the good
5. Varies per target market
15. If You See Something, Enter Something
1. Search for existing bugs first
2. Enter it even if it's only seen once
3. Seemingly unrelated bugs may turn out to be related
16. Bug fields - what to write/expect (pt. 1)
1. Summary/Description
2. Reproducibility
3. Impact on user
4. Workaround or temporary fix?
17. Bug fields - what to write/expect (pt. 2)
1. How to reproduce
a. Environment conditions
b. Steps
c. Current result
d. Expected result
2. System environment the bug was seen in
3. Platform
18. Relevant Attachments - Always
1. Screenshots
2. Videos
3. Logs
4. Do so even if a bug seems reliably reproducible
5. Lots of tools to do these things
19. Write as much detail as possible in issue comments
1. What you did, results
2. Put in the build version/commit
3. Can't say how many times that has helped me
20. Add auto tests for each bug when possible
1. Ensure it reproduces the bug
2. Helps understand the issue better
3. Builds confidence in your suite
4. Mention bug # in the comments for the test method
5. Conversely, put test name in the bug
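As a sketch of points 4 and 5, a regression test can carry the bug ID in a comment while the bug carries the test name. Everything here is hypothetical (the split helper, the BUG-1234 ID); in real JUnit the check would be an @Test method:

```java
import java.util.ArrayList;
import java.util.List;

public class MessageSplitterRegressionTest {

    // Hypothetical unit under test: splits a message into chunks of at most n chars.
    static List<String> split(String msg, int n) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < msg.length(); i += n) {
            out.add(msg.substring(i, Math.min(i + n, msg.length())));
        }
        return out;
    }

    // Regression test for BUG-1234 (hypothetical): splitter dropped the final
    // short chunk. The bug report should also name this test, so the link
    // works in both directions.
    public static void main(String[] args) {
        List<String> chunks = split("abcde", 2);
        if (chunks.size() != 3) throw new AssertionError("BUG-1234 regressed: last chunk dropped");
        if (!chunks.get(2).equals("e")) throw new AssertionError("BUG-1234 regressed: wrong final chunk");
        System.out.println("BUG-1234 regression test passed");
    }
}
```

Written this way, the test fails on the old build and passes on the fixed one, which is exactly the confidence point 3 describes.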
23. Decide your approach
Different resources and subject matter call for different approaches
1. Unit testing only
2. Documentation as tests
3. External test project
24. Research the tools available
1. Lots of tools for everything imaginable
2. Look at strengths and weaknesses for your case
3. Each tool often has its own conventions
4. Ensure the tool can fit into your CI
5. If OSS, check for active development
25. Keep tests hermetic
1. Spin up new environments per test run
2. Isolate other environment variables if possible
3. Ensure one test doesn't affect another
4. If doing full end-to-end, use APIs to do setup
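A minimal sketch of hermetic setup, assuming a provisioning API you can call per test (the FakeApi class and its methods are stand-ins, not a real library): every run creates its own user and tears it down, so no test observes state left behind by another.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class HermeticSetupSketch {

    // Stand-in for whatever provisioning API your system exposes.
    static class FakeApi {
        final Map<String, String> users = new HashMap<>();

        String createUser() {
            String id = "user-" + UUID.randomUUID(); // unique per call
            users.put(id, "active");
            return id;
        }

        void deleteUser(String id) { users.remove(id); }
    }

    public static void main(String[] args) {
        FakeApi api = new FakeApi();

        // Each "test" provisions and tears down its own user (in JUnit this
        // would live in @Before/@After), so runs never share test data.
        String userA = api.createUser();
        String userB = api.createUser();
        if (userA.equals(userB)) throw new AssertionError("tests must not share users");

        api.deleteUser(userA);
        api.deleteUser(userB);
        if (!api.users.isEmpty()) throw new AssertionError("teardown left state behind");
        System.out.println("hermetic setup sketch passed");
    }
}
```

The per-test setup costs a little extra time, but it removes a whole class of "passes alone, fails in the suite" mysteries.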
26. Keep test methods narrow
1. Don't test a dozen cases in one method
2. Faster to determine underlying cause
3. Early failures can mask other issues
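The narrow-method idea can be sketched like this (the validators are hypothetical units under test; in JUnit each check would be its own @Test method). With one rule per method, a failure names the exact rule, and a broken email check can't mask a broken PIN check:

```java
public class NarrowTestSketch {

    // Hypothetical units under test.
    static boolean validEmail(String s) { return s.contains("@"); }
    static boolean validPin(String s)   { return s.matches("\\d{4}"); }

    // Narrow: one rule per method. If this fails, you know exactly which
    // rule broke without rerunning or reading assertion output.
    static void emailMustContainAtSign() {
        if (validEmail("no-at-sign")) throw new AssertionError("emailMustContainAtSign");
    }

    static void pinMustBeFourDigits() {
        if (validPin("12a4")) throw new AssertionError("pinMustBeFourDigits");
    }

    public static void main(String[] args) {
        // A broad test would bundle both checks into one method; then an
        // early email failure hides the pin result until the next run.
        emailMustContainAtSign();
        pinMustBeFourDigits();
        System.out.println("narrow tests passed");
    }
}
```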
27. Same code quality issues as all other code
1. Keep it clean
2. Keep it DRY
3. Others have gone into way more detail on this
4. Ensure it gets reviewed too
28. Useful patterns + code samples
1. Test behavior, not pixel perfect UI
2. Don’t estimate manual + auto separately
3. Share common test prep
4. Test matrices - lots of coverage and code reuse
5. Negative tests - a few common ways to write them
29. JUnit Parameterized Test Example

@RunWith(Parameterized.class)
public class HomeScreenFloatingActionButtonTests extends TestCase {

    private HomePage homePage;
    private FabPage fabPage;

    public HomeScreenFloatingActionButtonTests(HomePage homePage, FabPage fabPage) {
        this.homePage = homePage;
        this.fabPage = fabPage;
    }

    @Parameters
    public static Collection<Object[]> generateData() {
        ArrayList<Object[]> tests = new ArrayList<>();
        for (HomePage homePage : HomePage.values()) {
            for (FabPage fabPage : FabPage.values()) {
                tests.add(new Object[]{homePage, fabPage});
            }
        }
        return tests;
    }

    @Test
    public void testFab() {
        User user = UserFactory.createUser(api, orgID);
        BacklinePage blpage = login(user, serverName);
        switch (homePage) { // go to initial home page
            case ONEONONE:
                blpage = blpage.openNavDrawer().goToOneOnOne();
                break;
            case GROUP:
                blpage = blpage.openNavDrawer().goToGroups();
                break;
        }
        switch (fabPage) { // go to fab page
            case USER:
                blpage.goToUserSearch();
                break;
            case GROUP:
                blpage.goToGroupSearch();
                break;
        }
    }
}
30. JUnit Negative Test Examples - method expects
@Test(expected = TimeoutException.class)
public void loginWithBadPassword() {
User user = UserFactory.createUser(api, orgID);
// next line should throw exception
login(user.getEmail(), user.getPassword() + "z", user.getPin(), serverName);
}
31. JUnit Negative Test Examples - rule expects
@Rule public ExpectedException thrown = ExpectedException.none();
[...]
@Test
public void loginWithBadPin() {
[...]
LoginPage login = new LoginPage(driver);
PinPage pinPage = login.loginAs(email, password);
thrown.expect(AssertionError.class);
pinPage.enterPin("9999");
// end of method -- nothing beyond the line above will be executed
}
32. JUnit Negative Test Examples - try/catch
@Test
public void testOneOnOneChatWithInactiveUser() {
[...]
OneOnOneChatPage chat = oneOnOnePage.goToUserChat(userB.getLastName());
try {
chat.sendMessage("ShouldNotSend"); // should throw NoSuchElementException
fail("User was able to send message to inactive user.");
} catch (NoSuchElementException e) {
// swallow expected exception
}
// more code can be executed after this
}
34. Write an app for the platform you’re testing
1. Understand general concepts + capabilities
2. Informs many parts of testing
3. Learn platform tools
35. Be Scrum Master
1. Gives perspective on all components
2. Makes sense to have overall pulse on the project
3. Discipline naturally translates to running meetings
4. Rotate Scrum Masters
39. GTAC - Google Test Automation Conference
1. At least be aware of, great resource for new
developments in the field
2. Find out about new tools and techniques
3. Streamed free!
4. Presentations on YouTube after the conference
40. Various Links!
1. Google Testing Blog
2. Agile Testing by Crispin and Gregory
3. OWASP Testing Guide
4. Security Now podcast
5. GTAC
41. Thank You and Q&A
Contact Info:
robert.k.yates@gmail.com
https://www.linkedin.com/in/robert-yates-5441b061
Editor's Notes
4. Currently automating an Android app using Appium in Java with JUnit, run in Gradle
4.a. Also worked primarily in Windows before, then moved to *nix environments with OSX as my primary development machine, now only using the Windows box as a souped-up gaming console
5. Including…
Coding
Testing
Security
Gaming
Building computers
Android
Ask to interrupt with questions/comments/stories/rants .. ok maybe not rants
Ask for show of hands people in which roles
Dev
Qa
Build/release eng
Management
Product?
Testing is the Dark Souls of software development: it requires a great deal of patience and awareness
https://www.flickr.com/photos/15539352@N02/9354155815/in/album-72157634774669900/
1. Draw lots of diagrams! If there isn’t one, make one.
5. Once you know how all the components work and interact, you'll be able to test all of them
6. Ask to explain + diagram the architecture of a system they’ve tested. Gives a good idea of whether they think about the systems they test as a whole instead of just a cookie cutter part.
Dev
code review
know possibilities
own code
Customer
Severity
impact
PO - prioritize
UX and flow
Manage environment + CI jobs, see next slide for more details
Not your desktop, though obviously you develop and experiment there
So you can fix jobs or at least to better understand the reasons they could fail
Specifically, it’s automation of formerly manual processes
Ex: try to login with bad credentials
Ex: special characters, long strings
Break or turn off one component, ensure others fail gracefully
AWS status dashboard showed all green when S3 and other things were down.. turned out the status dashboard depended on the very things it was supposed to monitor! Feb 28 2017
Find out desired performance at load levels
Determine an average level of load and check at that as your baseline
Lower, higher, much higher
Load beyond its breaking point, see how it breaks
Crash?
Just slow?
Could be multiple other presentations on just this. Some high-level examples:
Protocols
Sql injection
Authentication
Authorization
See OWASP
Does UX and flow make sense? Intuitive?
Most users shouldn’t have to RTFM
XKCD Conditionals http://xkcd.com/1652/
Look at both overall market and your target market. Examples:
Android is highest mobile platform, mostly Samsung, but your target market is mostly iOS users
Chrome is the most popular browser, but your target market has to use IE for some FSM-forsaken reason
E.g. older versions of IE or devices with lower hardware spec or older OS
Usually requires a fair deal of tweaking, but can be worth it
Examples:
Use Synergy to share kb+m+cb between OSX, Windows, and others if desired
Have lots of Android devices of all major manufacturers and screen sizes
Have various VMs or containers to spin up
Found many bugs in app or DB code before building/running
High quality means less chance for future devs to misunderstand purpose of code
Fear, Uncertainty, and Doubt are bad and make devs hesitant to modify sections of code they might need to change
Good to know which things changed in order to see what to test. Ideally, this could be explained, but people often forget parts
Others have said the reasons for code review far better than I could
QA is not immune!
Others may see reasons a test may not make sense or should be altered
You may think an issue is unacceptable, but it may not be as bad to the customer
Once others, like product, understand an issue, it’s their job to determine if it’s worth it to push out with a known issue and fix it later
May be worse for the company to not push anything for another few days
E.g. small code style issues, low-severity bugs that very few customers would see
Enter as issues for a future release so they’re not forgotten
A bit cliché, I know, but it’s a good point
Markets like defense or aerospace would be much more bug-averse
Mention NASA story - “that would never happen” - a known bug nearly killed a few astronauts
http://www.wired.com/2015/10/margaret-hamilton-nasa-apollo/#slide-1
Guidelines to follow if you’re a tester or request from your testers if you’re a different role
On that note, ensure your titles are easy to search for (e.g. use keywords)
It might happen more than once but people forget between, or multiple people see it but blow it off
High-level
How many times has it been seen? Out of how many attempts? Does it always happen?
By multiple people?
Will the user be frustrated? Confused? Blocked from using some functionality?
Can the user work around the issue? Is it realistic that a user would attempt to do so?
Gotta have it
E.g. need specific users and chats already existing
Step-by-step, write for an average person who understands the system, as the dev is not the only one who might see this
E.g. Dev, QA, prod
E.g. Browser, phone model
If you can, highlight or circle relevant areas of the screen
Ensure it’s obvious what you’re doing, don’t go too fast that no one can follow
Can do things like using cursor to circle important areas of the screen
From whatever components are relevant
Can be reliable one day but not the next due to factors not yet realized
Lots of tools for all kinds of systems
JIRA plugin
Jing
Built-in OS functions for screenshots
Android Studio lets you take videos
Again, this all applies to all roles but especially testers and developers.
Can’t tell you how many times this has helped me to know at least a rough procedure and result
If you’re a dev trying to reproduce an issue, don’t just say “can’t reproduce”, state what you did and what you saw, what test users/data/whatever you used
Do so when entering, editing, or closing bugs
Therefore need to ensure that the specific build commit hash is in your version string (in development builds)
Often come back to issues later if it gets reopened or looked at for whatever reason, infinitely more helpful than closing something unceremoniously
Therefore fails in the old build and passes in the new build
Many times, I’ve tried to automate an issue and found that there were extra factors I didn’t realize when I reproduced it manually
Also ensures that, for example, future merges or refactorings don't leave it out
In case you need to refer back to later
Helps to have the info available both directions
Ideally in the Reproducibility field
Boss Robot Poster (yes, I have one) http://store.valvesoftware.com/product.php?i=P2215
Describe each level of the pyramid and why
Different for every project
More suited to APIs, but more and more GUI apps can do this too
Can do a mix, with UI unit tests (e.g. Espresso on Android)
E.g. Cucumber, Fitnesse
Often used when non-technical resources are much more abundant than technical
For black-box, full end-to-end
Written from scratch or record/playback?
Recordings are faster to get started and don't require as much technical staff, but they’re much more brittle and require a lot more maintenance
Almost impossible to enumerate the number of web auto testing tools/frameworks/whatever
How your specific case works may determine which tool you use
Framework the UI was made with can change things, e.g. Angular -> protractor
If your org already has a large infrastructure around a tool, may be better to use something that hooks into that
Your workflows may work better in one tool or another
Makes it much easier for others to jump in
E.g. Selenium has the PageObject design pattern
PageObject pattern can work for lots of other things, but try to defer to the tool’s conventions if they conflict
Does a plugin exist? If so, does it do what you need? If not can you make/modify an existing one?
Are the reports good and easily readable?
Perhaps even more so than typical OSS tools, testing tools need a community actively developing them: a) this is what you’re relying on to ensure your software works correctly, so it had better work correctly itself, and b) things like UI automation already have some jank built in, which is worse if it won’t get fixed
Goes back to embracing CI
Mocking APIs and such can be great for this; just ensure you keep them up to date
e.g. Don't use the same test users between tests
Don't just use your other UI methods
Faster and more isolated
Test for feature B could fail because feature A (used to set up B) was broken
Bleeds into next slide
Can be tempting if there’s shared setup
You can extract that setup out to another method to reduce code repetition
Assuming it applies to just a few and therefore shouldn’t be put into the @before method
Doing so does mean that the setup gets repeated, costing more time, but in the end, computer time is cheaper than yours
Seen some people separate these out by creating multiple small scopes within a method. This is an indication these should all be separate methods!
If the test method name is all it tests, simply seeing the failure means you know exactly what failed--not just “something in this general area of functionality”.
Those other issues would require repeated test runs to see
Wastes time, especially if you run the whole suite each time
Basically as previously stated.
Relying heavily on specific coordinates or areas of the UI will lead to brittle tests. Use identifiers instead of clicking by images or locations on the screen.
Providing separate estimates/tasks for auto just makes it easier for others to say “oh well we’ll just do the auto later, we need to get it out now” and then the auto task rots forever
Can be ok in rare situations but try to get assurance that the task won’t rot
Increase code reuse
Before test class, e.g. JUnit @BeforeClass method
Before each test method, e.g. JUnit @Before method
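Those two hooks can be sketched in plain Java (fixture names are hypothetical; in real JUnit the beforeClass/before methods below would be annotated @BeforeClass and @Before):

```java
import java.util.ArrayList;
import java.util.List;

public class SharedSetupSketch {

    static List<String> serverLog = new ArrayList<>();
    static String conn;   // shared, expensive resource (@BeforeClass analogue)
    String testUser;      // fresh per test (@Before analogue)

    static void beforeClass() {            // runs once for the class
        conn = "connection-1";
        serverLog.add("connect");
    }

    void before(int testNumber) {          // runs before each test method
        testUser = "user-" + testNumber;
        serverLog.add("create " + testUser);
    }

    void testOne() { if (!testUser.equals("user-1")) throw new AssertionError(); }
    void testTwo() { if (!testUser.equals("user-2")) throw new AssertionError(); }

    public static void main(String[] args) {
        beforeClass();                      // expensive setup happens once
        SharedSetupSketch t = new SharedSetupSketch();
        t.before(1); t.testOne();           // cheap per-test prep, then test
        t.before(2); t.testTwo();
        if (serverLog.size() != 3) throw new AssertionError("connect should run once");
        System.out.println("shared setup sketch passed");
    }
}
```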
Probably already have test matrices in spreadsheets or some such
Requires some tweaking to figure out how the flows intertwine, but the code reuse is often worth it
Sometimes get 20+ tests from one method
e.g. JUnit -> Parameterized test class (example on next slide)
Another great interview question for QA!
Test method expects an exception - not ideal, passes if exception occurs ANYWHERE in it
Expect exception on next line - much better, good enough for most cases, but method ends
Catch expected exception - most control, can keep going after, kind of ugly
Run with parameterized class
Use enums or other lists to setup the collection of object arrays
Just 1 test method
Switch between values for the parameters
Doesn’t have to be the most complex app, at least a basic one. For example, I wrote a basic android app for my own use
At least at a high level. For example, I knew a little about Android lists, cursors, activity life cycle, and information privacy to the app.
Examples:
UI conventions
Troubleshooting
Code review
Feature discussion
The above examples I learned
E.g. Android Studio has lots of performance metrics for apps
Goes back to being full stack QA
Knowing how each component is changing can give insight into where new issues arise
E.g. if backend is changing and a test starts failing without the app changing in that area, check backend
This includes keeping people on task and following up
I haven’t done personally but plan on implementing or at least discussing
Can give others an appreciation for other components
As mentioned earlier
Doesn’t have to be anything totally ground-breaking, but personal projects always show a lot of motivation and that software development is more than just a 9-5
Usually do fizzbuzz, modifications
Nothing fancy, checking for some basic joins (inner, outer, etc.)
Choose a feature that had a lot more test scenarios or use cases than you thought, describe the feature and ask for the candidate to come up with as many scenarios as they can
Ensure this contains both positive and negative, and see if they go into more of the non-functional test types mentioned earlier
I went in 2015, great experience, got to see lots of people in the field and discuss approaches