17. choices = {}
counts.each do |word, succ|
choices[word] = succ.inject([]) do |memo, obj|
item, times = obj
memo + ([item] * times)
end
end
# => {'a' => ['ball', 'ball', 'red'],
# 'red' => ['ball'],
# 'ball' => ['a', 'a']}
18. victim = words.first
50.times do
print victim + ' '
candidates = choices[victim] || [words.first]
victim = candidates[rand(candidates.length)]
end
19. What people had four years ago. That's when
an automated script---especially a set of the
programmer's more discerning. In the
Examples This book that your customers (or
someone undoes all of RSpec example for text
into a human ingenuity. So, we're going to refer
to read or missing requirements.
29. module Win32
[['SetWindowsHookEx', 'LKLL', 'L'],
['UnhookWindowsHookEx', 'L', 'I'],
['CallNextHookEx', 'LLLP', 'L'],
['GetMessage', 'PLLL', 'I'],
['TranslateMessage', 'P', 'I'],
['DispatchMessage', 'P', 'L'],
['GetCursorPos', 'P', 'I'],
['GetModuleHandle', 'L', 'L']].each do
|name, sig, ret|
var = '@@' + name.snake_case
api = API.new name, sig, ret, 'user32'
class_variable_set var, api
end
end
30. class String
def snake_case
gsub(/([a-z])([A-Z0-9])/, '\1_\2').downcase
end
end
31. module Win32
WH_MOUSE_LL = 14
WM_LBUTTONDOWN = 0x201
WM_LBUTTONUP = 0x202
class API
def [](*args)
call(*args)
end
end
end
32. class MouseWatcher
include Win32
private
def mouse_hook(code, w, l)
case w
when WM_LBUTTONDOWN then @down = true
when WM_LBUTTONUP then @down = false
else if @down then
point = "0" * 8
@@get_cursor_pos[point]
x, y = point.unpack 'LL'
puts "#{x} #{y}"
end
end
@@call_next_hook_ex[@hook, code, w, l]
end
end
33. class MouseWatcher
def initialize
@down = false
@callback = API::Callback.new('LLP', 'L',
&method(:mouse_hook))
end
end
34. class MouseWatcher
def go
mod = @@get_module_handle[0]
@hook = @@set_windows_hook_ex[
WH_MOUSE_LL, @callback, mod, 0]
msg = "0" * 28
while 0 != @@get_message[msg, 0, 0, 0]
@@translate_message[msg]
@@dispatch_message[msg]
end
rescue Interrupt
@@unhook_windows_hook_ex[@hook]
@hook = nil
end
end
37. require 'rmagick'
include Magick
background = Image.new 640, 480 do
self.background_color = 'black'
end
finger = Image.new 20, 20 do
self.background_color = 'black'
end
41. File.open('mouse.txt') do |f|
f.each_line do |l|
x, y = l.split.map {|s| s.to_i}
background.composite! finger, x, y,
PlusCompositeOp
end
end
background.write 'heatmap.png'
The first leg of the epic, hopefully transcontinental book tour kicked off in my own backyard, at the technical annex of the huge Powell’s Books.
Just in case you were at Ignite Portland that night, here’s a recreation of my talk.
As the name implies, the book will teach you how to introduce automation into your user interface testing efforts, without making a bunch of false promises about its capabilities.
I was going to give the presentation this provocative title, as a signal that I’d be revisiting various points of view in the book and looking at them from a different angle.
But this title conveys the idea much more succinctly. If I can convey an idea from the book, and then talk about the edge cases where it doesn’t apply, then this talk will have a little something both for people who have read the book and those who haven’t.
So, let’s get started. One of my big motivations for writing the book was my dissatisfaction with testing tools that are based on recording a bunch of mouse clicks and then replaying them.
Such tools often generate unreadable code like this. How do you even know where you’re supposed to add the tests?
The problem is that many of these tools capture the wrong things. They dutifully record exact screen coordinates, instead of what the user was trying to do (e.g., delete a document, zap an alien, etc.).
It’s impossible to capture the user’s intent when all we have are keyboard and mouse actions, right? That’s the prevailing wisdom. But then I saw a talk by Dr. Atif Memon, who pretty much knocked all of us out of our chairs.
Instead of capturing raw mouse actions, Dr. Memon proposes to capture relationships between actions—specifically, the fact that one behavior, such as saving a document, frequently follows another.
From a directed graph of actions following actions, Dr. Memon’s team used a random walk to generate test cases. He didn’t say how they chose which nodes to traverse, but I suspect they used a dash of probabilistic techniques, such as the Markov Chains used for Garkov here.
Garkov, by the way, uses a corpus of Garfield captions to generate random, surreal dialogue for the comic strip. Here’s another one from Josh Millard’s site; I couldn’t resist.
Dr. Memon’s technique is a work in progress. But that doesn’t mean you can’t apply probability to your own test cases on a much smaller scale. The technique you’re about to see was inspired by Sammy Larbi’s article above, but with completely different Ruby code.
Let’s say you want to generate random sequences of words, based on the way they occur naturally in this text. For example, the word “red” will always be followed by “ball,” but the word “a” could have “ball” or “red” after it.
First, split the document into words.
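The splitting step is just a whitespace split. Here’s a minimal sketch; the tiny corpus string is my own invention, chosen to match the output comments in item 17 above:

```ruby
# Hypothetical toy corpus; the talk used a whole book chapter.
text = 'a ball a red ball a ball'
words = text.split
# => ["a", "ball", "a", "red", "ball", "a", "ball"]
```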
Next, build a table linking each word to the list of all words that directly follow it, and how frequently they follow it. For example, you can see that “a” has been followed by two words: “ball” twice, and “red” once.
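The counting slide isn’t reproduced above, but the `counts` hash consumed by the code in item 17 might be built like this. The corpus string is an assumption, chosen so the result matches the output comments shown earlier:

```ruby
words = 'a ball a red ball a ball'.split  # hypothetical corpus

# Tally how often each word follows each other word.
counts = Hash.new { |hash, key| hash[key] = Hash.new(0) }
words.each_cons(2) { |word, succ| counts[word][succ] += 1 }
# => {'a' => {'ball' => 2, 'red' => 1},
#     'ball' => {'a' => 2},
#     'red' => {'ball' => 1}}
```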
After all the counting is done, build a list of possible successors for each word. Use repetition to signal that one word has followed another one multiple times. That way, when you pick a random word from the list, you’re more likely to get a common follower than a rare one.
Now you can generate as many random words as you like. Simply grab a word, print it, and then choose a random item from the list of words that are “allowed” to follow it. Repeat until you’ve generated a sufficiently absurd document.
Here’s what the technique looks like when you apply it to the first chapter of my book. There are lots of ways to modify this technique, like looking at sequences longer than just pairs, or considering letters instead of words.
You can also apply it to a test log instead of a sentence. Here’s a real sequence of actions I captured from my computer, and a brand new test script generated from them. As you can see, you really need a longer capture log before these scripts get interesting.
Okay, on to viewpoint #2. In the book, I talk a little bit about test-driven development and behavior-driven development in the context of automation. But there are plenty of times when completely manual tests give you information that you use to shape a product.
For example, when you turn off this touch-screen instrument and shine a light at the right angle...
...you can see fingerprints on the most frequently-used parts of the interface. This is a developer’s machine, and he was comfortable using the on-screen data entry “knob” in the corner. But the machines we showed to customers came back with almost no prints on them.
After getting a brief facelift, the data entry control received a warmer reception from our users’ fingertips.
Even though this test started as a lo-fi, non-automated, analog kind of technique, there’s no reason we can’t go back after the fact and make an automated version. The forum linked above gives a quick intro to capturing mouse and keyboard events using C on Windows platforms.
Of course, this is a talk on Ruby, so let’s apply the techniques in Ruby using the win32 gem. First, there are a handful of Windows functions to define. We could define them one by one, like this.
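The one-at-a-time style might look like the sketch below. It’s Windows-only, so the `require` is guarded to degrade gracefully elsewhere; the prototype strings mirror the table in item 29:

```ruby
# One-by-one bindings via the win32-api gem (Windows-only, so the
# require is guarded for other platforms).
if begin; require 'win32/api'; true; rescue LoadError; false; end
  GetCursorPos = Win32::API.new('GetCursorPos', 'P', 'I', 'user32')
  GetMessage   = Win32::API.new('GetMessage', 'PLLL', 'I', 'user32')
  # ...and so on, one line per function...
else
  warn 'win32-api gem not available; skipping bindings'
end
```

Repeating that boilerplate for every function is exactly the tedium the table-driven version in item 29 avoids.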
But the book gives a technique (you can download the source for free) for automatically assigning Ruby names to the Windows functions.
The technique uses this method of turning Windows-style “CamelCase” names into Ruby-style “snake_case” ones.
Finally, you’ll need to define a few common Windows constants, and a shorthand for invoking the C functions from Ruby.
Here’s the meat of the mouse hook. Windows will call into this Ruby function when there’s mouse activity, passing in enough information to reconstruct exactly what happened. In this case, you’re just going to record the screen coordinates whenever the user drags his finger/mouse.
Here’s the bit of glue that makes the callback magic happen.
And finally, here’s the main loop. After you assign the mouse hook, you just sit in a message loop until you’ve logged all the data you want.
The output is a series of space-delimited x/y pairs, one per line. Each point represents one place where the user has dragged the mouse.
What can we do with that list of coordinates? How about a heat map? The article linked above describes a very detailed technique; I’m going to give a much lighter one that fits the broad scope of this talk.
You’ll need ImageMagick and its Ruby wrapper, RMagick. Start with a screen-sized empty black background, and a small tile to represent the “finger” smudging the screen.
Draw a semi-transparent white circle into the finger tile to represent the fingertip.
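One way to sketch that step with RMagick’s `Draw` API (the exact radius and opacity are my guesses, and the whole thing is guarded in case RMagick isn’t installed):

```ruby
begin
  require 'rmagick'
  include Magick

  finger = Image.new(20, 20) { self.background_color = 'black' }

  dot = Draw.new
  dot.fill 'rgba(255, 255, 255, 0.5)'  # semi-transparent white
  dot.circle 10, 10, 10, 18            # center (10,10), edge point (10,18)
  dot.draw finger
rescue LoadError
  warn 'RMagick not installed; skipping the drawing step'
end
```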
Now, bias the low end of the finger toward blue, and the high end toward pink. There are much cleverer techniques that can get you a rainbow of hot and cool colors, but this simple trick will work well enough for now.
Now, load the screenshot into the picture and dim it quite a bit.
Then for each place the user clicked or dragged, add the fingertip image in. The effect is cumulative; frequently-touched parts of the screen will be brighter.
Here’s what the result looks like after a very short interaction.
And with a pang of guilt for throwing this stuff together practically on the eve of the talk, here are links to all the code you’ve seen.
The book purposefully skirts around the subject of testing philosophy, since that subject is extremely well covered. But I don’t want to leave the impression that the topic is completely exhausted.
Writers, coders, and testers are still challenging old notions and documenting new techniques. So please take tonight’s talk as a call to tinker, explore, and flip old approaches onto their heads.