CT320 Testing
Philosophy of testing
- Get over it
- Trust, but verify
- Test the typical cases
- Test invalid input
- Test the limits
- Automation
Get over it
- Some programmers don’t enjoy testing. They want to create,
not test. Too bad.
- I don’t enjoy doing dishes, but that’s the price I pay
for using a plate.
- Try to think of it as a step in perfecting your creation.
- Many big companies have QA departments that do the testing. Hooray!
- It’s fun work.
- Still, don’t send them trash. Everything runs more smoothly
if you manage to do a minimum of testing.
- Nearly all programmers have a mental block against testing.
It’s like asking parents to find flaws in their children.
- This is why industry uses non-author testing.
- You just have to deal with it.
Trust, but verify
Sure, whoever programmed this is really smart
(especially if they’re you). It’s probably correct.
- Faith can be a wonderful thing.
- Data is better.
Test the typical cases
- Testing the typical cases is often more of a social thing
than an engineering thing. For non-author testing, if you
can present a typical, non-extreme, case that fails, the
author will take it seriously. “Dude, it doesn’t even work
for this case!”
- If you don’t bother verifying the test cases that your
teacher/programmer gave to you, then you’re an idiot.
Test invalid input
- Arguments
- too many/too few
- bad (e.g., -z)
- duplicate (--high --high)
- conflicting (--high --low)
- Files
- don’t exist (/foo/bogus)
- bad permissions (/etc/shadow)
- contain bad data (/bin/sync)
- Numbers
- integer
- real
- ±0.0
- NaN
- ±∞
- too small
- too big
- too negative
- not a number at all (fish, 3fish); see the sketch below
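To make this concrete, here is a minimal sketch of automating invalid-input checks
(myprog is just a stand-in for whatever command is under test): loop over bad
invocations and flag any that the program wrongly accepts.
#! /bin/bash
# Sketch: every invalid invocation should exit non-zero.
# "myprog" is a placeholder for the command under test.
bad_inputs=(
    "-z"
    "--high --high"
    "--high --low"
    "/foo/bogus"
    "fish"
    "3fish"
)
for args in "${bad_inputs[@]}"; do
    if myprog $args >/dev/null 2>&1; then    # unquoted on purpose: split into words
        echo "FAIL: accepted invalid input: myprog $args"
    fi
done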
Test the limits
- Consider a program that’s a guessing game. You think of
a number 1–100, and it has to guess the number. You tell it
“too high”, “too low”, etc.
- If this program works for 42, it’ll probably work for 56.
- Test it for 1, 100, 0, and 101. Those are the edge cases.
They’re at the boundary between “should work” and “shouldn’t work”.
You want to ensure that a < isn’t accidentally a ≤.
- Also, 50 might be an edge case, for a binary search.
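As a sketch of the boundary idea (in_range here is a hypothetical command that
should accept 1 through 100 and reject everything else):
#! /bin/bash
# Hypothetical "in_range": accepts 1..100, rejects everything else.
for n in 1 100; do                    # just inside the boundary: should pass
    in_range "$n" || echo "FAIL: rejected valid value $n"
done
for n in 0 101; do                    # just outside the boundary: should fail
    in_range "$n" && echo "FAIL: accepted invalid value $n"
done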
Case study
Consider a command, zot, which takes a number of files as arguments,
and an option which determines which files should be output.
Think of it as a conditional cat.
zot [-f first-last] file ...
where first and last are optional inclusive ordinal file indices.
zot -f 2-3 alpha beta gamma delta is equivalent to cat beta gamma.
zot iota is equivalent to cat iota.
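To pin down the semantics, here is a rough sketch of what zot might look like
(not the real implementation; it handles only the separated "-f first-last"
form and does no error checking):
#! /bin/bash
# Rough sketch of the zot spec: cat only the files whose 1-based
# position falls in first..last (both ends optional, both inclusive).
first=1
last=1000000
if [ "$1" = "-f" ]; then
    range=$2
    first=${range%-*}; first=${first:-1}
    last=${range#*-};  last=${last:-1000000}
    shift 2
fi
i=0
for f in "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$first" ] && [ "$i" -le "$last" ]; then
        cat "$f"
    fi
done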
How do we test this command?
Test cases (valid cases)
zot alpha
zot alpha beta gamma delta
zot -f2-3 alpha beta gamma delta
zot -f 2-3 alpha beta gamma delta
zot -f 2-2 alpha beta gamma delta
zot -f 2- alpha beta
zot -f -2 alpha beta gamma
Test cases (invalid cases)
zot
zot -fx alpha
zot -f3 alpha
zot -f1 -f2 alpha beta
zot -f0-1 alpha
zot -f1-2 alpha
zot -f2-1 alpha beta gamma
zot -q alpha
zot badfile
Automation
- Poor testers test manually. When the code changes, they test again.
- Well, some of the tests.
- If they remember what the tests are.
- And have the energy.
- Good testers write an automated test suite (as simple as a shell script).
They run the test suite after every code change.
#! /bin/bash
# Run the whole suite, capturing stdout and stderr together.
(
zot -fx alpha
zot -f3 alpha
zot -f1 -f2 alpha beta
…
) >& out
# An empty diff means every test still produces the expected output.
diff out known-good-output
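One possible refinement (a sketch only, building on the script above): record
each command's exit status alongside its output, so a zot that silently accepts
bad input still shows up in the diff. The first run is inspected by hand and,
once it looks right, saved as known-good-output; after that, an empty diff means
the suite passed.
#! /bin/bash
# Sketch: capture output and exit status for each invalid invocation.
(
for args in "-fx alpha" "-f3 alpha" "-f1 -f2 alpha beta"; do
    echo "== zot $args"
    zot $args                  # unquoted on purpose: split into words
    echo "exit status: $?"
done
) >& out
diff out known-good-output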