Cem Kaner's abstract

The Challenges of Educating Testers on Security (and Security People on Testing)

Cem Kaner

When I served on the United States Election Assistance Commission’s Technical Guidelines Development Committee, my primary responsibilities involved helping draft guidelines for certification of electronic voting machines. (See, for example, http://www.kaner.com/pdfs/commentsOnVVSSubmittedToEAC.pdf.)

The revisions of the voting systems guidelines that I reviewed all called for a testing phase in which security experts would do exploratory testing for security issues. In contrast, there was no provision for any exploratory testing by testers. Testing by testers was routinized and, in my opinion, trivialized. As far as I could tell, any time I suggested that skilled testers could find a lot of interesting stuff if they were allowed to look, those suggestions fell on ears that were deaf, incredulous, or motivated to keep the cost, scope, and results of testing predictable.
Why should security testing be exploratory while testing associated with all other quality characteristics is pre-announced, repetitious, and routine? The answer I often got was (to quote Richard’s title):
“Security Testing != Testing.”

With that came a belief that testers couldn’t make good use of the freedom that we seem to take for granted in security work.
This bias continues to appear in graduate degree programs in information security. My understanding is that most (or all?) of these degree programs have no courses on security-related testing.

Let me suggest that there is a continuum of need:

  • We need some number of security gurus who have deep knowledge of system internals and a deep technical understanding of risks. But the supply of these people is limited, and it will stay limited.
  • Is there a place for people who have less knowledge (much more than zero), less technical skill (much more than zero), but a creative, skeptical, critical mindset? Are there weaknesses in systems that such people could hunt effectively?

There are drones in testing who can’t do anything without a script or who can’t design any test without a detailed specification. But there are drones in security too, people who latch on to penetration testing tools or “best practices” or standards that they don’t understand.

We’ll probably discuss cognitive psychology throughout the workshop, so I’ll skip it here. Instead, in this talk, I want to focus on techniques.

High Volume Automated Testing

In particular, I want to highlight a collection of testing techniques that are like fuzzing in the sense that they hammer the program with long sequences of tests that are created algorithmically. However, where I come from, fuzzing is “dumb monkey” testing. Dumb monkeys are useful (this is not a derogatory term), but they have no oracle, no underlying insight into the program under test. Many types of intermittent failures that are hard to replicate with traditional methods may be attacked more effectively with smart monkeys. One of the interesting questions in practical testing is how to create smarter monkeys.

I’ll sketch this area because I think it exposes a style of testing (intensely automated exploratory testing) that involves many of the skills that I suspect are involved in some skilled security testing. It also highlights techniques that are not dependent on deep understanding of operating system internals.
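
To make the dumb/smart distinction concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the function under test, its deliberate bug, and the monkey names); it is not drawn from any particular tool. The dumb monkey notices only crashes; the smart monkey runs the same random hammering but carries a simple reference oracle, so wrong answers become visible.

    import random

    MAX_VALUE = 2**31 - 1

    def saturating_add(a, b):
        # Deliberately buggy toy implementation, invented for this sketch:
        # on overflow it wraps around instead of saturating at MAX_VALUE.
        total = a + b
        if total > MAX_VALUE:
            return total - 2**32
        return total

    def dumb_monkey(trials=10_000):
        # Dumb monkey: hammer the function with random inputs, notice only crashes.
        for _ in range(trials):
            a, b = random.randint(0, MAX_VALUE), random.randint(0, MAX_VALUE)
            try:
                saturating_add(a, b)  # no oracle, so a wrong answer passes silently
            except Exception as exc:
                print(f"crash on ({a}, {b}): {exc!r}")

    def smart_monkey(trials=10_000):
        # Smart monkey: the same random hammering, plus a simple reference oracle.
        failures = []
        for _ in range(trials):
            a, b = random.randint(0, MAX_VALUE), random.randint(0, MAX_VALUE)
            expected = min(a + b, MAX_VALUE)  # oracle: what the result should be
            if saturating_add(a, b) != expected:
                failures.append((a, b))
        return failures

    if __name__ == "__main__":
        dumb_monkey()
        print(f"smart monkey caught {len(smart_monkey())} wrong answers")

In real high-volume work the oracle is usually weaker than a full reference calculation (an invariant, a consistency check between runs, a comparison against an older version), but the structure is the same: generate tests algorithmically, run a great many of them, and give the monkey some way to recognize trouble beyond a crash.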

Testing != Debugging

Hunting bugs doesn’t necessarily entail fixing them. Or knowing how to fix them. Or understanding the system internals that make them possible.

I wonder what other techniques we could teach that could be adapted to security-related testing?

Risk-Based Testing

In risk-based testing, we imagine how the software could fail, then we hunt for ways to trigger failures of those types. We can imagine many fundamentally different types of risk: functional risks, performance risks, real-time-related risks, risks of failures or misrepresentations that lead to litigation, risks to market impact, and more.

I suspect that we, here at WTST, can imagine security-related risks too. I think there’s a big literature on this (I don’t claim to have read it). If that’s true, I suspect that we can imagine ways to train testers to hunt for some of the bugs those risks point to:

  • Not to create risk-inspired test scripts that become obsolete almost immediately.
  • To create instances from a family of tests (e.g., one risk inspires one family), and to create the specific instances that are particularly relevant now (when they are created or used), based on what else we have learned about the program and its risks. (A rough sketch of this idea follows below.)
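
Here is a rough sketch, in Python, of what “one risk inspires one family” might look like. Everything in it is hypothetical: the imagined risk (the application mishandles hostile text in a name field), the sample fragments, and submit_name, a stand-in for however the program under test actually accepts the field.

    import random
    import string

    def risk_hostile_name_field(count=50, max_len=5000):
        # One imagined risk inspires one family. Each run yields fresh, concrete
        # instances instead of a fixed script that goes stale.
        hostile_fragments = [
            "'; DROP TABLE users; --",    # injection-flavored text
            "<script>alert(1)</script>",  # markup that should be neutralized
            "\x00\x07\x1b[2J",            # control characters
            "A" * max_len,                # over-long input
            "名前" * 100,                  # repeated non-ASCII text
        ]
        for _ in range(count):
            base = random.choice(hostile_fragments)
            cut = random.randint(1, len(base))
            # Mutate each instance a little so every run explores new territory.
            yield base[:cut] + random.choice(string.printable)

    def run_family(submit_name):
        # submit_name is whatever hook the tester has into the field under test.
        failures = []
        for candidate in risk_hostile_name_field():
            try:
                if submit_name(candidate) is None:  # weak oracle: it must answer
                    failures.append((candidate, "no response"))
            except Exception as exc:
                failures.append((candidate, repr(exc)))
        return failures

Someone generating and running tests from a family like this needs a skeptical imagination about how the field could fail, and some programming skill, but not deep knowledge of system internals.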

How much of this work could be done by someone who is not a security guru? How much training would such a person need? Which parts of that training should be supplied in the security testing course?

What Other Perspectives Could Lead to Useful Training?