How to Test Smart Contracts
Last updated: December 13, 2021

⚠️ Disclaimer. We are not a professional security auditing firm. We have merely listed techniques that we have found useful from time to time; use them carefully and at your own judgment.

This document outlines a simple approach for testing smart contracts and how to measure different forms of test coverage.

IMPORTANT. We cannot guarantee that this process will lead to completely verified code, but it is a good starting point for refining your own process, whether you are testing internally or on behalf of clients. There may be a temptation to treat this document as an exhaustive "checklist". Unfortunately, testing is more an art than a science, and each set of contracts requires its own distinct treatment. We hope this document is a helpful reference if you have forgotten to consider one testing approach or another, but it should not constitute the entirety of your testing process.

Objectives of testing

Gary Bernhardt raised three goals for testing in his talk "Fast Test, Slow Test":

  • Prevent regression, i.e., catch changes that break existing behavior
  • Prevent fear, i.e., allow developers to refactor code with confidence
  • Prevent bad design, i.e., better modularity as a side-effect of making code testable.

While these apply in traditional software development, the context for smart contracts is different. First, one should always be fearful when it comes to smart contract development; tests won't change that. Second, security at every release is crucial, and testing often has the direct objective of building confidence in the developed code. Third, it is not always easy to refactor smart contracts for perfect testability because of gas optimization concerns. As a result, testing smart contracts is primarily done for code security purposes.

We can articulate a different set of objectives for smart contract testing:

  • Build confidence in smart contracts at every point in time
  • Maintain all security properties during development
  • Communicate how the code works to facilitate understanding in the team and externally.

Testing vs. security reviews

Since the primary objective of testing is security, there is a lot of overlap with security reviews. In particular, we advocate a trust-model-based approach: testing based on possible attack vectors rather than simply testing the "features" of the code. We encourage the reader to read How to do a Security Review for our breakdown of the trust model and to use it as a guide in testing.
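
As an illustration of attack-vector-oriented testing, here is a minimal sketch of such a test, assuming a Hardhat project with ethers v6 and hardhat-chai-matchers; the Vault contract and its deposit()/withdraw() functions are hypothetical names invented for the example, not part of this guide.

```typescript
// Sketch only: assumes a Hardhat + ethers v6 + hardhat-chai-matchers setup and a
// hypothetical Vault contract with a payable deposit() and a withdraw(uint256).
import { expect } from "chai";
import { ethers } from "hardhat";

describe("Vault: attack-vector oriented test", () => {
  it("reverts when a non-depositor tries to drain the vault", async () => {
    const [depositor, attacker] = await ethers.getSigners();

    const Vault = await ethers.getContractFactory("Vault"); // hypothetical contract
    const vault = await Vault.deploy();
    await vault.waitForDeployment();

    // The depositor funds the vault.
    await vault.connect(depositor).deposit({ value: ethers.parseEther("1") });

    // Attack vector: an account with no balance in the vault attempts a withdrawal.
    await expect(
      vault.connect(attacker).withdraw(ethers.parseEther("1"))
    ).to.be.reverted;
  });
});
```

The point is the framing: the test encodes an attacker capability ("can a stranger withdraw funds they never deposited?") rather than a feature description.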

How to improve testing productivity

Smart contract tests are incredibly slow. In our dedicated guide How to Improve Testing Productivity, we discuss productivity and speed improvements that will ultimately allow teams to consider and incorporate more tests, especially in their continuous integration workflows.

Testing approaches

There are six different testing approaches, based on the level of automation and the nature of the testing involved. Each requires different tools to perform:

  • Automated Testing
  • Automated Test Campaigns
  • Manual Testing
  • Private Network Testing
  • Testnet and Acceptance Testing
  • Mutation Testing

What is test coverage

When testing code, a good question to ask is "how do we know that we haven't done enough testing?". Test Coverage is a metric that can be used to answer this question and to suggest additional tests that could be created.
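
As a toy illustration of the idea, the sketch below (plain TypeScript, no testing framework assumed) computes coverage as the proportion of coverage targets, such as statements, branches, or partitions, that a test suite has exercised; the target names are made up for the example.

```typescript
// Toy sketch: coverage as the proportion of targets (statements, branches,
// partitions, ...) exercised by the test suite. Target names are illustrative.
type CoverageTarget = string;

function coverage(all: Set<CoverageTarget>, exercised: Set<CoverageTarget>): number {
  if (all.size === 0) return 1; // nothing to cover
  let covered = 0;
  for (const target of all) {
    if (exercised.has(target)) covered++;
  }
  return covered / all.size;
}

// Example: a withdraw() function with four branches, three of which were hit.
const allBranches = new Set([
  "withdraw:require-pass",
  "withdraw:require-fail",
  "withdraw:if-true",
  "withdraw:if-false",
]);
const hitBranches = new Set(["withdraw:require-pass", "withdraw:require-fail", "withdraw:if-true"]);

console.log(coverage(allBranches, hitBranches)); // 0.75
```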

Test design

A topic related to coverage is test design. When coming up with a test, there are often meaningful choices about the set of values to test and about the configuration and style of the test. We discuss general considerations in How to Design Test Cases and focus on methods here:

  • Black Box or Specification Based Techniques
  • White Box or Structure-based Techniques
  • Vulnerability Based Techniques

In the testing literature, there is also a set of techniques called "Experience-based" techniques. These are discussed extensively in our How to do a Security Review guide.

Each testing technique comes with its own coverage measurement (where applicable); we provide an overview below.

SPECIFICATION BASED
  • Equivalence Partitioning: Proportion of equivalence partitions that are tested
  • Boundary Value Analysis: Proportion of boundary values that have been exercised by a test
  • Domain Analysis: Proportion of "In and Out" points and "On and Off" points that have been exercised by a test
  • Decision Table Testing: Proportion of the total number of combinations (2^n) tested
  • Cause-Effect Graph Testing: Proportion of possible combinations of inputs tested
  • State Transition Testing: For a given n, proportion of all state transition sequences of length n tested
  • Classification Tree Testing: Proportion of total leaf classes tested
  • Pairwise Testing: Proportion of total pairs tested
  • Use Case and Behavior Testing
  • Model Based Testing: Model-dependent (see Peterson's Lattice)

STRUCTURE BASED
  • Statement Testing: Proportion of executable statements tested
  • Decision/Branch Testing: Proportion of all decision points tested, or proportion of all branches tested
  • Condition Testing: Proportion of condition outcomes in every decision that have been tested. Multiple condition testing: proportion of all possible combinations of outcomes in every decision that have been tested. Condition determination testing: proportion of all branch condition outcomes that independently affect a decision outcome
  • LCSAJ or Loop Testing: Proportion of Linear Code Sequences and Jumps (LCSAJs) that are tested
  • Path Testing: Proportion of paths exercised by test cases
  • Protocol Interface Testing: Proportion of possible external contract calls that are tested

VULNERABILITY BASED
  • Vulnerability Taxonomy Testing: Proportion of listed vulnerabilities used in test design
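
To make two of the specification-based techniques concrete, here is a small sketch of how equivalence partitions and boundary values might be chosen for a hypothetical transfer(amount) call bounded by the caller's balance; the partitions and boundaries are assumptions made for the example, not a prescription.

```typescript
// Sketch only: candidate test values for a hypothetical transfer(amount) call,
// where `balance` is the caller's token balance (assumed to be at least 2 so the
// representatives below are distinct).
const MAX_UINT256 = 2n ** 256n - 1n;

// Equivalence partitions for `amount`:
//   1. amount == 0            (degenerate transfer)
//   2. 0 < amount <= balance  (expected to succeed)
//   3. amount > balance       (expected to revert)
// One representative value per partition:
function partitionRepresentatives(balance: bigint): bigint[] {
  return [0n, balance / 2n, balance + 1n];
}

// Boundary values sit at the edges of those partitions (and of the uint256 type).
function boundaryValues(balance: bigint): bigint[] {
  return [0n, 1n, balance - 1n, balance, balance + 1n, MAX_UINT256];
}

console.log(partitionRepresentatives(1000n)); // [ 0n, 500n, 1001n ]
console.log(boundaryValues(1000n));           // [ 0n, 1n, 999n, 1000n, 1001n, 2^256 - 1 ]
```

Coverage for these two techniques is then the proportion of those partitions and boundary values exercised by at least one test, as listed in the overview above.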
See Also: