Unit Testing File I/O

A somewhat tricky question when it comes to unit testing is how to test components involving file I/O. Recall that a good unit test should run fast and be independent of its execution environment. A test involving file I/O inevitably breaks these rules, which is why some people don’t even consider such a test a unit test.

Whatever your definition of a unit test is: There is a need for testing I/O components. Almost every moderately complex software system will involve some form of file I/O, e.g., for data exchange, serialization, or debugging. In other words: If you take software testing seriously, you’ll probably need to deal with testing I/O components at some point.

Testing Strategies

In the following, I’ll briefly outline some testing strategies that I’ve come across so far and highlight their strengths and weaknesses. The main use case I’ve got in mind is the Polygon Mesh Processing Library, in which we support numerous mesh file formats for reading and writing. Therefore, my selection of strategies might be a bit biased towards this use case.

Checksums

The basic idea is to check the output files (or input objects) for equality using previously computed checksums. This is probably one of the simplest tests to implement. It basically only requires some baseline data and a checksum function, and it will ensure correctness for known inputs or outputs. However, the simplicity comes at a cost. The resulting test will be highly fragile, i.e., the slightest change in the code or the input data requires a baseline update. This in turn can drastically slow down your development speed when using test-driven development.
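As a rough illustration, here is a minimal sketch of such a test: it hashes the raw bytes of an output file (using a simple FNV-1a hash) and compares the result against a previously recorded baseline. The file name and the baseline value are made up for illustration; a real test would record the checksum of a known-good output.

#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

// FNV-1a hash over the raw bytes of a file
std::uint64_t file_checksum(const std::string& filename)
{
    std::ifstream ifs(filename, std::ios::binary);
    std::uint64_t hash = 0xcbf29ce484222325ull;
    char c;
    while (ifs.get(c))
    {
        hash ^= static_cast<unsigned char>(c);
        hash *= 0x100000001b3ull;
    }
    return hash;
}

int main()
{
    // file name and baseline value are placeholders
    const std::uint64_t baseline = 0x0123456789abcdefull;
    if (file_checksum("/tmp/test_output.off") != baseline)
        std::cerr << "checksum mismatch, baseline update needed?" << std::endl;
    return 0;
}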

Round-Trip

A slightly more advanced strategy is to do a round-trip: Generate some test data, write it to a file, read the contents back in, and make sure that the resulting object is (approximately) identical to the baseline. This approach is still rather simple and straightforward to implement. It also gives you much more flexibility in how strict you want your comparisons to be. However, it also requires code for both reading and writing a particular format, which might not always be feasible.
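To make this a bit more tangible, here is a self-contained sketch of a round-trip test for a made-up point format. The write_points() and read_points() helpers are invented for illustration; in practice you would call your actual export and import routines and compare the relevant mesh properties instead.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <sstream>
#include <vector>

struct Point { double x, y, z; };

// write points as plain "x y z" lines (illustrative format)
void write_points(std::ostream& os, const std::vector<Point>& points)
{
    for (const auto& p : points)
        os << p.x << " " << p.y << " " << p.z << "\n";
}

// read points back from the same format
std::vector<Point> read_points(std::istream& is)
{
    std::vector<Point> points;
    Point p;
    while (is >> p.x >> p.y >> p.z)
        points.push_back(p);
    return points;
}

int main()
{
    const std::vector<Point> original = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};

    // round-trip through an in-memory stream; a file would work as well
    std::stringstream buffer;
    write_points(buffer, original);
    const auto restored = read_points(buffer);

    // compare approximately, not bit-exactly
    bool ok = (restored.size() == original.size());
    for (std::size_t i = 0; ok && i < original.size(); ++i)
        ok = std::abs(restored[i].x - original[i].x) < 1e-9 &&
             std::abs(restored[i].y - original[i].y) < 1e-9 &&
             std::abs(restored[i].z - original[i].z) < 1e-9;

    if (!ok)
        std::cerr << "round-trip test failed" << std::endl;
    return 0;
}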

Formal Verification

An even more sophisticated approach would be to parse the file and verify that it conforms to a formal specification of the file format, e.g., by using a formal grammar. This strategy can provide a high level of correctness and comprehensive test coverage. At the same time, the implementation effort might be considerable, and in case of missing or incomplete specifications it might be difficult, if not impossible, to realize.
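A full grammar-based validator is beyond the scope of a short example, but the following sketch shows the basic direction: a hand-written check that the header of an OFF file conforms to a small part of the format specification. The file name is a placeholder, and a serious conformance test would of course cover the complete file, not just the header.

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// check that the file starts with the OFF keyword followed by
// non-negative vertex, face, and edge counts
bool check_off_header(const std::string& filename)
{
    std::ifstream ifs(filename);
    std::string line;

    if (!std::getline(ifs, line) || line != "OFF")
        return false;

    if (!std::getline(ifs, line))
        return false;

    std::istringstream iss(line);
    long n_vertices, n_faces, n_edges;
    if (!(iss >> n_vertices >> n_faces >> n_edges))
        return false;

    return n_vertices >= 0 && n_faces >= 0 && n_edges >= 0;
}

int main()
{
    if (!check_off_header("/tmp/test_output.off"))
        std::cerr << "file does not conform to the expected OFF header" << std::endl;
    return 0;
}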

Image-Based Comparisons

Sometimes, a picture is worth a thousand words. The same applies to a rendered image of your data. Similar to the checksum-based method mentioned above, you can use image-based comparisons to test for strict equality. In addition, this approach opens up the possibility to test within a certain tolerance. It can also help to spot visual artifacts that are not easily captured by a numeric characteristic. A main drawback of this strategy is that it requires a certain amount of infrastructure, i.e., to do off-screen rendering and to read and compare image data. However, it might come in handy for general testing as well, so it might be well worth the effort.
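As a sketch of the comparison step, the snippet below computes the mean per-channel difference between two RGB images and accepts small deviations instead of requiring exact equality. The Image struct and the tiny example buffers are made up for illustration; the off-screen rendering and image loading parts are left out entirely.

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <limits>
#include <vector>

struct Image
{
    std::size_t width = 0;
    std::size_t height = 0;
    std::vector<std::uint8_t> pixels; // interleaved RGB values
};

// mean absolute per-channel difference between two images of equal size
double mean_difference(const Image& a, const Image& b)
{
    if (a.width != b.width || a.height != b.height ||
        a.pixels.size() != b.pixels.size() || a.pixels.empty())
        return std::numeric_limits<double>::infinity();

    double sum = 0.0;
    for (std::size_t i = 0; i < a.pixels.size(); ++i)
        sum += std::abs(static_cast<int>(a.pixels[i]) -
                        static_cast<int>(b.pixels[i]));
    return sum / static_cast<double>(a.pixels.size());
}

int main()
{
    // two tiny 1x2 images standing in for a rendering and its reference
    Image rendered{1, 2, {255, 0, 0, 0, 255, 0}};
    Image reference{1, 2, {250, 0, 0, 0, 255, 5}};

    // accept small deviations instead of requiring exact equality
    if (mean_difference(rendered, reference) > 2.0)
        std::cerr << "image comparison failed" << std::endl;
    return 0;
}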

File System Independence

The strategies outlined above should give you some ideas on how to write tests for components involving file I/O. However, they do not yet help with the dependence on the file system itself and the problems associated with it. Therefore, a major goal should be to break this dependence so that the unit test runs fast and independently of the execution environment.

One possible way to achieve this is to apply the Single Responsibility Principle (SRP), one of the SOLID principles of object-oriented software design. It basically states that every module or class should only be responsible for a single functionality.

Let’s consider a simplistic writer function as an example:

#include <fstream>
#include <string>

void write(const std::string& filename)
{
    // additional checks on filename omitted for brevity
    std::ofstream ofs(filename.c_str());
    ofs << "test" << std::endl;
    ofs.close();
}

int main(int argc, char** argv)
{
    write("/tmp/test.txt");
    return 0;
}

Usually, the write() function would take care of checking whether the file specified in filename can be opened for writing, open it, write the contents to disk, and finally close the file again. This means, however, that the function has more than one responsibility:

  1. File handling and
  2. Data conversion to the disk format

Whenever you need an and to describe the responsibility of a class or function, this should give you a hint that this part of the code likely violates the SRP.

The obvious solution is to split the responsibilities so that file handling is not part of the write() function. Instead, write() should only handle the conversion of the data to the desired file format.

In C++, we can achieve this rather easily by using the output stream class std::ostream. This interface allows both output to an actual file using std::ofstream and output to a string buffer using std::ostringstream for unit testing.

The write() function would then simply become:

void write(std::ostream& os)
{
    os << "test" << std::endl;
}

The regular invocation would pass an already opened std::ofstream:

#include <fstream>

int main(int argc, char** argv)
{
    std::ofstream ofs("/tmp/test.txt");
    write(ofs);
    ofs.close();
    return 0;
}

In contrast, the unit test invocation would pass an instance of std::ostringstream and then validate the resulting contents of the string stream for correctness:

#include <iostream>
#include <sstream>

int main(int argc, char** argv)
{
    std::ostringstream oss;
    write(oss);
    if (oss.str() != "test\n")
        std::cerr << "test failed" << std::endl;
    return 0;
}

Of course, the above is only a simplistic and fictitious example, but it should be sufficient to convey the idea of separating responsibilities so that you can make your tests independent of the file system.

For a real-world application you’d probably add some intermediate class to handle the actual file system interactions such as checking for existence, permissions, opening, and closing. You could then test this abstraction layer even further by using mock objects. For now, however, that’s way out of scope for this post.

Wrapping Up

In this post, I outlined basic strategies for unit testing software components involving file I/O as well as a concrete way to refactor your I/O functions to have clearer responsibilities and to be less dependent on the underlying file system.

My current strategy of choice for validating I/O results is to do a round trip of export and re-import. It’s not perfect, that’s for sure, but it offers a good compromise between correctness, completeness, flexibility, and implementation effort.

In case you are aware of any additional testing strategies for file I/O, I’d be glad to hear about them!

References and Further Reading