
Unit Testing Geometric Algorithms

Comprehensive testing is a key aspect of developing high-quality software. In this article, I’ll briefly motivate the use of testing, summarize what makes a good unit test, and finally dive into some of the challenges of unit testing geometric algorithms.

The Importance of Testing

Over the last couple of years, extensive software testing has become a cornerstone of high-quality software development. First of all, industrial-strength, production-ready code requires testing to make sure the software behaves correctly. Beyond that, an extensive test suite is the key ingredient for evolving a code base over time as requirements, features, and the people working on the code change. As Michael Feathers puts it in his book Working Effectively With Legacy Code: a code base without proper test coverage is legacy code, i.e., a code base that is extremely difficult to work with. There’s practically no way to change the code without risking breaking some part of the software’s functionality. Often enough, subtle bugs are introduced and go unnoticed until the software goes into production, which is something you definitely want to avoid. In case you’re not yet convinced, or you’re new to software testing altogether, I highly recommend reading Kent Beck’s seminal book Test-Driven Development: By Example.

Note that for most of this article, I’ll focus on unit tests, i.e., the kind of tests used to verify the correctness of an individual unit of code such as a class or a function. To be more precise, I’m following Michael Feathers’ unit testing rules, which state that a test is not a unit test if:

  1. It talks to a database.
  2. It communicates across the network.
  3. It touches the file system.
  4. It can’t run at the same time as any of your other unit tests.
  5. You have to do special things to your environment (such as editing configuration files) to run it.

What Is a Good Unit Test?

Unfortunately, not all tests are created equal. A good test suite allows you to change and evolve your code while assuring correctness. In contrast, a sloppily written test suite can get in your way all too often, significantly slowing down your development. While it is trivial to come up with a test, it is much more difficult to come up with a good one. In the extreme case, you could just write one big test function that exercises all of your code and simply checks for successful execution. Such a test, however, would lack two of the most basic and important characteristics of high-quality unit tests:

  1. They run fast.
  2. They help to localize problems.

Obviously, the one big test function won’t stay fast as the code base grows, and it won’t help localize problems due to its monolithic nature. In contrast, a suite of individual test cases that can be run in isolation will allow you to track down problems much faster. Beyond the two basic criteria of speed and localization, a good test suite will exhibit the following properties:

  1. Fast: the tests run quickly enough that you actually run them all the time.
  2. Independent: tests do not depend on each other or on a particular execution order.
  3. Repeatable: tests produce the same result in any environment.
  4. Self-validating: each test either passes or fails, without manual inspection of the output.
  5. Timely: tests are written close to (ideally before) the production code they cover.

The excellent book Clean Code by Robert C. Martin contains a full chapter on clean unit tests (including the properties above), and I can only recommend reading it.

Another major aspect is the stability of a test over time. A test that frequently fails upon minor changes is likely to produce lots of false positives, and keeping it up to date can be a real time sink. Such tests are often referred to as brittle or flaky. As it turns out, this can be a major challenge when testing geometric algorithms.

Unit Testing Geometric Algorithms

Now, why should unit testing geometric algorithms be any different from testing other types of algorithms? At first glance, one might think everything is relatively easy and straightforward. After all, it’s basically just math, so you should be able to calculate the correct outcome of your algorithm, right? Well, not quite. In fact, even for basic operations such as determining whether a point lies inside a sphere, we have to take potential floating-point precision issues into account. The folks from the CGAL project have put a huge effort into building a library that deals with such precision issues robustly.
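To illustrate the issue, here is a minimal sketch in plain C++ (not CGAL code) of a point-in-sphere predicate: a point constructed to lie exactly on the sphere may end up on either side once the squared distance is evaluated in floating point, so a test asserting a fixed result for such a point would be brittle.

```cpp
#include <cassert>

// Minimal sketch: a floating-point point-in-sphere predicate and how a test
// can avoid asserting on the numerically unstable boundary case.
struct Point { double x, y, z; };

// Returns true if p lies strictly inside the sphere of radius r around c.
bool inside_sphere(const Point& p, const Point& c, double r) {
  const double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
  return dx * dx + dy * dy + dz * dz < r * r;
}

int main() {
  const Point center{0.0, 0.0, 0.0};
  const double radius = 1.0;

  // Safe assertions: points that are clearly inside or clearly outside.
  assert(inside_sphere({0.0, 0.0, 0.5}, center, radius));
  assert(!inside_sphere({0.0, 0.0, 1.5}, center, radius));

  // No assertion for a point constructed to lie exactly on the sphere, e.g.
  // (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)): its coordinates are not exactly
  // representable, so the computed squared distance may land on either side
  // of radius * radius, and a fixed expectation would make the test flaky.
  return 0;
}
```

CGAL sidesteps this by offering kernels with exact geometric predicates; if you are working with plain floating-point arithmetic, keep test points away from such boundary configurations or make the tolerance explicit in the assertion.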

Aside from precision issues, for more complex algorithms, such as those encountered in real-world geometry processing tasks, it becomes increasingly difficult to specify exactly what the correct output of an algorithm is. The notion of correctness is no longer as easy to define. Instead, we gradually move towards testing for acceptable output, i.e., we consider the output to be correct within a certain tolerance. Note the connection to the self-validating property mentioned above: testing against a tolerance does not directly map to a simple pass or fail. Even worse, when we hit the boundaries of the tolerance and the test begins to fail, we eventually need to resort to time-consuming and error-prone manual inspection.
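In code, the shift from “exactly correct” to “correct within a tolerance” is simply the shift from an exact equality assertion to a bounded-error assertion, and the test stays self-validating because the tolerance is part of the assertion itself. A minimal sketch, using GoogleTest here purely as an example framework:

```cpp
#include <gtest/gtest.h>

#include <cmath>

// Pi to double precision; avoids relying on the non-standard M_PI macro.
constexpr double kPi = 3.14159265358979323846;

// Area of a regular n-gon inscribed in the unit circle; approaches pi as n grows.
double inscribed_polygon_area(int n) {
  return 0.5 * n * std::sin(2.0 * kPi / n);
}

TEST(ToleranceExample, InscribedPolygonAreaIsCloseToPi) {
  // An exact comparison (e.g., EXPECT_DOUBLE_EQ) would fail here, since the
  // polygon only approximates the circle. Encoding the tolerance in the
  // assertion keeps the test self-validating: it still simply passes or fails.
  EXPECT_NEAR(inscribed_polygon_area(4096), kPi, 1e-5);
}
```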

The problem is aggravated even further when we need to test against multiple tolerances. Let’s consider mesh simplification as an example. For such an algorithm we typically don’t care about individual vertex positions in the result, but about more global properties such as the number of resulting triangles and the deviation from the initial surface. These are, however, not the only criteria we might need to consider. If the resulting mesh is meant to be used in downstream applications for which mesh element quality is relevant, such as finite element simulations, we also need to make sure that the algorithm does not generate degenerate mesh elements. So in this example we eventually need to test for at least three different criteria (see the sketch after the list):

  1. Element count reduction
  2. Distance to the initial surface
  3. Mesh element quality
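As a rough sketch of how these criteria can become separate, focused test cases, consider the following layout, again using GoogleTest; the Mesh type and the helpers load_test_mesh, simplify, hausdorff_distance, and min_triangle_quality are hypothetical placeholders for whatever your own library provides, and the thresholds are illustrative only.

```cpp
#include <gtest/gtest.h>

// Hypothetical API, standing in for whatever your own geometry library provides.
struct Mesh {
  int triangle_count() const;
  double bounding_box_diagonal() const;
};
Mesh load_test_mesh();                                   // fixed input model of known size
Mesh simplify(const Mesh& mesh, int target_triangles);   // algorithm under test
double hausdorff_distance(const Mesh& a, const Mesh& b);
double min_triangle_quality(const Mesh& mesh);           // normalized so 1.0 is equilateral

TEST(MeshSimplification, ReducesTriangleCount) {
  const Mesh result = simplify(load_test_mesh(), /*target_triangles=*/1000);
  EXPECT_LE(result.triangle_count(), 1000);
}

TEST(MeshSimplification, StaysCloseToInputSurface) {
  const Mesh input = load_test_mesh();
  const Mesh result = simplify(input, /*target_triangles=*/1000);
  // Tolerance expressed relative to the model size, not as an absolute distance.
  EXPECT_LT(hausdorff_distance(input, result), 0.01 * input.bounding_box_diagonal());
}

TEST(MeshSimplification, DoesNotCreateDegenerateTriangles) {
  const Mesh result = simplify(load_test_mesh(), /*target_triangles=*/1000);
  EXPECT_GT(min_triangle_quality(result), 0.05);  // illustrative threshold only
}
```

Each test fails independently, so a regression in, say, element quality points you directly at the responsible criterion.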

Obviously, coming up with good default values and tolerances for such an algorithm is not completely straightforward. The next section outlines some strategies and guidelines I have found useful in the past.

Testing Strategies

So how do we come up with a good test for our geometry processing algorithm? Sorry to disappoint, but there’s no single definite answer to that. To some extent, this will always require experimentation and prior experience. If you happen to have found the silver bullet for this problem, I’d be glad to hear about it. In the meantime, however, there are still some guidelines we can follow.

The goal is to have a good and clean test. Recall the localization criterion from above: a good test should help to localize problems. Therefore, it is only natural to concentrate on a single concept per test. An ideal test has only a couple of lines of code: a setup function, a call to the algorithm, and a set of assertions (see the fixture-based sketch after the list below). Even if you need to test for multiple criteria, as in the mesh simplification example above, I recommend creating a separate test case for each criterion. Now, when writing a test case, we basically need to answer three questions:

  1. What exactly is the property I want to test?
  2. What is a reasonable default value?
  3. What is an acceptable tolerance?
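Before turning to these questions, here is how the shared setup of the earlier simplification sketch can be factored into such a setup function, in this case a GoogleTest fixture, so that each test case still exercises exactly one concept; Mesh, load_test_mesh, simplify, and hausdorff_distance remain hypothetical placeholders.

```cpp
#include <gtest/gtest.h>

// Same hypothetical API as in the earlier sketch. The fixture holds the setup
// and the call to the algorithm; each test case holds a single set of assertions.
class MeshSimplificationTest : public ::testing::Test {
 protected:
  void SetUp() override {
    input_ = load_test_mesh();
    result_ = simplify(input_, /*target_triangles=*/1000);
  }

  Mesh input_;
  Mesh result_;
};

TEST_F(MeshSimplificationTest, ReducesTriangleCount) {
  EXPECT_LE(result_.triangle_count(), 1000);
}

TEST_F(MeshSimplificationTest, StaysCloseToInputSurface) {
  EXPECT_LT(hausdorff_distance(input_, result_),
            0.01 * input_.bounding_box_diagonal());
}
```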

By applying the single-concept-per-test strategy, you should already have sufficient clarity about the property under test. The only thing you need is some way to compute that property, e.g., a function to compute the minimum element quality in a mesh. Next, to come up with a good default value, I typically start by searching for cases with a known outcome. Consider curvature computation as an example. In this case, we know what the curvature should be for certain surfaces such as a plane or a sphere. If there is no such known or obvious case, I tend to think about the minimum viable application scenario of the algorithm and start with the value computed for that case. Finally, unless you already have some knowledge about a reasonable tolerance range, go ahead: vary the input parameters of the algorithm, slightly change the input model, check how sensitive the test is to these variations, and adjust the tolerance accordingly.
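For the curvature example, such a known-outcome test might look like the following sketch; make_unit_sphere_mesh and compute_mean_curvature are hypothetical stand-ins for your own code, and the tolerance reflects the discretization error of the triangle mesh rather than a property of the algorithm itself.

```cpp
#include <gtest/gtest.h>

#include <vector>

// Hypothetical helpers standing in for your own library (Mesh as in the
// earlier sketches).
Mesh make_unit_sphere_mesh(int subdivisions);
std::vector<double> compute_mean_curvature(const Mesh& mesh);

TEST(Curvature, UnitSphereHasUnitMeanCurvature) {
  // Known outcome: with the convention H = (k1 + k2) / 2, a unit sphere has
  // mean curvature 1 at every point.
  const Mesh sphere = make_unit_sphere_mesh(/*subdivisions=*/4);
  const std::vector<double> curvature = compute_mean_curvature(sphere);

  // The tolerance absorbs the discretization error of the triangle mesh;
  // tighten it once you know how the result behaves under refinement.
  for (double h : curvature) {
    EXPECT_NEAR(h, 1.0, 0.05);
  }
}
```

Refining the sphere and tightening the tolerance is then a natural way to probe how sensitive the test is to the input, as described above.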

When you’re done testing the primary functionality of the algorithm, I recommend not stopping there. Nasty bugs and odd behavior frequently show up in borderline cases. Take some time to think about these and write a test for each one of them. Use a code coverage tool to identify cases not yet covered by your test suite. Finally, it’s always good practice to make sure your algorithm gracefully handles invalid input. Write a test for it.
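For the mesh simplification example, such borderline and invalid-input tests could look roughly like the following; as before, the API is hypothetical, and the assumption that invalid input is reported via an exception is mine, not a given.

```cpp
#include <gtest/gtest.h>

#include <stdexcept>

// Again using the hypothetical API from the earlier sketches, plus a
// make_tetrahedron_mesh helper for the smallest closed triangle mesh.

TEST(MeshSimplification, LeavesAlreadyCoarseMeshAlone) {
  // Borderline case: the input already has fewer triangles than the target,
  // so the algorithm should leave it essentially untouched.
  const Mesh tetrahedron = make_tetrahedron_mesh();
  const Mesh result = simplify(tetrahedron, /*target_triangles=*/1000);
  EXPECT_EQ(result.triangle_count(), tetrahedron.triangle_count());
}

TEST(MeshSimplification, RejectsEmptyInput) {
  // Invalid input: here we assume the library reports it with an exception.
  const Mesh empty;
  EXPECT_THROW(simplify(empty, /*target_triangles=*/1000), std::invalid_argument);
}
```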

Wrapping Up

Time to wrap up; this article already got much longer than expected. In any case, I hope I could convince you that comprehensive testing is important and that a high-quality test suite will help you write cleaner and more robust code in the long run. As for testing geometric algorithms, I hope the guidelines above will come in handy if you ever need to write a test for a geometry processing algorithm. In case you already have experience in this area, I’d be more than glad to hear your insights or recommendations. Simply drop me an email.