Verilog Simulators and ctest

If you are someone with a software engineering background getting your hands dirty with hardware design, the first thing you’ll want is some kind of testing framework/runner for all the tests you write. If you are using myhdl, you can already use everything Python offers for unit testing.

But if you are using more conventional tools for a bigger project with a bunch of third-party libraries, chances are you are not happy with shitty bash/csh scripts, and instead of wasting the precious minutes of your life writing those, you’d rather use something that already exists. After all, why reinvent the wheel?

In this post I will describe the troubles of integrating verilog simulators with existing test runners. Namely, ctest (the one that comes with cmake).

So, what’s the purpose of the test runner? Cpt. Obvious says it’s needed to run your tests (perhaps in parallel to speed things up), handle timeouts, and generate nice and shiny reports in different formats that we can later integrate into, say, Jenkins, post to CDash, or print on soft paper for later usage.

Cool stuff, but how does a test (which is usually just a program) tell the runner whether it ‘passed’ or ‘failed’? Obviously, via its exit code. If it’s 0 – PASSED. Anything else – EPIC FAIL. Sounds simple.
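The contract can be sketched in plain shell – nothing Verilog-specific here, just the exit-code convention every runner relies on:

```shell
#!/bin/sh
# The whole contract between a test and its runner:
# exit status 0 means PASSED, anything else means FAILED.
true;  echo "true  exited with $?"   # prints: true  exited with 0
false; echo "false exited with $?"   # prints: false exited with 1
```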

So, if our test is a verilog testbench, the only thing we have to do is finish the simulation with a non-zero exit code. Sounds simple? Well, turns out it’s not.

The classic verilog $finish() takes an optional argument, but it only controls how much diagnostic information gets printed – the simulator still terminates with a 0 exit code (unless we crashed the simulator… somehow).

Some simulators let you supply an exit code, but some don’t.


SystemVerilog adds a new $fatal() call that terminates the simulation with a non-zero exit code, so we should be okay unless… we’re using a huge pile of third-party libraries and behavioral models that just call $finish() when something goes wrong. And the latter is almost always the case in a big project.
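For reference, here is a minimal sketch of $fatal (note: whether $fatal actually propagates a non-zero exit code to the shell is ultimately simulator-dependent, so check your tool’s documentation):

```verilog
module fatal_demo;
   initial begin
      // $fatal takes an optional "finish number" (0, 1 or 2, same as
      // $finish) followed by a $display-style message. Most simulators
      // return a non-zero exit code to the shell when it fires.
      $fatal(1, "something went terribly wrong");
   end
endmodule
```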

The solution here is utter and ugly hackery – but it works, and amid all this mess there’s little else we can do:

  1. Write a non-zero exit code to a result.txt file.
  2. Actually do the simulation.
  3. Overwrite result.txt with a zero exit code if we’re good.
  4. Wrap the actual simulation run in a shell script that runs the simulation and, after it’s done, reads out our result.txt and passes the code to the shell.

This way, if anything terminates the simulation prematurely due to some assert in the libraries, we’ll still have a non-zero exit code.

Here’s an example implementation:

module tb;
   int result_fd, tmp;
   string resultfile = "result.txt";

   /* Overwrite the result file with the final exit code and stop */
   task exit(int fd, int code);
      $display("Exiting with code %02d", code);
      $fwrite(fd, "%02d\n", code);
      $fclose(fd);
      /* We're done */
      $finish;
   endtask

   initial begin
      $display("Initializing resultfile");
      result_fd = $fopen(resultfile, "w");
      /* Assume a crash by default */
      $fwrite(result_fd, "1\n");
      tmp = $rewind(result_fd);
   end

   initial begin
      #100 exit(result_fd, 0);
   end
endmodule

You can compile and simulate it with Icarus Verilog. Just save the source to a file and type:

iverilog -g2012 <your-source-file>

And here is what the wrapper shell script might look like:

#!/bin/sh
vvp a.out
[ ! -f "result.txt" ] && code=1 || code=$(cat result.txt)
echo "[!] Simulation complete, exit code $code"
exit $code

Btw, you can generate the wrapper directly from cmake via configure_file.
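A minimal sketch of what that might look like (the template name run_sim.sh.in, the @SIM_BINARY@ substitution and the test name are made up for illustration):

```cmake
# CMakeLists.txt (sketch): generate the wrapper and register the test.
# run_sim.sh.in is a hypothetical template holding the script above,
# with @SIM_BINARY@ filled in by configure_file().
set(SIM_BINARY "${CMAKE_CURRENT_BINARY_DIR}/a.out")
configure_file(run_sim.sh.in run_sim.sh @ONLY)
add_test(NAME tb_smoke
         COMMAND sh ${CMAKE_CURRENT_BINARY_DIR}/run_sim.sh)
```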


Parallel test runs and bad patterns

So, we have lots of tests. And lots of cores on our PC. A natural solution would be to run things in parallel. If we’re using ctest, the only thing we have to do is type ctest -jN, where N is the number of concurrent tests we want to run.

And ctest will do all the rest for us. There can be several instances of a simulation model running at the same time – all Verilog simulators I know of allow this, including Icarus Verilog (which is my favorite!). That would’ve been easy in a perfect world, but in reality there’s another pitfall:

Traditionally in Verilog, different IP cores, libraries, etc. read and write their data/logs in one single directory: the directory where we started the simulator. And as the project gets bigger, the option of editing the models by hand looks less and less neat.

The solution: run the model you’ve compiled & elaborated from DIFFERENT working directories. You can do it with the WORKING_DIRECTORY argument to add_test().
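A sketch of the idea (the test names test_uart and test_spi, and the run_sim.sh wrapper path, are made-up placeholders):

```cmake
# Give every test its own scratch directory so parallel runs
# don't trample each other's logs and result.txt files.
foreach(t test_uart test_spi)
  set(workdir ${CMAKE_CURRENT_BINARY_DIR}/${t}.dir)
  file(MAKE_DIRECTORY ${workdir})
  add_test(NAME ${t}
           COMMAND sh ${CMAKE_CURRENT_BINARY_DIR}/run_sim.sh
           WORKING_DIRECTORY ${workdir})
endforeach()
```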

I guess that’s it for today, happy hacking.
