15 October 2021

Benchmarking Clojure Code, Part 1

My first Clojure macro!

UPDATE: After you've read this, head over to part 2 for some exciting improvements!


In the last post I demonstrated the results of some benchmarks I performed. This post explains the approach in detail.

There's a nifty little macro provided by clojure.core called time.

(time expr)

Evaluates expr and prints the time it took. Returns the value of expr.

This macro is meant to wrap any code you want to benchmark, and the beauty is that it simply prints the timing to stdout while still returning the result of the benchmarked code. This means you can temporarily drop it into any running app and get live benchmarks. My goal was to create a dedicated benchmark tool for use in tests: one that prints more succinct output and doesn't need to return the result of the code being timed.
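
For example, wrapping an arbitrary expression (the printed figure here is illustrative):

(time (reduce + (range 1000000)))
; "Elapsed time: 7.2712 msecs"  <- printed to stdout
; => 499999500000               <- the expression's value, still returned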

There's a nifty little "live template" provided by Cursive called bench which leverages time with the following generated code:

(dotimes [_ 10]
  (time
    (dotimes [_ 10000]
      #_TODO)))

You insert an expression you want benchmarked in place of #_TODO. Here's a simple example:

(dotimes [_ 10]
  (time
    (dotimes [_ 10000]
      (. System (nanoTime)))))

When executed, it performs 10 timed runs, each executing your expression 10,000 times. Running this code will produce something like:

"Elapsed time: 0.674879 msecs"
"Elapsed time: 0.680447 msecs"
"Elapsed time: 0.591798 msecs"
"Elapsed time: 0.497664 msecs"
"Elapsed time: 0.49108 msecs"
"Elapsed time: 0.496651 msecs"
"Elapsed time: 0.59438 msecs"
"Elapsed time: 0.546505 msecs"
"Elapsed time: 0.360529 msecs"
"Elapsed time: 0.371137 msecs"

Not the prettiest thing in the world, but it gives you an idea of how fast something runs (well, how fast something runs 10,000 times).
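
As a rough sanity check on those figures: 0.5 msecs per run of 10,000 executions works out to about 50ns per execution, which lines up with the per-op average reported below.

(/ (* 0.5 1000000) 10000) ; => 50.0 (nanoseconds per execution)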

I decided to adapt/simplify the time macro and the above bench template for my own purposes:

(defmacro time-ns [ex]
  ;; evaluates ex and returns the elapsed time in nanoseconds
  `(let [start# (. System (nanoTime))]
     (do ~ex (- (. System (nanoTime)) start#))))
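
Unlike time, time-ns returns the elapsed nanoseconds rather than the value of the wrapped expression. A quick illustrative call (the exact figure will vary):

(time-ns (Thread/sleep 10))
; => 10180959 (elapsed nanoseconds, roughly the 10ms we slept)

The benchmark function then drives time-ns: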

(defn benchmark
  ([title f] (benchmark 10 10000 title f))
  ([n title f] (benchmark 10 n title f))
  ([x n title f]
   ;; x timed runs, each executing f n times
   (let [times   (for [_ (range x)] (time-ns (dotimes [_ n] (f))))
         per-ops (map #(/ % n) times) ; nanoseconds per op, for each run
         average (int (/ (apply + per-ops) (count per-ops)))]
     (println
       (format "## %s - %d x %d ops, average per-op: %dns" title x n average)))))

Adapting the sample System.nanoTime() example:

(perf/benchmark "System.nanoTime()" #(. System (nanoTime)))

And the [much more succinct] result:

## System.nanoTime() - 10 x 10000 ops, average per-op: 56ns

This gets somewhat closer to how Go's benchmarking tool works. That tool runs your code a small number of times and, if your code is fast, increases the number of repetitions to drill down on the average duration per operation. That would be a neat addition to this function. From the Go documentation:

The benchmark function must run the target code b.N times. During benchmark execution, b.N is adjusted until the benchmark function lasts long enough to be timed reliably.
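
Here's a rough sketch of what that kind of calibration could look like on top of time-ns. This is hypothetical and not part of the code above; calibrate-n and min-time-ns are made-up names for illustration:

(defn- calibrate-n
  "Doubles n until one timed run of f lasts at least min-time-ns.
  A hypothetical sketch of Go-style calibration."
  [f min-time-ns]
  (loop [n 1]
    (if (>= (time-ns (dotimes [_ n] (f))) min-time-ns)
      n
      (recur (* 2 n)))))

; e.g. pick an n large enough that a single run lasts at least 100ms:
; (calibrate-n #(. System (nanoTime)) 100000000)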

The above benchmark function allows you to specify the execution parameters, x and n, where x is the total number of timed runs and n is the total number of times to execute the expression between timings.

; defaults: 10 runs, 10,000 executions per run
(perf/benchmark "System.nanoTime()" #(. System (nanoTime)))

; default 10 runs, 100,000 executions per run
(perf/benchmark 100000 "System.nanoTime()" #(. System (nanoTime)))

; 5 runs, 1000 executions per run
(perf/benchmark 5 1000 "System.nanoTime()" #(. System (nanoTime)))
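
Since the goal was a benchmark tool for use in tests, here's a hypothetical example of dropping it into one (myapp.perf is a made-up namespace standing in for wherever benchmark actually lives):

(ns myapp.perf-test
  (:require [clojure.test :refer [deftest]]
            [myapp.perf :as perf]))

(deftest nano-time-bench
  (perf/benchmark "System.nanoTime()" #(. System (nanoTime))))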