Once again a boring-but-specific title, I don't want to clickbait the audience after all. Even so, let's get some "disclaimers" out of the way.

- "All" includes both boring and interesting identities. I tried to remove the most boring ones, so it's no longer truly *all* identities, but a lot of boring ones still remain. The way I see it, the biggest problem with the approach I describe in this blog post is that it generates too much "true but boring" junk.
- This approach is, as far as I know, strictly limited to fixed-size bitvectors, but that's what I'm interested in anyway. To keep things reasonable, the size should be small, which does result in some differences with eg the arithmetic of 64-bit bitvectors. Most of the results either transfer directly to larger sizes, or generalize to them.
- The set of operations is restricted to those that are cheap to implement in CNF SAT; that is not a hard limitation but a practical one.
- "Of a certain size" means we have to pick in advance the number of operations on both sides of the mathematical identity, and then only identities with exactly that number of operations (counted in the underlying representation, which may represent a seemingly larger expression if the result of an operation is used more than once) are found. This can be repeated for any size we're interested in, but this approach is not very scalable and tends to run out of memory if there are too many identities of the requested size.

The results look something like this. Let's say we want "all" identities that involve 2 variables, 2 operations on the left, and 0 operations (ie only a variable) on the right. The result would be the following; keep in mind that a bunch of redundant and boring identities are filtered out.

```
(a - (a - b)) == b
(a + (b - a)) == b
((a + b) - a) == b
(b | (a & b)) == b
(a ^ (a ^ b)) == b
(b & (a | b)) == b
// Done in 0s. Used a set of 3 inputs.
```

Nothing too interesting so far, but then we didn't ask for much. Here are a few selected "more interesting" (but by no means new or unknown) identities that this approach can also enumerate:

```
((a & b) + (a | b)) == (a + b)
((a | b) - (a & b)) == (a ^ b)
((a ^ b) | (a & b)) == (a | b)
((a & b) ^ (a | b)) == (a ^ b)
((~ a) - (~ b)) == (b - a)
(~ (b + (~ a))) == (a - b)
```

Now that the expectations have been set accurately (hopefully), let's get into what the approach is.

## The Approach

The core mechanism I use is CounterExample-Guided Inductive Synthesis (CEGIS) based on a SAT solver. Glucose worked well for me, but other solvers could be used. Rather than asking CEGIS to generate a snippet of code that performs some specific task, however, I ask it to generate two snippets that are equivalent. That does not fundamentally change how it operates, which is still a loop of:

- Synthesize code that does the right thing for each input in the set of inputs to check.
- Check whether the code matches the specification. If it does, we're done. If it doesn't, add the counter-example to the set of inputs to check.
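The loop above can be sketched as follows. As a stand-in for the SAT solver, this toy version synthesizes by brute-force search over a tiny hand-written candidate space; the candidate list, the function names, and the single-variable setting are illustrative assumptions, not the post's actual encoding.

```python
MASK = 0xF  # 4-bit bitvectors

# Hypothetical candidate space: a few unary expressions over one variable.
CANDIDATES = [
    ("a", lambda a: a),
    ("~a", lambda a: ~a & MASK),
    ("a + 1", lambda a: (a + 1) & MASK),
    ("-a", lambda a: -a & MASK),
]

def cegis(spec):
    """Find a candidate equivalent to `spec` on all 4-bit inputs."""
    inputs = {0}  # start with an arbitrary seed input
    while True:
        # Synthesize: find a candidate correct on the current input set.
        cand = next((c for c in CANDIDATES
                     if all(c[1](x) == spec(x) for x in inputs)), None)
        if cand is None:
            return None  # no candidate matches the specification
        # Check: brute-force all 16 valuations for a counter-example.
        cex = next((x for x in range(16) if cand[1](x) != spec(x)), None)
        if cex is None:
            return cand[0]  # verified equivalent on every input
        inputs.add(cex)  # refine the input set and try again
```

For example, `cegis(lambda a: -a & 0xF)` first synthesizes the wrong candidate `a` (it agrees with `-a` on the seed input `0`), gets the counter-example `1`, and only then lands on `-a`, which survives the exhaustive check.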

Both synthesis and checking could be performed by a SAT solver, but I only use a SAT solver for synthesis. For checking, since 4-bit bitvectors have so few combinations, I just brute force every possible valuation of the variables.
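With only 16 values per variable, that check is just a nested loop. A minimal sketch for two-variable expressions (the function name is mine):

```python
MASK = 0xF  # 4-bit bitvectors

def equivalent(f, g):
    """Brute-force check: do f and g agree on every 4-bit valuation?"""
    return all(f(a, b) & MASK == g(a, b) & MASK
               for a in range(16) for b in range(16))

# A few of the identities from the enumeration above:
assert equivalent(lambda a, b: (a & b) + (a | b), lambda a, b: a + b)
assert equivalent(lambda a, b: (a | b) - (a & b), lambda a, b: a ^ b)
assert equivalent(lambda a, b: ~a - ~b, lambda a, b: b - a)
```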

When a pair of equivalent expressions has been found, I add its negation as a single clause to prevent the same thing from being synthesized again. This is what enables pulling out one identity after the other. In my imagination, that looks like Thor smashing his mug and asking for another.
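Concretely, if the solver hands back a model as signed DIMACS-style literals, the blocking clause is just the negation of each literal. This representation is an assumption for illustration; the real code talks to Glucose, and in practice only the literals that encode the synthesized programs need to be negated, not every auxiliary variable.

```python
def blocking_clause(model):
    """Negate a satisfying assignment so the solver must find a new one.

    `model` is a list of DIMACS-style literals, e.g. [1, -2, 3] meaning
    variable 1 is true, variable 2 is false, variable 3 is true. The
    returned clause is satisfied by every assignment EXCEPT this one.
    """
    return [-lit for lit in model]
```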

Solving for *programs* may seem odd. The trick is to represent a program as a sequence of instructions built out of boolean variables; the SAT solver is then invoked to solve for those variables.
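To make that concrete, here is one possible (not necessarily the post's exact) shape of such a representation: a program is a list of instructions, each an opcode plus two operand indices that point at the input variables or at earlier results, which is also what lets one result be used more than once. In the SAT encoding, the opcode and the indices would be boolean selector variables for the solver to pick; here they are just plain Python values fed to an evaluator.

```python
MASK = 0xF  # 4-bit bitvectors

OPS = {
    "add": lambda x, y: (x + y) & MASK,
    "sub": lambda x, y: (x - y) & MASK,
    "and": lambda x, y: x & y,
    "or":  lambda x, y: x | y,
    "xor": lambda x, y: x ^ y,
}

def run(program, inputs):
    """Evaluate a straight-line program. Each instruction is a tuple
    (opcode, i, j), where i and j index into inputs + prior results.
    The value of the last instruction is the program's result."""
    values = list(inputs)
    for op, i, j in program:
        values.append(OPS[op](values[i], values[j]))
    return values[-1]

# (a - (a - b)) == b, checked at a=5, b=3:
prog = [("sub", 0, 1),   # v2 = a - b
        ("sub", 0, 2)]   # v3 = a - v2
assert run(prog, [5, 3]) == 3
```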

The code is available on GitLab.

## Original motivation

The examples I gave earlier only involve "normal" bitvector arithmetic. Originally, what I set out to do was discover what sorts of mathematical identities are true in the context of *trapping* arithmetic (in which subtraction and addition trap on signed overflow), using the rule that two expressions are equivalent if and only if they have the same behaviour, in the following sense: for all valuations of the variables, the two expressions either yield the same value, or they both trap. That rule is also implemented in the linked source.
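That equivalence rule can be written down directly. Below is a sketch on 4-bit values (signed range -8..7), where trapping is modeled by returning a sentinel; the names are mine, and unlike the real implementation this sketch does not propagate a trap from a nested trapping operation into an enclosing one.

```python
MASK, SIGN = 0xF, 0x8
TRAP = "trap"

def to_signed(x):
    """Interpret a 4-bit value as two's-complement, range -8..7."""
    return x - 16 if x & SIGN else x

def add_t(x, y):
    """Trapping add: traps when the signed result overflows 4 bits."""
    r = to_signed(x) + to_signed(y)
    return (x + y) & MASK if -8 <= r <= 7 else TRAP

def sub_t(x, y):
    """Trapping subtract, same overflow rule."""
    r = to_signed(x) - to_signed(y)
    return (x - y) & MASK if -8 <= r <= 7 else TRAP

def equivalent_t(f, g):
    """Equivalent iff, for every valuation, both trap or both
    yield the same value (TRAP == TRAP covers the former)."""
    return all(f(a, b) == g(a, b)
               for a in range(16) for b in range(16))

# ((~ b) -t (~ a)) == (a -t b): same value AND same trapping behaviour.
assert equivalent_t(lambda a, b: sub_t(~b & MASK, ~a & MASK),
                    lambda a, b: sub_t(a, b))

# (b & (~ a)) == (b -t (a & b)): the left side contains no trapping
# operation, so this also verifies the right side can never trap.
assert equivalent_t(lambda a, b: b & ~a & MASK,
                    lambda a, b: sub_t(b, a & b))
```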

Many of the identities found in that context involve a trapping operation that can never actually trap. For example, the trapping subtraction (the `t` suffix in `-t` indicates the trapping version of subtraction) in `(b & (~ a)) == (b -t (a & b))` cannot trap (boring to prove, so I won't bother). But the "infamous" (among whom? maybe just me) `(-t (-t (-t a))) == (-t a)` is also enumerated, and the sole remaining negation *can* trap, but does so in exactly the same case as the original three-negations-in-a-row (namely when `a` is the bitvector with only its sign bit set). Here is a small selection of nice identities that hold in trapping arithmetic:

```
((a & b) +t (a | b)) == (a +t b)
((a | b) -t (a & b)) == (a ^ b)
((~ b) -t (~ a)) == (a -t b)
(~ (b +t (~ a))) == (a -t b)
(a -t (a -t b)) == (a - (a -t b)) // note: one of the subtractions is a non-trapping subtraction
```

## Future directions

A large source of boring identities is the fact that if `f(x) == g(x)`, then also `f(x) + x == g(x) + x`, `f(x) & x == g(x) & x`, and so on: "small" identities show up again as part of larger ones, without introducing any new information, and multiplied in myriad ways. If there were a good way to prevent them from being enumerated (it would have to be easy enough to state in terms of CNF SAT clauses, to avoid slowing down the solver too much), or to summarize the full output, that could make the output of the enumeration more human-digestible.

There is a part 2 for this post.