Sunday 13 April 2014

A bitwise relational domain

This is the thing that I was referring to in my previous post where I said it didn't work out as well as I'd hoped. That's still the case. But it's not a complete failure; it's just not as good as I had hoped it might be, and it's still interesting (and, as far as I know, new).

The basic idea is really the same one as in Miné's A Few Graph-Based Relational Numerical Abstract Domains, namely a matrix where the element i,j says something about the relation between the variables i and j, and there's a closure operator that very closely resembles the Floyd–Warshall algorithm (or matrix multiplication, which comes down to the same thing). Of course saying that the difference of two variables must be in some set isn't great for bitwise operations, so my first thought was to change the difference to an xor, with Miné's bitfield domain (the one I've written several other posts about) as the basis. For the closure operator, the "multiplication" is the abstract xor from the bitfield domain and the "addition" is the bitwise-and (the meet) from the bitfield domain. The rest is similarly easy.
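Roughly, in C, that first attempt could look like this. The (z, o) encoding (a set bit in z means the bit can be 0, a set bit in o means it can be 1), the names, and the fixed NVARS are just for this sketch:

#include <stdint.h>

enum { NVARS = 8 };                    // number of variables, fixed just for the sketch

typedef struct { uint32_t z, o; } bf;  // bit may be 0 / bit may be 1

// Abstract xor: the result bit can be 0 where the inputs can be equal,
// and 1 where they can differ.
static bf bf_xor(bf p, bf q) {
    bf r = { (p.z & q.z) | (p.o & q.o), (p.z & q.o) | (p.o & q.z) };
    return r;
}

// Meet: componentwise intersection of two bitfield elements.
static bf bf_meet(bf p, bf q) {
    bf r = { p.z & q.z, p.o & q.o };
    return r;
}

// Floyd-Warshall-shaped closure: m[i][j] over-approximates the values of
// x_i ^ x_j, so chaining (i,k) with (k,j) gives another bound on (i,j).
// The "multiplication" is bf_xor, the "addition" is bf_meet.
static void close_xor(bf m[NVARS][NVARS]) {
    for (int k = 0; k < NVARS; k++)
        for (int i = 0; i < NVARS; i++)
            for (int j = 0; j < NVARS; j++)
                m[i][j] = bf_meet(m[i][j], bf_xor(m[i][k], m[k][j]));
}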

The problem with that idea is that it tends to find almost no relations. It can represent some non-trivial relations, for example that one variable is (partially) equal to another xor-ed with a constant, but it's just not enough.

The problem is that xor just doesn't remember enough. So the next idea I had was to abandon xor, go with an operator that doesn't throw away anything, and use 4-tuples instead of 2-tuples to represent the set of possible combinations of two bits. Due to lack of inspiration, I named the elements of the tuple (a, b, c, d), and they work like this:

a   x = 0, y = 0
b   x = 0, y = 1
c   x = 1, y = 0
d   x = 1, y = 1
Meaning that if, say, bit 0 in a is set in the element at (x,y), then variables x and y can be even simultaneously. On the diagonal of the matrix, where x=y, b and c must both be zero, because a variable cannot be unequal to itself.
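A possible representation in C, with the names and the 32-bit width chosen just for this sketch:

#include <stdint.h>

// One matrix element: for every bit position, which combinations of
// (bit of x, bit of y) are still considered possible.
typedef struct {
    uint32_t a;  // x = 0, y = 0 possible
    uint32_t b;  // x = 0, y = 1 possible
    uint32_t c;  // x = 1, y = 0 possible
    uint32_t d;  // x = 1, y = 1 possible
} rel;

// Top: nothing is known, every combination is possible.
static const rel rel_top = { ~0u, ~0u, ~0u, ~0u };

// Fresh diagonal element: a variable cannot be unequal to itself,
// so b and c are zero.
static const rel rel_diag = { ~0u, 0, 0, ~0u };

The -1 values used below are this all-ones mask, ~0u here.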

The "multiplication" operator for the closure works like this:

a_ij &= (a_ik & a_kj) | (b_ik & c_kj)
b_ij &= (a_ik & b_kj) | (b_ik & d_kj)
c_ij &= (d_ik & c_kj) | (c_ik & a_kj)
d_ij &= (d_ik & d_kj) | (c_ik & b_kj)
For all k. This essentially "chains" i,k with k,j to get an implied constraint on i,j.
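As a sketch (the fixed NVARS and doing only a single Floyd-Warshall-style pass are just for illustration), the closure could be written as:

#include <stdint.h>

enum { NVARS = 8 };

typedef struct { uint32_t a, b, c, d; } rel;

// One closure pass: chain (i,k) with (k,j) and intersect the implied
// constraint into (i,j), for every intermediate variable k.
static void rel_close(rel m[NVARS][NVARS]) {
    for (int k = 0; k < NVARS; k++)
        for (int i = 0; i < NVARS; i++)
            for (int j = 0; j < NVARS; j++) {
                rel ik = m[i][k], kj = m[k][j];
                m[i][j].a &= (ik.a & kj.a) | (ik.b & kj.c);
                m[i][j].b &= (ik.a & kj.b) | (ik.b & kj.d);
                m[i][j].c &= (ik.d & kj.c) | (ik.c & kj.a);
                m[i][j].d &= (ik.d & kj.d) | (ik.c & kj.b);
            }
}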

To show how powerful this is, here is what happens after asserting that z = x & y:

         x             y             z
x   -1, 0, 0,-1   -1,-1,-1,-1   -1,-1, 0,-1
y   -1,-1,-1,-1   -1, 0, 0,-1   -1,-1, 0,-1
z   -1, 0,-1,-1   -1, 0,-1,-1   -1, 0, 0,-1
And now some cool things can happen. For example, if you assert that z is odd, the closure operation will propagate that back to both x and y. Or if you assert that w = x | z, it will deduce that w == x.

There is a symmetry between the upper triangle and the lower triangle. They're not the same, but an element (a, b, c, d) is mirrored by an element (a, c, b, d). One of those triangles can easily be left out, to save some space. Elements that are (-1, -1, -1, -1) could also be left out, but that doesn't typically save many elements - if something non-relational is known about a variable (that is, there is an element on the diagonal that isn't (-1, 0, 0, -1)), then everything in the same row and everything in the same column will have that information in it as well. But an element (i,j) exactly equal to (a_ii & a_jj, a_ii & d_jj, d_ii & a_jj, d_ii & d_jj) can be left out. That removes all elements that describe a trivial relation of the kind that says that the combination of x and y must be in the Cartesian product of the sets that x and y can be in, which is completely redundant.
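A possible way to write that check (names as in the earlier sketches):

#include <stdint.h>

typedef struct { uint32_t a, b, c, d; } rel;

// An off-diagonal element can be left out if it says no more than the
// Cartesian product of what the two diagonal elements already say.
static int rel_is_redundant(rel ij, rel ii, rel jj) {
    return ij.a == (ii.a & jj.a) &&
           ij.b == (ii.a & jj.d) &&
           ij.c == (ii.d & jj.a) &&
           ij.d == (ii.d & jj.d);
}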

But it does have problems. If you xor (or add or subtract) two unconstrained variables, nothing is learned about the result. That sort of 3-variable relation isn't handled at all. This led me to explore a 3D variant of this domain (with 8-tuples), which is really great if you only consider its expressive power, but it likes to take a large amount of space, and its closure operator is very inefficient.

The algorithms I presented earlier for addition in the bitfield domain still work in this relational domain, and with just a couple of simple changes they can make use of some relational information. They want to know something about p = x ^ y and g = x & y, and relational information can be available for both of them. For example, if it is known that x & y == 0 (indicated by a 4-tuple (a, b, c, 0)), g will be known to be 0, all carries will be known to be 0, and two relations will be made that look exactly the same as if you had asserted z = x | y. Of course that only works if x & y == 0 is known before the addition.
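The addition algorithm itself is in the earlier posts, but the step of reading p and g off the relational element is easy to sketch: the xor bit can only be 0 where x and y can be equal and only 1 where they can differ, and the and bit can only be 1 where both bits can be 1 simultaneously. With the same illustrative encoding as before:

#include <stdint.h>

typedef struct { uint32_t a, b, c, d; } rel;  // relation between x and y
typedef struct { uint32_t z, o; } bf;         // may-be-0 / may-be-1 masks

// Bitfield approximation of p = x ^ y, read off the relation for (x, y).
static bf rel_xor(rel r) {
    bf p = { r.a | r.d,       // can be 0 where x and y can be equal
             r.b | r.c };     // can be 1 where they can differ
    return p;
}

// Bitfield approximation of g = x & y.
static bf rel_and(rel r) {
    bf g = { r.a | r.b | r.c, // can be 0 unless both bits are forced to 1
             r.d };           // can be 1 only where both can be 1
    return g;
}

With a 4-tuple of the shape (a, b, c, 0), rel_and returns a g that cannot have any 1 bits, which is exactly the x & y == 0 case described above.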

Assertions such as x & y == 0 (more generally: x ⊗ y is in (z, o), where (z, o) is an element from the non-relational bitfield domain) can be done directly (without inventing a temporary to hold x ⊗ y and asserting on it), and actually that's simpler and far more efficient.
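For ⊗ = & and ⊗ = ^, such a direct assert could look like this: each of the four combinations forces a particular result bit, so a combination is kept only if (z, o) allows that result. Same illustrative encoding as before:

#include <stdint.h>

typedef struct { uint32_t a, b, c, d; } rel;  // relation between x and y
typedef struct { uint32_t z, o; } bf;         // asserted set of results

// Assert that x & y is in (z, o), directly on the element for (x, y).
static rel assert_and(rel r, bf s) {
    r.a &= s.z;  // x = 0, y = 0 makes the and-bit 0
    r.b &= s.z;  // x = 0, y = 1 makes it 0
    r.c &= s.z;  // x = 1, y = 0 makes it 0
    r.d &= s.o;  // x = 1, y = 1 makes it 1
    return r;
}

// Assert that x ^ y is in (z, o).
static rel assert_xor(rel r, bf s) {
    r.a &= s.z;  // equal bits xor to 0
    r.d &= s.z;
    r.b &= s.o;  // differing bits xor to 1
    r.c &= s.o;
    return r;
}

Asserting x & y == 0 is then assert_and with (z, o) = (~0, 0), which clears d and leaves a, b, c alone, giving the (a, b, c, 0) shape mentioned above.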