Bitfield domains are abstract domains that track the possible values of every bit independently. For example, they can express things like "all values where the third bit is set".

There are two obvious "simplest useful" bitfield domains:

- The first one is probably the most obvious one: a bitvector `m` that has a 1 at every position that has a known value, and a bitvector `v` with the values of the known bits (ones at unknown positions are ignored). This domain turns out to be largely inconvenient, for several reasons.
- The second one is perhaps slightly less intuitive: the one defined in Miné-WING12, section 2.8, namely a bitvector `z` that has a 1 at every position that *can* be zero, and a bitvector `o` with a 1 for every position that can be one. This domain turns out to be pretty good, but the "`o`" naming is sometimes confusing.

So, what's so bad about the first domain?

Firstly, this:

|        | Empty set          | Universal set   |
|--------|--------------------|-----------------|
| first  | not representable  | `m == 0`        |
| second | `(z \| o) != -1`   | `(z & o) == -1` |

Also, every value of the first domain can be converted to the second domain by `z = ~m | ~v; o = ~m | (m & v)`, and most values of the second domain can be converted to the first domain by `m = z ^ o; v = o` or `m = z ^ o; v = ~z`, though that converts empty sets into non-empty sets.
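In code, the conversions might look like this (a sketch using the `KnownBits`/`ZeroOne` structs from above; `toFirst` is the direction that turns empty sets into non-empty sets):

```c
// First domain -> second domain, exact.
ZeroOne toSecond(KnownBits a) {
    ZeroOne r = { ~a.m | ~a.v, ~a.m | (a.m & a.v) };
    return r;
}

// Second domain -> first domain, exact except for empty sets.
KnownBits toFirst(ZeroOne a) {
    KnownBits r = { a.z ^ a.o, a.o };
    return r;
}
```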
But that's not all. The other problem with the first domain is the considerably more annoying implementation of some abstract operations. For example:

- bitwise AND (first domain): `m = (a.m & b.m) | (a.m & ~a.v) | (b.m & ~b.v); v = a.v & b.v;`
- bitwise AND (second domain): `z = a.z | b.z; o = a.o & b.o;`
- bitwise OR (first domain): `m = (a.m & b.m) | (a.m & a.v) | (b.m & b.v); v = a.v | b.v;`
- bitwise OR (second domain): `z = a.z & b.z; o = a.o | b.o;`
- Union (first domain): `m = a.m & b.m & (~a.v ^ b.v); v = a.v;`
- Union (second domain): `z = a.z | b.z; o = a.o | b.o;`
- Intersect (first domain): `if ((a.v ^ b.v) & a.m & b.m) Empty`, otherwise `m = a.m | b.m; v = a.v | b.v;`
- Intersect (second domain): `z = a.z & b.z; o = a.o & b.o;`
- left shift by a constant (first domain): `m = ~(~m << n); v = v << n;`
- left shift by a constant (second domain): `z = ~(~z << n); o = o << n;`
- bitwise XOR (first domain): `m = a.m & b.m; v = a.v ^ b.v;`
- bitwise XOR (second domain): `z = (a.z & b.z) | (a.o & b.o); o = (a.o & b.z) | (a.z & b.o);`
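By the way, transfer functions like these are easy to sanity-check exhaustively on narrow bitvectors. A brute-force sketch for the second-domain AND (my own test harness, using 4-bit values):

```c
#include <stdio.h>

// x (4 bits) is a member of (z, o) iff every 0-bit of x can be zero
// and every 1-bit of x can be one.
static int contains(int z, int o, int x) {
    return ((x & ~o) & 0xF) == 0 && ((~x & ~z) & 0xF) == 0;
}

int main(void) {
    for (int az = 0; az < 16; az++)
    for (int ao = 0; ao < 16; ao++)
    for (int bz = 0; bz < 16; bz++)
    for (int bo = 0; bo < 16; bo++) {
        int rz = az | bz, ro = ao & bo; // abstract AND, second domain
        for (int x = 0; x < 16; x++) {
            if (!contains(az, ao, x)) continue;
            for (int y = 0; y < 16; y++) {
                if (!contains(bz, bo, y)) continue;
                // the concrete result must be in the abstract result
                if (!contains(rz, ro, x & y)) {
                    puts("unsound!");
                    return 1;
                }
            }
        }
    }
    puts("second-domain AND is sound on 4-bit values");
    return 0;
}
```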

What about bitwise complement? In the first domain you just complement `v` (and keep `m`), in the second domain you swap the `z` with the `o`. So which domain is better for bitwise complement? It depends. If you really have to *do* the swap, that's probably worse (hey Intel and AMD, please implement `xchg` with register renaming like you do with `fxch`). But you may be able to do a "virtual swap", where you just swap the meanings of the fields instead of their contents. That's free, but it's usually not applicable. However, you can use it to define subtraction in terms of addition at no extra cost, in the worst case by swapping a couple of things manually.
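For reference, a sketch of complement in both domains (same structs as before); the second-domain version is where a virtual swap would apply:

```c
// Complement, first domain: the same bits are known, their values flip.
KnownBits notFirst(KnownBits a) {
    KnownBits r = { a.m, ~a.v };
    return r;
}

// Complement, second domain: a bit of ~x can be zero iff that bit of x
// can be one, so z and o simply swap.
ZeroOne notSecond(ZeroOne a) {
    ZeroOne r = { a.o, a.z };
    return r;
}
```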
Incidentally, one interesting thing about the first domain is that the operation for `v` is the same as the concrete operation that you're abstracting. That's because whatever garbage happens to be in the unknown bits, by definition it cannot affect bits that are known in the result, so the whole "bit is unknown" business can be ignored for `v`, and all the difficulty is moved to the computation of `m`.

That's all very nice, but the interesting part is:

# Addition, as promised.

All abstract operations up till now were very simple. Addition, however, is a little tricky. First I'll describe the addition algorithm for the first domain... and then I won't describe one for the second domain (it can easily be defined in terms of the addition algorithm for the first domain, though).

It's based on the concepts of a carry look-ahead adder, with "propagates" and "generates" information, but of course it's not actually a CLA because the input isn't fully known and the result is a mask of where the result of the addition would be known.

The basic idea that the entire algorithm is based around is this: if a carry is known somewhere, a whole stretch of positions that would propagate it can be selected, from the point of the known carry upwards. That seems very easy: `(p + g & ~p) - g`, but that's not quite right, and most of the algorithm has to do with fixing it up.
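To see what that expression is aiming at, here's a worked 8-bit example of the easy case (my example; `g` here is a single known carry, already positioned at the bit it carries into):

```
p          = 01111000   bits 3..6 propagate
g          = 00001000   a known carry comes in at bit 3
p + g      = 10000000   the carry ripples through the whole run
(p+g) & ~p = 10000000   the bit just past the run
       - g = 01111000   the entire run, from the carry upwards
```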

For example, if there is more than one generating bit in the same run of propagators, things go south. But that can be fixed: just select only the rightmost ends of all runs of generators: `g & ~(g << 1)`.

That actually doesn't even solve the problem, though. The runs of generators are broken up now, but a single run of propagators could still span all of them. So instead of checking whether `p + g` has bits that weren't in `p`, check whether it has bits that are not in `p`, or that are *both* in `p` *and* in `g` (the new `g`).

The fixups just keep piling up really, so I'm almost certain that at least a couple of operations can be shaved off with some further cleverness. Anyway, here's what I have now:

```
av &= am; bv &= bm;                  // clear the garbage in the unknown bits
uint an = ~av & am;                  // not set in a
uint bn = ~bv & bm;                  // not set in b
uint g0 = an & bn;                   // generates a 0-carry
uint g1 = av & bv;                   // generates a 1-carry
uint p = an ^ bn;                    // propagates a carry
uint pg1 = av | bv;                  // propagates or generates a 1-carry
uint g0l = ~g0 & ((g0 << 1) | 1);    // the bit directly to the left of every run of 1s
uint g1f = g1 & ~(g1 << 1);          // the lowest bit of every run of 1s
uint nc = (p + g0l & ~p) - g0l | g0; // bits that have a carry-out of 0
uint m1 = ~pg1 | (pg1 & g1f);        // not a 1-propagator, or a selected 1-generator
uint c = (pg1 + g1f & m1) - g1f;     // bits that have a carry-out of 1
uint cm = (c | nc) << 1 | 1;         // bits that have a known carry-in
uint m = cm & am & bm;               // result is known if carry-in and both inputs are known
uint v = av + bv;
```
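As for the second domain: assuming the snippet above is wrapped up as a function `addFirst(KnownBits a, KnownBits b)` (my naming, not from the algorithm itself), the second-domain addition falls out via the conversions shown earlier:

```c
// Addition for the second domain, by round-tripping through the first
// domain. toFirst is only lossy for empty sets, so no precision is
// lost for any non-empty input.
ZeroOne addSecond(ZeroOne a, ZeroOne b) {
    return toSecond(addFirst(toFirst(a), toFirst(b)));
}
```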

Subtraction (for both discussed bitfield domains) can be implemented using the identity `a - b = ~(~a + b)`, either by actually doing the complements (which are trivial and don't lose any information) or by rewriting a couple of things in the addition algorithm.
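As a sketch of the first option, reusing `notFirst` and the hypothetical `addFirst` wrapper from above:

```c
// a - b = ~(~a + b); both complements are exact in this domain,
// so this is as precise as a dedicated subtraction would be.
KnownBits subFirst(KnownBits a, KnownBits b) {
    return notFirst(addFirst(notFirst(a), b));
}
```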