Saturday 16 December 2017

Notes on negation

The well-known formulas

Most readers will be familiar with -x = ~x + 1 = ~(x - 1). These are often just stated without justification, or even an explanation of why they are equivalent. There are some algebraic tricks, but I don't think they explain much, so I'll use the rings from visualizing addition, subtraction and bitwise complement. ~x + 1, in terms of such a ring, means "flip it, then draw a CCW arrow on it with a length of one tick". ~(x - 1) means "draw a CW arrow with a length of one tick, then flip". Picking CCW first is arbitrary, but the point is that the direction is reversed because flipping the ring also flips the arrow if it is drawn first, but not if it is drawn second. Equivalently, instead of drawing an arrow, you may rotate the ring around its center.

So they're equivalent, but why do they negate? The same effect also explains
a - b = ~(~a + b), which when you substitute a = 0 almost directly gives -b = ~(b - 1). Or, using the difference between one's complement and proper negation that I pointed out in that visualization post: the axis of flipping is offset by half a tick, so flipping introduces a difference of 1 which can be removed by rotating by a tick.
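
These identities are also easy to confirm mechanically; here is a minimal exhaustive check for 8-bit values (my own sketch, not part of the original post):

#include <stdint.h>
#include <assert.h>

int main(void) {
    // Check -x == ~x + 1 == ~(x - 1) for every 8-bit value.
    for (int i = 0; i < 256; i++) {
        uint8_t x = (uint8_t)i;
        assert((uint8_t)-x == (uint8_t)(~x + 1));
        assert((uint8_t)-x == (uint8_t)~(x - 1));
    }
    return 0;
}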

Bit-string notation

I first saw this notation in The Art of Computer Programming v4A, but it probably predates it. It provides a more "structural" view of negation: $$-(a10^k) =\; {\sim} (a10^k - 1) =\; {\sim} (a01^k) = ({\sim} a)10^k$$ Here juxtaposition is concatenation, and exponentiation is repetition and is done before concatenation. a is an arbitrary bit string that may be infinitely long. It does not really deal with the negation of zero, since the input is presumed to end in $10^k$, but the negation of zero is not very interesting anyway.
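
For a concrete instance (a worked example added here for illustration), take the 8-bit value 44 = 00101100₂, so that a = 00101 and k = 2: $$-(00101\,1\,0^2) = ({\sim}00101)\,1\,0^2 = 11010100_2 = 212 \equiv -44 \pmod{2^8}$$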

What this notation shows is that negation can be thought of as complementing everything to the left of the rightmost set bit, a property that is frequently useful when working with the rightmost bit. A mask of the rightmost set bit and everything to the right of it can be found with x ^ (x - 1) or, on a modern x86 processor, blsmsk. That leads to negation by XOR: $$-x = x\oplus {\sim}\text{blsmsk}(x)$$ which is sort of cheating, since ~blsmsk(x) = x ^ ~(x - 1) = x ^ -x, so all this really says is that -x = x ^ x ^ -x. It may still be useful occasionally, for example when a value of "known odd-ness" is being negated and then XORed with something, the negation can be merged into the XOR.
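
A minimal sketch of that last point (mine, not from the original text): if x is known to be odd, then blsmsk(x) = 1, so ~blsmsk(x) is ~1 and the negation collapses into a constant XOR:

#include <stdint.h>
#include <assert.h>

int main(void) {
    // If x is known to be odd, blsmsk(x) = 1, so -x = x ^ ~1 and the
    // negation in y ^ -x can be folded into a single constant XOR.
    for (int x = 1; x < 256; x += 2) {
        for (int y = 0; y < 256; y++) {
            uint8_t folded = (uint8_t)((y ^ x) ^ ~1);
            assert((uint8_t)(y ^ -x) == folded);
        }
    }
    return 0;
}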

Negation by MUX

Using that mask from blsmsk, negation can be written as $$-x = \text{mux}(\text{blsmsk}(x), {\sim} x, x)$$ which combines with bit-level commutativity in some fun ways:

  • (~x + 1) + (x - 1) = mux(blsmsk(x), ~x, x) + mux(blsmsk(x), x, ~x) = ~x + x = -1
  • (~x + 1) | (x - 1) = ~x | x = -1
  • (~x + 1) ^ (x - 1) = ~x ^ x = -1
  • (~x + 1) & (x - 1) = ~x & x = 0
All of these have simpler explanations that don't involve bit-level commutativity, by rewriting them back in terms of negation. But I thought it was nice that it was possible this way too, because it makes it seem as though a +1 and a -1 on both sides of an OR, XOR or AND cancel out, which in general they definitely do not.

The formula that I've been using as an example for the proof-finder on haroldbot.nl/how.html,
(a & (a ^ a - 1)) | (~a & ~(a ^ a - 1)) == -a, is actually a negation-by-MUX, written using mux(m, x, y) = y & m | x & ~m.
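
As a small verification sketch (assuming the mux definition above), the negation-by-MUX identity can be checked exhaustively for 8-bit values:

#include <stdint.h>
#include <assert.h>

// mux(m, x, y) = y & m | x & ~m: bits of y where m is 1, bits of x where m is 0.
static uint8_t mux(uint8_t m, uint8_t x, uint8_t y) {
    return (uint8_t)((y & m) | (x & ~m));
}

int main(void) {
    for (int i = 0; i < 256; i++) {
        uint8_t a = (uint8_t)i;
        uint8_t blsmsk = (uint8_t)(a ^ (a - 1));  // rightmost set bit and everything below it
        assert(mux(blsmsk, (uint8_t)~a, a) == (uint8_t)-a);
    }
    return 0;
}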

Friday 1 December 2017

Bit-level commutativity

By bit-level commutativity I mean that a binary operator has the property that swapping any subset of bits between the left and right operands does not change the result. The subset may be any old thing, so in general I will call an operator ∘ bit-level commutative if it satisfies the following property $$\forall m,a,b: a \circ b = \text{mux}(m, a, b) \circ \text{mux}(m, b, a)$$ For example, by setting m = b we get a ∘ b = (a & b) ∘ (a | b), sort of "bitwise sorting" the operands, with zeroes moved to the left operand and ones moved to the right operand (if possible).

Anyway, obviously AND, OR and XOR (and their complemented versions) are all bit-level commutative; indeed, any purely bitwise operation (expressible as a vectorized function that takes two booleans as input) that is commutative is necessarily also bit-level commutative, for obvious reasons. Interestingly, addition is also bit-level commutative, which may be less obvious (at least in a recent coding competition, it seemed that people struggled with this). It may help to consider addition on a slightly more digit-by-digit level: $$ a + b = \sum_i 2^i (a_i + b_i)$$ It should be clear from this bit-level "exploded" sum that the individual bits $a_i$ and $b_i$ can be either swapped or not, independently for any i. This should get more obvious the more you think about what representing a number in a positional numeral system even means in the first place: it was always a sum, so adding two numbers is like taking the sum of two "big" sums, and of course it does not matter which of the big sums any particular contribution comes from.

Alternatively, the old a + b = (a ^ b) + 2(a & b) (i.e. computing bit-level sums and then adding the carries separately) can explain it: both XOR and AND are bit-level commutative, so the whole expression is, too.
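
A small demonstration sketch (not from the original post), swapping an arbitrary subset of bits between the operands and checking that the sum is unchanged:

#include <stdint.h>
#include <assert.h>

// One convenient mux definition: bits of y where m is 1, bits of x where m is 0.
static uint8_t mux(uint8_t m, uint8_t x, uint8_t y) {
    return (uint8_t)((y & m) | (x & ~m));
}

int main(void) {
    // Swapping any subset m of bits between the operands leaves the sum unchanged.
    for (int a = 0; a < 256; a++)
        for (int b = 0; b < 256; b++)
            for (int m = 0; m < 256; m++)
                assert((uint8_t)(a + b) ==
                       (uint8_t)(mux((uint8_t)m, (uint8_t)a, (uint8_t)b) +
                                 mux((uint8_t)m, (uint8_t)b, (uint8_t)a)));
    return 0;
}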

Anyway, a consequence is that a + b = (a & b) + (a | b), which I have more commonly seen derived as:

a + b = (a ^ b) + 2(a & b)  // add carries separately
      = (a | b) - (a & b) + 2(a & b) // see below
      = (a | b) + (a & b)
Where (a ^ b) = (a | b) - (a & b) can be explained as XOR being like OR, except that unlike OR it is 0 when both operands are set, so just subtract that case out. I always like having two (or more!) explanations from completely different directions like that.
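
As a quick numeric spot-check of both identities (added for illustration), take a = 12 and b = 10: $$12 + 10 = (12 \oplus 10) + 2\,(12 \,\&\, 10) = 6 + 16 = 22 = (12 \mid 10) + (12 \,\&\, 10) = 14 + 8$$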

Multiplication (including carryless multiplication and OR-multiplication) is of course not bit-level commutative. For example, if one operand is zero and the other is odd and not 1, then swapping the lowest bit between the operands makes neither of them zero, so a non-zero product can be produced where the original product was zero. Operations such as comparison and (by extension) min and max are obviously not bit-level commutative.

There is probably more to this, I may add some stuff later.

Sunday 6 August 2017

Visualizing addition, subtraction and bitwise complement

A relatively well-known relation between addition and subtraction, besides the basic relations a - b = a + (-b) and a - b = -(-a + b), is a - b = ~(~a + b). But I suspect most people have simply accepted that as fact, or perhaps proved it from the 2's complement definition of negation. haroldbot can do the latter, though not as succinctly as I hoped.

But it also has a nice one-dimensional geometric interpretation, really analogous to the way a - b = -(-a + b) looks in ℤ.

As negation mirrors ℤ around the origin, complement mirrors the space of two's complement signed bitvectors around the "space" between -1 and 0. Clearly addition in the mirrored space corresponds to subtraction in the unmirrored space, so the obvious way to subtract is: mirror, add, and mirror back. That's precisely what -(-a + b) does in ℤ and what ~(~a + b) does for bitvectors. An observant reader may notice that I conveniently disregarded the finiteness of the number line of fixed-size bitvectors; that's actually not a problem, but the visualization gets a bit trickier.

What is in a way more surprising is that a - b = -(-a + b) also works for bitvectors, since negation does not neatly mirror the whole number line when you're talking about two's complement negation. The mirroring is around the origin again instead of around a "space", but the most negative number is unaffected (it is its own negation).
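
A small sketch (not in the original post) checking both forms of "mirror, add, mirror back" exhaustively on 8-bit values:

#include <stdint.h>
#include <assert.h>

int main(void) {
    for (int a = 0; a < 256; a++) {
        for (int b = 0; b < 256; b++) {
            uint8_t d = (uint8_t)(a - b);
            assert(d == (uint8_t)~(~a + b));   // mirror with complement, add, mirror back
            assert(d == (uint8_t)-(-a + b));   // mirror with negation, add, mirror back
        }
    }
    return 0;
}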

When we remember that this number line is really a number ring (in the circular sense), that starts to make sense again. To complete this picture, you can think about holding a ring in your hands, flipping it over while holding it at two diametrically opposite points - zero and the most negative number. Of course this visualization also works for complement, just hold the ring at slightly different places: between negative one and zero, and between the minimum and maximum (which are adjacent, you could think of it as where the ring closes). There are images below, but you should probably only look at them if visualization failed to appear in your mind naturally - if you already have one, your own image is probably easier to think about.

But why

OK, all this spatial insight is fun (maybe), but what was it actually good for? I've found that thinking about the complement operation this way helps me to relate it to addition-like arithmetic operations (add, subtract, min, max, compare, etc) since they're all simple operations with "arrows" around that ring that we just flipped in our minds.

So it helps to make sense of various "mutants" of De Morgan's Law, such as:

  • ~x < ~y == x > y
  • ~x > ~y == x < y
  • ~min(~x, ~y) == max(x, y)
  • ~max(~x, ~y) == min(x, y)
  • ~avg_up(~x, ~y) == avg_down(x, y) where avg_down is the average rounding down, see also VirtualDub: Weighted averaging in SSE (part 2)
  • ~avg_down(~x, ~y) == avg_up(x, y)
  • ~paddsb(~x, y) == psubsb(x, y) (signed saturation)
  • ~psubsb(~x, y) == paddsb(x, y) (signed saturation)
  • ~paddusb(~x, y) == psubusb(x, y) (unsigned saturation)
  • ~psubusb(~x, y) == paddusb(x, y) (unsigned saturation)
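
A few of these can be spot-checked with a short sketch (the min/max/avg helpers below are my own plain-C stand-ins, using the usual overflow-safe average formulas, not anything from the original post):

#include <stdint.h>
#include <assert.h>

static uint8_t min_u8(uint8_t x, uint8_t y) { return x < y ? x : y; }
static uint8_t max_u8(uint8_t x, uint8_t y) { return x > y ? x : y; }
// Overflow-safe averages: floor and ceiling of (x + y) / 2.
static uint8_t avg_down(uint8_t x, uint8_t y) { return (uint8_t)((x & y) + ((x ^ y) >> 1)); }
static uint8_t avg_up(uint8_t x, uint8_t y)   { return (uint8_t)((x | y) - ((x ^ y) >> 1)); }

int main(void) {
    for (int x = 0; x < 256; x++) {
        for (int y = 0; y < 256; y++) {
            uint8_t a = (uint8_t)x, b = (uint8_t)y;
            assert((uint8_t)~min_u8((uint8_t)~a, (uint8_t)~b) == max_u8(a, b));
            assert((uint8_t)~max_u8((uint8_t)~a, (uint8_t)~b) == min_u8(a, b));
            assert((uint8_t)~avg_up((uint8_t)~a, (uint8_t)~b) == avg_down(a, b));
            assert((uint8_t)~avg_down((uint8_t)~a, (uint8_t)~b) == avg_up(a, b));
        }
    }
    return 0;
}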

A similar visualization works for the signed/unsigned "conversion" x ^ msb == x + msb == x - msb (msb is a mask with only the most significant bit set), which corresponds to rotating the ring 180 degrees. This may help when thinking about things such as:

  • x <s y == (x ^ msb) <u (y ^ msb)
  • x <u y == (x ^ msb) <s (y ^ msb)
  • max_s(x, y) == max_u(x ^ msb, y ^ msb) ^ msb
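
For 8-bit values, with msb = 0x80, a small sketch of this (mine, not from the post) looks like:

#include <stdint.h>
#include <assert.h>

int main(void) {
    const uint8_t msb = 0x80;
    for (int x = 0; x < 256; x++) {
        for (int y = 0; y < 256; y++) {
            uint8_t ux = (uint8_t)x, uy = (uint8_t)y;
            int8_t sx = (int8_t)ux, sy = (int8_t)uy;
            uint8_t fx = (uint8_t)(ux ^ msb), fy = (uint8_t)(uy ^ msb);
            // Signed comparison is unsigned comparison with the sign bits flipped.
            assert((sx < sy) == (fx < fy));
            // Signed max via unsigned max in the "rotated" space.
            uint8_t mu = fx > fy ? fx : fy;
            int8_t ms = sx > sy ? sx : sy;
            assert((int8_t)(mu ^ msb) == ms);
        }
    }
    return 0;
}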

The relationship between the different kinds of min/max can be summed up by a nice commutative diagram:

Hope it helps; for me this sort of thing has come in handy occasionally when writing SSE code.

Here are the images for two's complement negation:

and for plain one's complement:

This is in the orientation that I usually use when I think about these operations this way, but there is no particular meaning to going counter-clockwise with 0 at/near the bottom.