Wednesday, 3 October 2018

abs and its "extra" result

The abs function, in its usual (most useful) formulation, has one more value in its codomain than just "all non-negative values". That extra value is the most negative integer, which satisfies abs(x) == x despite being negative. Even accepting that the absolute value of the most negative integer is itself, it may still seem strange (for an operation that is supposed to have such a nice symmetry) that the size of the codomain is not exactly half of the size of the domain.

That there is an "extra" value in the codomain, and that it is specifically the most negative integer, may be more intuitively obvious when the action of abs on the number circle (the number line of fixed-size integers closed into a circle) is depicted as "folding" the circle symmetrically in half across the center and through zero (around which abs is supposed to be symmetric), folding the negative numbers onto the corresponding positive numbers:

Clearly both zero and the most negative integer (which is also on the "folding line") stay in place in such a folding operation and remain part of the resulting half-circle. That there is an "extra" value in the codomain is the usual fencepost effect: the resulting half-circle is half the size of the original circle in some sense, but the "folding line" cuts through two points that have now become endpoints.

By the way, the "ones' complement alternative" to the usual abs, let's call it OnesAbs(x) = x < 0 ? ~x : x (there is a nice branch-free formulation too), does have a codomain exactly half the size of its domain. The possible results are exactly the non-negative values. It has to pay for that by, well, not being the usual abs. The "folding line" for OnesAbs runs between points, avoiding the fencepost issue:
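A minimal sketch of that branch-free formulation (C#, 32-bit int; OnesAbs as defined above):

// x >> 31 is all ones for negative x and all zeroes otherwise,
// so the XOR complements x exactly when x is negative.
static int OnesAbs(int x) => x ^ (x >> 31);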

Tuesday, 21 August 2018

Signed wrapping is meaningful and algebraically nice

In this post I defend wrapping; it is a bit more opinionated than my other posts. As usual I'm writing from the perspective that signed and unsigned integer types are a thin transparent wrapper around bit vectors; of course I am aware that they are often not used that way. That difference between their use and their actual nature is probably the source of the problems.

Signed wrapping is not wrong

It is often said that when signed wraparound occurs, the result is simply wrong. That is an especially narrow view to take, probably inspired by treating fixed-size bit vector arithmetic as if it is arithmetic in ℤ, which it is not. Bit vector arithmetic can be viewed as arithmetic in ℤ so long as no "overflow" occurs, but violating that condition does not make the result wrong; it makes the interpretation wrong.

Signed wrapping is meaningful

Signed wrapping works exactly the same as unsigned wrapping: it corresponds to taking the lowest k bits of the arbitrary-precision result. Such a truncation therefore gives you exactly k meaningful bits; it's just a slice of the result. Some upper bits may be lost, but they can be calculated if you need them. If the whole result is meaningful, then part of it is too, at least under the interpretation of being "part of the result".

Another well-known example of benign wrapping is the calculation of the average of two non-negative signed integers. While (a + b) / 2 gives inconvenient results when the addition "overflows", (uint)(a + b) / 2 (using unsigned division) or (a + b) >>> 1 (unsigned right shift as in Java) are correct even when the addition of two positive values results in a negative value. Another way to look at it is that there is no unsigned wrapping. Nominally the integers being added here are signed, but that doesn't really matter. Casting the inputs to unsigned before adding them is a no-op that can be performed mentally.
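As a sketch (C#, assuming non-negative inputs and the default unchecked context; the name Average is just illustrative), the unsigned-division variant looks like this:

// The signed addition may "overflow" into a negative value, but reinterpreting
// the sum as unsigned before dividing gives the correct average anyway: the sum
// of two values below 2^31 always fits in 32 bits, so no unsigned wrapping occurred.
static int Average(int a, int b) => (int)((uint)(a + b) / 2);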

Wrapping can also sometimes be cancelled with more wrapping. For example, taking an absolute value with wrapping and casting the result to an unsigned type of the same width results in the actual absolute value without the funny int.MinValue edge case:

(uint)abs(int.MinValue) = 
(uint)abs(-2147483648) =
(uint)(-2147483648) =
2147483648

This is not what Math.Abs in C# does: it throws, perhaps inspired by its signed return type. On the other hand, Java's Math.abs gets this right and leaves the reinterpretation up to the consumer of the result; of course in Java there is no uint32 to cast to, but you can still treat that result as if it is unsigned. Such "manual reinterpretation" is in general central to integer arithmetic; it's really about the bits, not the "default meaning" of those bits.

The principle of cancelling wrapping also has some interesting data structure applications. For example, in a Fenwick tree or Summed Area Table, the required internal integer width is the desired integer width of any range/area-sum query that you actually want to make. So a SAT over signed bytes can use an internal width of 16 bits as long as you restrict queries to an area of 256 cells or fewer, since 256 * -128 = -2^15 = -32768, which still fits in a signed 16-bit word.

Another nice case of cancelled wrapping is strength reductions like A * 255 = (A << 8) - A. It is usually not necessary to do that manually, but that's not the point; the point is that the wrapping is not "destructive". The overall expression wraps iff A * 255 wraps, and even then it has exactly the same result. There are cases in which the left shift experiences "signed wrapping" but A * 255 does not (for example, in 32 bits, A = 0x00800000); in those cases the subtraction also wraps and brings the result back to being "unwrapped". That is neither a coincidence nor an instance of two wrongs making a right; it's a result of the intermediate wrapped result being meaningful and wrapping being algebraically nice.
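To make the A = 0x00800000 case concrete, here is a small sketch (C#, relying on the default unchecked 32-bit arithmetic):

int A = 0x00800000;
int shifted = A << 8;                   // 0x80000000: the shift alone "signed-wraps"
int viaShift = unchecked(shifted - A);  // 0x7F800000: the subtraction wraps it right back
int direct = unchecked(A * 255);        // 0x7F800000: same value, and A * 255 did not wrap
System.Console.WriteLine(viaShift == direct && direct == 0x7F800000);  // True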

Signed wrapping is not inherent

Signed and unsigned integers are two different ways to interpret bit vectors. Almost all operations have no specific signed or unsigned version, only a generic version that does both. There is no such thing as signed addition or unsigned addition, addition is just addition. Operations that are actually different are:

  • Comparisons except equality
  • Division and remainder
  • Right shift, maybe, but arithmetic right shift and logical right shift can both be reasonably applied in both signed and unsigned contexts
  • Widening conversion
  • Widening multiplication
One thing almost all of these have in common is that they cannot overflow, except division of the smallest integer by negative one. By the way, I regard that particular quirk of division as a mistake, since it introduces an asymmetry between dividing by negative one and multiplying by negative one.

The result is that the operations that can "overflow" are neither signed nor unsigned, and therefore do not overflow specifically in either of those ways. If they can be said to overflow at all, when and how they do so depends on how they are being viewed by an outside entity, not on the operation itself.

The distinction between unsigned and signed wrapping is equivalent to imagining a "border" on the ring of integers (not the mathematical Ring of Integers), either between 0 and -1 (unsigned) or between the most negative and most positive signed numbers (signed), but there is no border. Crossing either of the imaginary borders does not mean nearly as much as many people think it means.

Signed wrapping is algebraically nice

A property that wrapping arithmetic shares with arbitrary precision integer arithmetic, but not with trapping arithmetic, is that it obeys a good number of desirable algebraic laws. The root cause of this is that ℤ/2^kℤ is a ring, and trapping arithmetic is infested with implicit conditional exceptions. Signed arithmetic can largely be described by ℤ/2^kℤ, like unsigned arithmetic, since it is mostly a reinterpretation of unsigned arithmetic. That description does not cover all operations or properties, but it covers the most important aspects.

Here is a small selection of laws that apply to wrapping arithmetic but not to trapping arithmetic:

  • -(-A) = A
  • A + -A = 0
  • A - B = A + -B
  • A + (B + C) = (A + B) + C
  • A * (B + C) = A * B + A * C
  • A * -B = -A * B = -(A * B)
  • A * (B * C) = (A * B) * C
  • A * 15 = A * 16 - A
  • A * multiplicative_inverse(A) = 1 (iff A is odd; this is something not found in ℤ, which has only two trivially invertible numbers, so sometimes wrapping gives you a new useful property; a sketch of computing such an inverse follows below)
Some laws also apply to trapping arithmetic:
  • A + 0 = A
  • A - A = 0
  • A * 0 = 0
  • A * 1 = A
  • A * -1 = -A
  • -(-(-A)) = -A
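About the multiplicative inverse mentioned in the first list: here is a minimal sketch (C#, relying on the default unchecked wrapping arithmetic; the name MultiplicativeInverse is just illustrative) of computing the inverse of an odd 32-bit integer with Newton's iteration:

// For odd a, a * a ≡ 1 (mod 8), so x = a is already correct in its lowest 3 bits.
// Every step x = x * (2 - a * x) doubles the number of correct low bits: 3, 6, 12, 24, 48.
static uint MultiplicativeInverse(uint a)
{
    uint x = a;
    for (int i = 0; i < 4; i++)
        x *= 2u - a * x;
    return x;
}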

The presence of all that implicit exceptional control flow makes trapping code very hard to reason about, for humans as well as compilers.

Compilers react to that by not optimizing as much as they otherwise would, since they are forced to preserve the exception behaviour. Almost anything written in the source code must actually happen, and in the same order as originally written, just to preserve exceptions that are not even supposed to ever actually be triggered. The consequences of that are often seen in Swift, where code using the &+ operator is optimized quite well (including auto-vectorization) and code using the unadorned + operator can be noticeably slower.

Humans... probably don't truly want trapping arithmetic to begin with; what they want is to have their code checked for unintended wrapping. Wrapping is not a bug by itself, but unintended wrapping is. So while cancelling a "bare" double negation is not algebraically justified in trapping arithmetic, a programmer will do it anyway, since the goal is not to do trapping arithmetic but to remove bad edge cases. Statically checking for unintended wrapping would be a more complete solution, no longer relying on being lucky enough to dynamically encounter every edge case. Arbitrary precision integers would remove most edge cases altogether, though they would rely heavily on range propagation for performance, making them a bit fragile.

But anyway, wrapping is not so bad. Just often unintended.

Thursday, 2 August 2018

Implementing Euclidean division

While implementing various kinds of division in haroldbot, I had to look up/work out how to implement different kinds of signed division in terms of unsigned division. The common truncated division (written as /s in this post and in haroldbot, /t in some other places) is the natural result of using your intuition from ℝ and writing the definition based on signs and absolute values, ensuring that the division only happens between non-negative numbers (making its meaning unambiguous) and that the result is an integer: $$\DeclareMathOperator{\sign}{sign} D /_s d = \sign(d)\cdot\sign(D)\cdot\left\lfloor\cfrac{\left|D\right|}{\left|d\right|}\right\rfloor$$ That definition leads to a plot like this, showing division by 3 as an example:

Of course the absolute values and sign functions create symmetry around the origin, and that seems like a reasonable symmetry to have. But that little plateau around the origin often makes the mirror at the origin a kind of barrier that you can run into, leading to the well-documented downsides of truncated division.

The alternative floored division and Euclidean division have a different symmetry, which does not lead to that little plateau; instead the staircase pattern simply continues:

The point of symmetry, marked by the red cross, is at (-0.5, -0.5). Flipping around -0.5 may remind you of bitwise complement, especially if you have read my earlier post visualizing addition, subtraction and bitwise complement, and mirroring around -0.5 is no more than a conditional complement. So Euclidean division may be implemented with positive division as: $$\DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\xor}{\bigoplus} D /_e d = \sign(d)\cdot(\sgn(D)\xor\left\lfloor\cfrac{D\xor\sgn(D)}{\left|d\right|}\right\rfloor)$$ Where the sgn function is -1 for negative numbers and 0 otherwise, and the circled plus is XOR. XORing with the sgn is a conditional complement, with the inner XOR being responsible for the horizontal component of the symmetry and the outer XOR being responsible for the vertical component.
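In code, a sketch of that formula (C#, assuming d ≠ 0 and the default unchecked context; the name DivE is just illustrative) could look like this:

// Euclidean division implemented in terms of unsigned division.
static int DivE(int D, int d)
{
    int sgnD = D >> 31;                    // -1 if D is negative, 0 otherwise
    uint numer = (uint)(D ^ sgnD);         // conditional complement: mirror D around -0.5
    uint denom = (uint)(d < 0 ? -d : d);   // |d|, correct as a uint even for int.MinValue
    int q = (int)(numer / denom) ^ sgnD;   // unsigned division, then mirror the quotient back
    return d < 0 ? -q : q;                 // the sign(d) factor
}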

It would have been even nicer if the symmetry of the divisor also worked that way, but unfortunately that doesn't quite work out. For the divisor, the offset introduced by mirroring around -0.5 would affect the size of the steps of the staircase instead of just their position.

The /e and %e operators are available in haroldbot, though like all forms of division the general case is really too hard, even for the circuit-SAT based truth checker (the BDD engine stands no chance at all).

Saturday, 16 December 2017

Notes on negation

The well known formulas

Most readers will be familiar with -x = ~x + 1 = ~(x - 1). These are often just stated without justification, or even an explanation for why they are equivalent. There are some algebraic tricks, but I don't think they explain much, so I'll use the rings from visualizing addition, subtraction and bitwise complement. ~x + 1, in terms of such a ring, means "flip it, then draw a CCW arrow on it with a length of one tick". ~(x - 1) means "draw a CW arrow with a length of one tick, then flip". Picking CCW first is arbitrary, but the point is that the direction is reversed because flipping the ring also flips the arrow if it is drawn first, but not if it is drawn second. Equivalently, instead of drawing an arrow you may rotate the ring around its center.

So they're equivalent, but why do they negate? The same effect also explains a - b = ~(~a + b), which when you substitute a = 0 almost directly gives -b = ~(b - 1). Or, using the difference between one's complement and proper negation as I pointed out in that visualization post: the axis of flipping is offset by half a tick, so flipping introduces a difference of 1, which can be removed by rotating by a tick.

Bit-string notation

I first saw this notation in The Art of Computer Programming v4A, but it probably predates it. It provides a more "structural" view of negation: $$-(a10^k) =\; {\sim} (a10^k - 1) =\; {\sim} (a01^k) = ({\sim} a)10^k$$ Here juxtaposition is concatenation, and exponentiation is repetition and is done before concatenation. a is an arbitrary bit string that may be infinitely long. It does not really deal with the negation of zero, since the input is presumed to end in 10^k, but the negation of zero is not very interesting anyway.

What this notation shows is that negation can be thought of as complementing everything to the left of the rightmost set bit, a property that is frequently useful when working with the rightmost bit. A mask of the rightmost set bit and everything to the right of it can be found with x ^ (x - 1) or, on a modern x86 processor, blsmsk. That leads to negation by XOR: $$-x = x\oplus {\sim}\text{blsmsk}(x)$$ which is sort of cheating, since ~blsmsk(x) = x ^ ~(x - 1) = x ^ -x, so this just says that -x = x ^ x ^ -x. It may still be useful occasionally; for example, when a value of "known odd-ness" is being negated and then XORed with something, the negation can be merged into the XOR.

Negation by MUX

Using that mask from blsmsk, negation can be written as $$-x = \text{mux}(\text{blsmsk}(x), {\sim} x, x)$$ which combines with bit-level commutativity in some fun ways:

  • (~x + 1) + (x - 1) = mux(blsmsk(x), ~x, x) + mux(blsmsk(x), x, ~x) = ~x + x = -1
  • (~x + 1) | (x - 1) = ~x | x = -1
  • (~x + 1) ^ (x - 1) = ~x ^ x = -1
  • (~x + 1) & (x - 1) = ~x & x = 0
All of these have simpler explanations that don't involve bit-level commutativity, by rewriting them back in terms of negation. But I thought it was nice that it was possible this way too, because it makes it seem as though a +1 and a -1 on both sides of an OR, XOR or AND cancel out, which in general they definitely do not.

The formula that I've been using as an example for the proof-finder on haroldbot.nl/how.html,
(a & (a ^ a - 1)) | (~a & ~(a ^ a - 1)) == -a, is actually a negation-by-MUX, written using mux(m, x, y) = y & m | x & ~m.
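In code form, a small sketch (C#, using the same mux convention as above; the names are just illustrative):

static uint Mux(uint m, uint x, uint y) => (y & m) | (x & ~m);
static uint Blsmsk(uint x) => x ^ (x - 1);                 // rightmost set bit and everything below it
static uint NegateByMux(uint x) => Mux(Blsmsk(x), ~x, x);  // equals 0u - x (two's complement negation) for every x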

Friday, 1 December 2017

Bit-level commutativity

By bit-level commutativity I mean that a binary operator has the property that swapping any subset of bits between the left and right operands does not change the result. The subset may be any old thing, so in general I will call an operator ∘ bit-level commutative if it satisfies the following property $$\forall m,a,b: a \circ b = \text{mux}(m, a, b) \circ \text{mux}(m, b, a)$$ For example, by setting m = b we get a ∘ b = (a & b) ∘ (a | b), sort of "bitwise sorting" the operands, with zeroes moved to the left operand and ones moved to the right operand (if possible).

Anyway, obviously AND, OR and XOR (and their complemented versions) are all bit-level commutative; indeed any purely bitwise operation (expressible as a vectorized function that takes two booleans as input) that is commutative is necessarily also bit-level commutative, for obvious reasons. Interestingly, addition is also bit-level commutative, which may be less obvious (at least in a recent coding competition, it seemed that people struggled with this). It may help to consider addition on a slightly more digit-by-digit level: $$ a + b = \sum_i 2^i (a_i + b_i)$$ It should be clear from the bit-level "exploded" sum that the individual bits a_i and b_i can be either swapped or not, independently for any i. This should get more obvious the more you think about what representing a number in a positional numeral system even means in the first place: it was always a sum, so adding two numbers is like taking the sum of two "big" sums; of course it does not matter which of the big sums any particular contribution comes from.
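If in doubt, the property is also easy to check by brute force. A quick sketch (C#) that exhaustively verifies it for 8-bit addition, swapping the bits selected by an arbitrary mask m:

for (uint a = 0; a < 256; a++)
for (uint b = 0; b < 256; b++)
for (uint m = 0; m < 256; m++)
{
    uint swappedA = (a & ~m) | (b & m);   // a, with the m-bits taken from b
    uint swappedB = (b & ~m) | (a & m);   // b, with the m-bits taken from a
    if (a + b != swappedA + swappedB)
        throw new System.Exception("bit-level commutativity of addition violated");
}
System.Console.WriteLine("verified for all 8-bit a, b and all masks m");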

Alternatively, the old a + b = (a ^ b) + 2(a & b) (i.e. computing the bit-level sums and then adding the carries separately) can explain it: both XOR and AND are bit-level commutative, so the whole expression is, too.

Anyway, a consequence is that a + b = (a & b) + (a | b), which I have more commonly seen derived as:

a + b = (a ^ b) + 2(a & b)  // add carries separately
      = (a | b) - (a & b) + 2(a & b) // see below
      = (a | b) + (a & b)
Where (a ^ b) = (a | b) - (a & b) can be explained as XOR being like OR, except that unlike OR it is 0 when both operands are set, so just subtract that case out. I always like having two (or more!) explanations from completely different directions like that.

Multiplication (including carryless multiplication and OR-multiplication) is of course not bit-level commutative. For example if one operand is zero and the other is odd and not 1, then the lowest bit could be swapped to make neither operand zero, and a non-zero result could be produced that way. Operations such as comparison and (by extension) min and max are obviously not bit-level commutative.

There is probably more to this, I may add some stuff later.

Sunday, 6 August 2017

Visualizing addition, subtraction and bitwise complement

A relatively well-known relation between addition and subtraction, besides the basic relations a - b = a + (-b) and a - b = -(-a + b), is a - b = ~(~a + b). But I suspect most people have simply accepted that as fact, or perhaps proved it from the 2's complement definition of negation. haroldbot can do the latter, though not as succinctly as I hoped.

But it also has a nice one-dimensional geometric interpretation, really analogous to the way a - b = -(-a + b) looks in ℤ.

As negation mirrors ℤ around the origin, complement mirrors the space of two's complement signed bitvectors around the "space" between -1 and 0. Clearly addition in the mirrored space corresponds to subtraction in the unmirrored space, so the obvious way to subtract is mirror, add, and mirror back. That's precisely what -(-a + b) does in ℤ and what ~(~a + b) does for bitvectors. An observant reader may notice that I conveniently disregarded the finiteness of the number line of fixed-size bitvectors; that's actually not a problem, but the visualization gets a bit trickier.

What is in a way more surprising is that a - b = -(-a + b) works for bitvectors, since negation does not neatly mirror the whole number line when you're talking about two's complement negation. The mirroring is around the origin again instead of around a "space", but the most negative number is unaffected.

When we remember that this number line is really a number ring (in the circular sense), that starts to make sense again. To complete this picture, you can think about holding a ring in your hands, flipping it over while holding it at two diametrically opposite points: zero and the most negative number. Of course this visualization also works for complement, just hold the ring at slightly different places: between negative one and zero, and between the minimum and maximum (which are adjacent, you could think of it as where the ring closes). There are images below, but you should probably only look at them if a visualization failed to appear in your mind naturally - if you already have one, your own image is probably easier to think about.

But why

OK, all this spatial insight is fun (maybe), but what was it actually good for? I've found that thinking about the complement operation this way helps me to relate it to addition-like arithmetic operations (add, subtract, min, max, compare, etc) since they're all simple operations with "arrows" around that ring that we just flipped in our minds.

So it helps to make sense of various "mutants" of De Morgan's Law, such as:

  • ~x < ~y == x > y
  • ~x > ~y == x < y
  • ~min(~x, ~y) == max(x, y)
  • ~max(~x, ~y) == min(x, y)
  • ~avg_up(~x, ~y) == avg_down(x, y) where avg_down is the average rounding down, see also VirtualDub: Weighted averaging in SSE (part 2)
  • ~avg_down(~x, ~y) == avg_up(x, y)
  • ~paddsb(~x, y) == psubsb(x, y) (signed saturation)
  • ~psubsb(~x, y) == paddsb(x, y) (signed saturation)
  • ~paddusb(~x, y) == psubusb(x, y) (unsigned saturation)
  • ~psubusb(~x, y) == paddusb(x, y) (unsigned saturation)

A similar visualization works for the signed/unsigned "conversion" x ^ msb == x + msb == x - msb (msb is a mask with only the most significant bit set), which corresponds to rotating the ring 180 degrees. This may help when thinking about things such as:

  • x <s y == (x ^ msb) <u (y ^ msb)
  • x <u y == (x ^ msb) <s (y ^ msb)
  • max_s(x, y) == max_u(x ^ msb, y ^ msb) ^ msb
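For instance, here is a sketch of getting a signed byte max out of SSE2, which only offers an unsigned byte max, using the last identity above (C#, written with .NET's System.Runtime.Intrinsics, which is just one way to write this kind of SSE code; the method name is illustrative):

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static Vector128<sbyte> MaxSignedBytes(Vector128<sbyte> x, Vector128<sbyte> y)
{
    // XOR with the msb reorders signed values into unsigned order, the unsigned
    // max does the real work, and the final XOR converts the result back.
    Vector128<byte> msb = Vector128.Create((byte)0x80);
    Vector128<byte> xu = Sse2.Xor(x.AsByte(), msb);
    Vector128<byte> yu = Sse2.Xor(y.AsByte(), msb);
    return Sse2.Xor(Sse2.Max(xu, yu), msb).AsSByte();
}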

The relationship between the different kinds of min/max can be summed up by a nice commutative diagram:

Hope it helps; for me this sort of thing has come in handy occasionally when writing SSE code.

Here are the images for two's complement negation:

and for plain one's complement:

This is in the orientation that I usually use when I think about these operations this way, but there is no particular meaning to going counter-clockwise with 0 at/near the bottom.

Thursday, 24 November 2016

Parallel prefix/suffix operations

After a "brief" hiatus..

Parallel prefix/suffix operations are a family of operations that follow either this template:

x = x ⊕ (x >> 1);
x = x ⊕ (x >> 2);
x = x ⊕ (x >> 4);
x = x ⊕ (x >> 8);
x = x ⊕ (x >> 16);
Or this template:
x = x ⊕ (x << 1);
x = x ⊕ (x << 2);
x = x ⊕ (x << 4);
x = x ⊕ (x << 8);
x = x ⊕ (x << 16);
Where ⊕ is typically OR or XOR. I've never seen ⊕=AND show up naturally, but it might be useful for something.

There is some disagreement (stemming from the usual "which end of an integer is the front" question) about which of them should be called the "parallel prefix" and which the "parallel suffix"; for the purpose of this post I'll be calling the first one (with the shifts to the right) "parallel prefix". Either way, these operations are bit-level analogues to the more well known prefix sum, prefix max, prefix product, etc. The word "parallel" in the name refers to the bit-level parallelism, which has the same structure as the simple (not work-efficient) parallel prefix sum algorithm.

Some members of the family

The best-known member of parallel prefix/suffix family is the parallel prefix with ⊕ = XOR (PP-XOR), which converts Gray code back to normally-ordered integers and produces the parity of an integer in the lowest bit.
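As a sketch in code (C#, 32-bit; the name PpXor is just illustrative), PP-XOR is the first template with ⊕ = XOR:

// Decodes a Gray-coded integer; the lowest bit of the result is also the
// parity of the original input.
static uint PpXor(uint x)
{
    x ^= x >> 1;
    x ^= x >> 2;
    x ^= x >> 4;
    x ^= x >> 8;
    x ^= x >> 16;
    return x;
}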

The parallel suffix with ⊕ = XOR (PS-XOR) is a carryless multiplication by -1, which isn't very interesting by itself (probably), but gives a hint about the algebraic structure that these operations give rise to.

The parallel prefix with ⊕ = OR (PP-OR) often shows up whenever bitwise operations are interacting with ranges/intervals, since PP-OR(low ^ high) gives a mask of the bits that are not constrained by the interval. I have used this operation (though I did not name it then) several times in my series on propagating bounds through bitwise operations.
This operation lends itself to some optimizations (I put "performance" in the title, which I admit I've mostly ignored); for example, on x64 you could implement it as:

   lzcnt rax, rax  ; input in rax
   sbb rdx, rdx
   not rdx
   shrx rax, rdx, rax
Or:
   lzcnt rax, rax
   mov edx, 64
   sub edx, eax
   or rax, -1
   bzhi rax, rax, rdx
They each have their pros and cons, and hopefully there's a better way to do it, but I couldn't really find any. I made sure to avoid the infamous false dependency that popcnt, tzcnt and lzcnt all share on any Intel processor that implements them to date. Probably the biggest problem here is that both versions require BMI2; that can be avoided, e.g. (suggested by Jasper Neumann):
   xor edx, edx    // provide a zero register for cmov
   bsr ecx, eax
   mov eax, -1
   not ecx         // flags not modified
   cmovz eax, edx
   shr eax, cl

Properties/structure

To start with the obvious, the prefix and suffix versions are exactly each other's mirror image. So I'm going to look just at the suffix part of the family; for the prefix part, just mirror everything.

PS-XOR(x) is clmul(x, -1)

Every step is a clmul by 1 + 2^k, and if you clmul those constants together you get -1 (the two's complement -1, not the -1 in the ring formed by XOR and clmul over bitvectors of length k, which would just be 1 again); or, from a more intuitive angle, what PS-XOR is supposed to do in the first place is XOR each bit into all higher bits. So it inherits the properties of a clmul by an odd constant, such as invertibility (-1 is odd). The inverse of PS-XOR is x ^ (x << 1).
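As a sketch (C#; the names are just illustrative), PS-XOR and that inverse:

static uint PsXor(uint x)
{
    // the second template with ⊕ = XOR, i.e. clmul(x, -1)
    x ^= x << 1;
    x ^= x << 2;
    x ^= x << 4;
    x ^= x << 8;
    x ^= x << 16;
    return x;
}

// clmul(x, 3); clmul(-1, 3) truncated to 32 bits is 1, so PsXorInverse(PsXor(x)) == x
static uint PsXorInverse(uint x) => x ^ (x << 1);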

It also inherits that PS-XOR(x) ^ PS-XOR(y) == PS-XOR(x ^ y) from the distributivity of clmul.

Since clmul is commutative and associative, the steps in PS-XOR can be reordered.

PS-OR(x) is or_mul(x, -1)

This isn't as nice as the XOR case (where XOR and clmul form a ring structure), since OR doesn't form a group because it has no negation (this is just a fancy way of saying that you can't un-OR numbers, in the way you can un-ADD by subtraction or un-XOR with XOR again), so it doesn't extend to a ring, but it can be extended into a semiring by defining the following multiplication:

static uint or_mul(uint a, uint b)
{
    uint r = 0;
    for (int i = 0; i < 32; i++)
    {
        if ((a & 1) == 1)
            r |= b; // the only difference from a plain shift-and-add multiplication (which would do r += b)
        a >>= 1;
        b <<= 1;
    }
    return r;
}
It's not fancy math notation but you get the idea.

Since this (together with OR) forms a commutative semiring (proof "left as an exercise to the reader"), it has some nice properties:

  • or_mul(a, b) == or_mul(b, a) commutativity
  • or_mul(a, or_mul(b, c)) == or_mul(or_mul(a, b), c) associativity
  • or_mul(a, b | c) == or_mul(a, b) | or_mul(a, c) left distributivity
  • or_mul(a | b, c) == or_mul(a, c) | or_mul(b, c) right distributivity
But, of course, no multiplicative inverses. Except when multiplying by 1, but that doesn't count since it's the multiplicative identity.

PS-OR(x) == or_mul(x, -1), and the individual steps of the PS-OR are of the form x = or_mul(x, 1 + 2^k). The or_mul of all those constants together is, unsurprisingly, -1 (though that notation is slightly odd now that this is all a semiring; I mean the element that has all bits set).

And again, PS-OR inherits PS-OR(x) | PS-OR(y) = PS-OR(x | y) from distributivity.

And again, since or_mul is commutative and associative, the steps in PS-OR can be reordered.

PS-OR(x) is also x | -x

This one doesn't have a nice mirrored equivalent, just the obvious one where you insert a bit-reversal before and after. It also doesn't have an obvious relation to or_mul. As for why it's the case: given that (in string notation) -(a10^k) = (~a)10^k, we get a10^k | -(a10^k) = (~a|a)10^k, which is all ones from the lowest set bit upward, so it copies the lowest set bit into all higher bits, just as PS-OR does.