tag:blogger.com,1999:blog-14659869424355382082022-04-22T14:38:57.812-07:00Bits, Math and PerformanceBits and math come together to form interesting algorithms, with a small focus on performance.Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comBlogger29125tag:blogger.com,1999:blog-1465986942435538208.post-57376692920487784482021-07-10T09:43:00.004-07:002021-11-17T07:42:52.013-08:00Integer promotion does not help performance<div style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"> <p>There is a rule in the C language which roughly says that arithmetic operations on short integer types implicitly convert their operands to normal-sized integers, and also give their result as a normal-sized integer. For example in C: </p> <blockquote style="background: #f9f9f9; border-left: 10px solid #ccc; padding: 0.5em 10px;">If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. All other types are unchanged by the integer promotions.</blockquote> <p>Various other languages have similar rules. For example C#, the specification of which is not as jargon-infested as the C specification:</p> <blockquote style="background: #f9f9f9; border-left: 10px solid #ccc; padding: 0.5em 10px;">In the case of integral types, those operators (except the ++ and -- operators) are defined for the int, uint, long, and ulong types. When operands are of other integral types (sbyte, byte, short, ushort, or char), their values are converted to the int type, which is also the result type of an operation.</blockquote> <p>There may be various reasons to include such a rule in a programming language (and some reasons <i>not</i> to), but one that is commonly mentioned is that "the CPU prefers to work at its native word size, other sizes are slower". 
There is just one problem with that: it is based on an incorrect assumption about how the compiler will need to implement "narrow arithmetic".</p> <p>To give that supposed reason the biggest benefit that I can fairly give it, I will be using MIPS for the assembly examples. MIPS completely lacks narrow arithmetic operations.</p> <h2>Implementing narrow arithmetic as a programmer</h2> <p>Narrow arithmetic is often required, even though various languages make it a bit cumbersome. C# and Java both demand that you explicitly convert the result back to a narrow type. Despite that, code that needs to perform several steps of narrow arithmetic is usually <i>not</i> littered with casts. The usual pattern is to do the arithmetic without intermediate casts, then only in the end use one cast just to make the compiler happy. In C, even that final cast is not necessary.</p> <p>For example, let's reverse the bits of a byte in C#. This code was written by Igor Ostrovsky, in his blog post <a href="http://igoro.com/archive/programming-job-interview-challenge/">Programming job interview challenge</a>. It's not a unique or special case, and I don't mean that negatively: it's good code that anyone proficient could have written, a job well done. Code that senselessly casts back to <tt>byte</tt> after every step is also sometimes seen, perhaps because in that case, the author does not really understand what they are doing.</p> <pre>// Reverses bits in a byte <br />static byte Reverse(byte b)<br />{<br /> int rev = (b >> 4) | ((b & 0xf) << 4);<br /> rev = ((rev & 0xcc) >> 2) | ((rev & 0x33) << 2);<br /> rev = ((rev & 0xaa) >> 1) | ((rev & 0x55) << 1); <br /><br /> return (byte)rev;<br />}</pre> <p>Morally, all of the operations in this function are really narrow operations, but C# cannot express that. 
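</p><p>For illustration, here is the same function in C, where (as noted above) even the final cast can be dropped, because the return statement narrows implicitly. This is my direct translation of the C# code, not code from the original post:</p>

```c
#include <stdint.h>

// Reverses the bits in a byte; the intermediate arithmetic happens in int,
// exactly as in the C# version, but the return narrows without a cast.
uint8_t reverse_byte(uint8_t b)
{
    int rev = (b >> 4) | ((b & 0xf) << 4);
    rev = ((rev & 0xcc) >> 2) | ((rev & 0x33) << 2);
    rev = ((rev & 0xaa) >> 1) | ((rev & 0x55) << 1);
    return rev; // implicit narrowing conversion, no cast required in C
}
```

<p>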
A special property of this code is that none of the intermediate results exceed the limits of a <tt>byte</tt>, so in a language without integer promotion it could be written in much the same way, but without going through <tt>int</tt> for the intermediate results.</p> <h2>Implementing narrow arithmetic as a compiler</h2> <p>The central misconception that (I think) gave rise to the myth that integer promotion helps performance is the assumption that without integer promotion, the compiler must implement narrow operations by inserting an explicit narrowing operation after every arithmetic operation. But that's not true: a compiler for a language that lacks integer promotion can use the same approach that programmers use to implement narrow arithmetic in languages that do have integer promotion. For example, what if two bytes were added together (loaded from memory, with the result stored back to memory) in a hypothetical language that lacks integer promotion, and what if that code was compiled for MIPS? The assumption is that it will cost an additional operation to get rid of the "trash bits", but it does not:</p> <pre> lbu $2,0($4)<br /> lbu $3,0($5)<br /> addu $2,$2,$3<br /> sb $2,0($4)</pre> <p>The <tt>sb</tt> instruction does not care about any "trash" in the upper 24 bits, those bits simply won't be stored. This is not a cherry-picked case. Even if there were more arithmetic operations, in most cases the "trash" in the upper bits could safely be left there, being mostly isolated from the bits of interest by the fact that carries only propagate from the least significant bit up, never down. For example, let's throw in a multiplication by 42 and a shift-left by 3 as well for good measure:</p> <pre> lbu $6,0($4)<br /> li $3,42<br /> lbu $2,0($5)<br /> mul $5,$3,$6<br /> addu $2,$5,$2<br /> sll $2,$2,3<br /> sb $2,0($4)</pre> <p>What is true is that before some operations, the trash in the upper bits must be cleared. For example before a division, shift-right, or comparison. 
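</p><p>To make that concrete, here is a small C sketch (mine, not from the post) comparing a version that narrows after every step with one that leaves the trash bits in place until the final narrowing, plus an example where a shift-right forces the trash to be cleared first:</p>

```c
#include <stdint.h>

// Narrow after every step, as if a mask were inserted each time.
uint8_t narrow_every_step(uint8_t a, uint8_t b)
{
    uint8_t product = (uint8_t)(a * 42);
    uint8_t sum = (uint8_t)(product + b);
    return (uint8_t)(sum << 3);
}

// Leave the trash bits alone; only the final narrowing discards them.
uint8_t narrow_at_end(uint8_t a, uint8_t b)
{
    return (uint8_t)((a * 42 + b) << 3);
}

// Shift-right is different: here the carry out of bit 7 leaks into the result.
uint8_t half_sum_trashed(uint8_t a, uint8_t b)
{
    return (uint8_t)((a + b) >> 1);
}

// Clearing the trash before the shift gives the true narrow result.
uint8_t half_sum_narrow(uint8_t a, uint8_t b)
{
    return (uint8_t)((uint8_t)(a + b) >> 1);
}
```

<p>For all byte inputs the first two functions agree, while the last two differ whenever the addition carries out of bit 7: division, shift-right and comparison need the upper bits cleaned first.</p><p>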
That is not an exhaustive list, but the list of operations that require the upper bits to be clean is shorter (and has less "total frequency") than the list of operations that do not require that; see for example <a href="https://stackoverflow.com/q/34377711/555045">which 2's complement integer operations can be used without zeroing high bits in the inputs, if only the low part of the result is wanted?</a> "Before some operations" is not the same thing as "after every operation", but that still sounds like an additional cost. However, the trash-clearing operations that a compiler for a language that lacks integer promotion would have to insert are not <i>additional</i> operations: they are the same ones that a programmer would write explicitly in a language with integer promotion.</p> <p>It may be possible to construct a contrived case in which a human would know that the high bits of an integer are clean, while a compiler would struggle to infer that. For example, a compiler may have more trouble reasoning about bits that were cancelled by an XOR, or worse, by a multiplication. Such cases are not likely to be the reason behind the myth. A more likely reason is that many programmers are not as familiar with basic integer arithmetic as they perhaps ought to be.</p> <h2>So integer promotion is useless?</h2> <p>Integer promotion may prevent accidental use of narrow operations where wide operations were intended; whether that is worth it is another question. All I wanted to say with this post is that "the CPU prefers to work at its native word size" is a bogus argument. 
Even when it is true, it is irrelevant.</p></div>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-42880317106887453982021-06-09T21:50:00.004-07:002021-11-17T07:43:06.567-08:00Partial sums of blsi and blsmsk<div style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"> <p><a href="https://www.felixcloutier.com/x86/blsi">blsi</a> is an x86 operation which extracts the rightmost set bit from a number; it can be implemented efficiently in terms of more familiar operations as <tt>i&-i</tt>. <a href="https://www.felixcloutier.com/x86/blsmsk">blsmsk</a> is a closely related operation which extracts the rightmost set bit and also "smears" it right, setting the bits to the right of that bit as well. <tt>blsmsk</tt> can be implemented as <tt>i^(i-1)</tt>. The smearing makes the result of <tt>blsmsk</tt> almost (but not quite) twice as high as the result of <tt>blsi</tt> for the same input: <tt>blsmsk(i) = floor(blsi(i) * (2 - ε))</tt>.</p> <p>The partial sums of <tt>blsi</tt> and <tt>blsmsk</tt> can be defined as <tt>b(n) = sum(i=1, n, blsi(i))</tt> and <tt>a(n) = sum(i=1, n, blsmsk(i))</tt> respectively. These sequences are on OEIS as <a href="https://oeis.org/A006520">A006520</a> and <a href="https://oeis.org/A080277">A080277</a>. Direct evaluation of those definitions would be inefficient; is there a better way?</p> <h3>Unrecursing the recursive definition</h3> <p>Let's look at the partial sums of <tt>blsmsk</tt> first. 
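</p><p>Written out in C (a sketch of mine, using unsigned arithmetic to keep the bit-twiddling well-defined), the two operations and the naive partial sums look like this:</p>

```c
#include <stdint.h>

// blsi: extract the lowest set bit.
uint32_t blsi(uint32_t i) { return i & (0u - i); }

// blsmsk: the lowest set bit, smeared right.
uint32_t blsmsk(uint32_t i) { return i ^ (i - 1); }

// Naive partial sum of blsi, straight from the definition: O(n) time.
uint32_t b_naive(uint32_t n)
{
    uint32_t sum = 0;
    for (uint32_t i = 1; i <= n; i++) sum += blsi(i);
    return sum;
}

// Naive partial sum of blsmsk, also O(n) time.
uint32_t a_naive(uint32_t n)
{
    uint32_t sum = 0;
    for (uint32_t i = 1; i <= n; i++) sum += blsmsk(i);
    return sum;
}
```

<p>Direct evaluation like this takes time linear in <tt>n</tt>; the rest of this section is about doing better for the partial sums of <tt>blsmsk</tt>.</p><p>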
Its OEIS entry suggests the recursion below, which is already significantly better than the naive summation:</p> <p><tt>a(1) = 1, a(2*n) = 2*a(n) + 2*n, a(2*n+1) = 2*a(n) + 2*n + 1</tt></p> <p>A "more code-ish"/"less mathy" way to implement that could be like this:</p> <pre>int PartialSumBLSMSK(int i) {<br /> if (i == 1)<br /> return 1;<br /> return (PartialSumBLSMSK(i >> 1) << 1) + i;<br />}</pre> <p>Let's understand what this actually computes, and then find another way to do it. The overall big picture of the recursion is that <tt>n</tt> is being shifted right on the way "down", and the results of the recursive calls are being shifted left on the way "up", in a way that cancels each other. So in total, what happens is that a bunch of "copies" of <tt>n</tt> are added up, except that at the kth step of the recursion, the kth bit of <tt>n</tt> is reset.</p> <p>This non-tail recursion can be turned into tail-recursion using a standard trick: turning the "sum on the way up"-logic into "sum on the way down", by passing two accumulators in extra arguments, one to keep track of the sum, and another to keep track of how much to multiply <tt>i</tt> by:</p> <pre>int PartialSumBLSMSKTail(int i, int accum_a, int accum_m) {<br /> if (i <= 1)<br /> return accum_a + i * accum_m;<br /> return PartialSumBLSMSKTail(i >> 1, accum_a + i * accum_m, accum_m * 2);<br />}</pre> <p>Such a tail-recursive function is then simple to transform into a loop. Shockingly, GCC (even quite old versions) manages to compile <a href="https://godbolt.org/z/jTjz4vbG6">the original non-tail recursive function into a loop as well</a> without much help (just changing the left-shift into the equivalent multiplication), although some details differ.</p> <p>Anyway, let's put in a value and see what happens. 
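</p><p>Spelled out as an explicit loop (a sketch of mine, with a hypothetical name), the tail-recursive version becomes:</p>

```c
// Iterative equivalent of the tail-recursive version: accum_m doubles
// while i shrinks, so the kth step adds i with its low k bits cleared.
int PartialSumBLSMSKLoop(int i)
{
    int accum_a = 0;
    int accum_m = 1;
    while (i > 1) {
        accum_a += i * accum_m;
        i >>= 1;
        accum_m *= 2;
    }
    return accum_a + i * accum_m;
}
```

<p>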
For example, if the input was <tt>11101001</tt>, then the following numbers would be added up:</p> <pre>11101001 (reset bit 0)<br />11101000 (reset bit 1, it's already 0)<br />11101000 (reset bit 2)<br />11101000 (reset bit 3)<br />11100000 (reset bit 4)<br />11100000 (reset bit 5)<br />11000000 (reset bit 6)<br />10000000 (base case)</pre> <p>Look at the columns of the matrix of numbers above: the column for bit 3 has four ones in it, the column for bit 5 has six ones in it. The pattern is that if bit <tt>k</tt> is set in <tt>n</tt>, then that bit is set in <tt>k+1</tt> rows.</p> <h3>Using the pattern</h3> <p>Essentially, what that pattern means is that <tt>a(n)</tt> can be expressed as the dot-product between <tt>n</tt> viewed as a vector of bits (weighted according to their position) (𝓷), and a constant vector (𝓬) with entries 1, 2, 3, 4, etc, up to the size of the integer. For example for <tt>n=5</tt>, 𝓷 would be (1, 0, 4), and the dot-product with 𝓬 would be 13. A dot-product like that can be implemented with some bitwise trickery, by using bit-slicing. The trick there is that instead of multiplying the entries of 𝓷 by the entries of 𝓬 directly, we multiply the entries of 𝓷 by the least-significant bits of the entries of 𝓬, and then separately multiply it by all the second bits of the entries of 𝓬, and so on. Multiplying every entry of 𝓷 at once by a bit of an entry of 𝓬 can be implemented using just a bitwise-AND operation.</p> <p>Although this trick lends itself well to any vector 𝓬, I will use 0,1,2,3.. and add an extra <tt>n</tt> separately (this corresponds to factoring out the +1 that appears at the end of the recursive definition), because that way part of the code can be reused directly by the solution of the partial sums of <tt>blsi</tt> (and also because it looks nicer). The masks that correspond to the chosen vector 𝓬 are easy to compute: each <i>column</i> across the masks is an entry of that vector. 
In this case, for 32-bit integers:</p> <pre>c0 10101010101010101010101010101010<br />c1 11001100110011001100110011001100<br />c2 11110000111100001111000011110000<br />c3 11111111000000001111111100000000<br />c4 11111111111111110000000000000000</pre> <p>The whole function could look like this:</p> <pre>int PartialSumBLSMSK(int n)<br />{<br /> int sum = n;<br /> sum += (n & 0xAAAAAAAA);<br /> sum += (n & 0xCCCCCCCC) << 1;<br /> sum += (n & 0xF0F0F0F0) << 2;<br /> sum += (n & 0xFF00FF00) << 3;<br /> sum += (n & 0xFFFF0000) << 4;<br /> return sum;<br />}</pre> <p><tt>PartialSumBLSI</tt> works almost the same way, with its recursive formula being <tt style="white-space:nowrap;">b(1) = 1, b(2n) = 2b(n) + n, b(2n+1) = 2b(n) + n + 1</tt>. The +1 can be factored out as before, and the other part (<tt>n</tt> instead of <tt>2*n</tt>) is exactly half of what it was before. Dividing 𝓬 in half seems like a problem, but it can be done implicitly by shifting the bit-slices of the product to the right by 1 bit. There are no problems with bits being lost that way, because the least significant bit is always zero in this case (𝓬 has zero as its first element).</p> <pre>int PartialSumBLSI(int n)<br />{<br /> int sum = n;<br /> sum += (n & 0xAAAAAAAA) >> 1;<br /> sum += (n & 0xCCCCCCCC);<br /> sum += (n & 0xF0F0F0F0) << 1;<br /> sum += (n & 0xFF00FF00) << 2;<br /> sum += (n & 0xFFFF0000) << 3;<br /> return sum;<br />}</pre> <h3>Wrapping up</h3> <p>The particular set of constants I used is very useful and appears in more tricks, such as <a href="https://branchfree.org/2018/05/22/bits-to-indexes-in-bmi2-and-avx-512/">collecting indexes of set bits</a>. 
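</p><p>As a sanity check, both closed forms can be compared against direct summation; a quick C sketch of such a check (mine, not from the post):</p>

```c
#include <stdint.h>

// The two closed forms from above.
static int PartialSumBLSMSK(int n)
{
    int sum = n;
    sum += (n & 0xAAAAAAAA);
    sum += (n & 0xCCCCCCCC) << 1;
    sum += (n & 0xF0F0F0F0) << 2;
    sum += (n & 0xFF00FF00) << 3;
    sum += (n & 0xFFFF0000) << 4;
    return sum;
}

static int PartialSumBLSI(int n)
{
    int sum = n;
    sum += (n & 0xAAAAAAAA) >> 1;
    sum += (n & 0xCCCCCCCC);
    sum += (n & 0xF0F0F0F0) << 1;
    sum += (n & 0xFF00FF00) << 2;
    sum += (n & 0xFFFF0000) << 3;
    return sum;
}

// O(n) reference implementations, straight from the definitions.
static int SumBLSMSKNaive(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++) sum += i ^ (i - 1);
    return sum;
}

static int SumBLSINaive(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++) sum += i & -i;
    return sum;
}
```

<p>Both closed forms agree with direct summation; the constants involved are 0xAAAAAAAA, 0xCCCCCCCC, 0xF0F0F0F0, 0xFF00FF00 and 0xFFFF0000.</p><p>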
They are the bitwise complements of a set of masks that Knuth (in The Art of Computer Programming volume 4, section 7.1.3) calls "magic masks", labeled µ<sub>k</sub>.</p> <p>This post was inspired by <a href="https://stackoverflow.com/q/67854074/555045">this question on Stack Overflow</a> and partially based on my own answer and the answer of Eric Postpischil, without which I probably would not have come up with any of this, although I used a different derivation and explanation for this post.</p></div>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-48282986168984613092020-09-26T06:27:00.006-07:002021-11-17T07:43:19.426-08:00The range a sum cannot fall within<style type="text/css">.nobr { white-space: nowrap; } </style> <div style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"> <p>Throughout this post, the variables <tt>A</tt>, <tt>B</tt>, and <tt>R</tt> are used, with R defined as <tt>R = A + B</tt>, and <tt class="nobr">A ≤ B</tt>. Arithmetic in this post is unsigned and modulo 2<sup>k</sup>. Note that <tt>A ≤ B</tt> is not a restriction on the input; it is a choice to label the smaller input as <tt>A</tt> and the larger input as <tt>B</tt>. Addition is commutative, so this choice can be made without loss of generality.</p> <h2><tt>R < A || R ≥ B</tt></h2> <p>The sum is less than A <i>iff</i> the addition wraps (1); otherwise it has to be at least B (2).</p> <ol> <li><tt>B</tt> cannot be so high that the addition can wrap all the way up to or past <tt>A</tt>. To make <tt>A + B</tt> add up to <tt>A</tt>, <tt>B</tt> would have had to be 2<sup>k</sup>, which is one beyond the maximum value it can be. 
<tt class="nobr">R = A</tt> is possible only if <tt>B</tt> is zero, in which case <tt>R ≥ B</tt> holds instead.</li> <li>Since <tt>A</tt> is at least zero, in the absence of wrapping there is no way to reduce the value below the inputs.</li> </ol> <p>Perhaps that all looks obvious, but this has a useful application: if the carry-out of the addition is not available, it can be computed via <tt class="nobr">carry = (x + y) < x</tt>, which is a relatively well-known trick. It does not matter which of <tt>x</tt> or <tt>y</tt> is the smaller or larger input, the sum cannot fall within the "forbidden zone" between them. The occasionally seen <tt class="nobr">carry = (x + y) < max(x, y)</tt> adds an unnecessary complication.</p> <h2><tt>R < (A & B) || R ≥ (A | B)</tt></h2> <p>This is a stronger statement, because <tt class="nobr">A & B</tt> is usually smaller than <tt>A</tt> and <tt class="nobr">A | B</tt> is usually greater than <tt>B</tt>.</p> <p>If no wrapping occurs, then <tt class="nobr">R ≥ (A | B)</tt>. This can be seen for example by splitting the addition into a XOR and adding the carries separately, <tt class="nobr">(A + B) = (A ^ B) + (A & B) * 2</tt>, while bitwise OR can be decomposed similarly into <tt class="nobr">(A | B) = (A ^ B) + (A & B)</tt><sup>(see below)</sup>. Since there is no wrapping (by assumption), <tt class="nobr">(A & B) * 2 ≥ (A & B)</tt> and therefore <tt class="nobr">(A + B) ≥ (A | B)</tt>. Or, with less algebra: addition sometimes produces a zero where the bitwise OR produces a one, but then addition compensates doubly for it by carrying into the next position.</p> <p>For the case in which wrapping occurs I will take a bit-by-bit view. In order to wrap, the carry out of bit <tt class="nobr">k-1</tt> must be 1. In order for the sum to be greater than or equal to <tt class="nobr">A & B</tt>, bit <tt class="nobr">k-1</tt> of the sum must be greater than or equal to bit <tt class="nobr">k-1</tt> of <tt class="nobr">A & B</tt>. 
That combination means that the carry <i>into</i> bit <tt class="nobr">k-1</tt> of the sum must have been 1 as well. Furthermore, bit <tt class="nobr">k-1</tt> of the sum can't be greater than bit <tt class="nobr">k-1</tt> of <tt class="nobr">A & B</tt>, at most it can be equal, which means bit <tt class="nobr">k-2</tt> must be examined as well. The same argument applies to bit <tt class="nobr">k-2</tt> and so on, until finally for the least-significant bit it becomes impossible for it to be carried into, so the whole thing falls down: by contradiction, <tt class="nobr">A + B</tt> must be less than <tt class="nobr">A & B</tt> when the sum wraps.</p> <h2>What about <tt class="nobr">(A | B) = (A ^ B) + (A & B)</tt> though?</h2> <p>The more obvious version is <tt class="nobr">(A | B) = (A ^ B) | (A & B)</tt>, compensating for the bits reset by the XOR by ORing exactly those bits back in. Adding them back in also works, because the set bits in <tt class="nobr">A ^ B</tt> and <tt class="nobr">A & B</tt> are <i>disjoint</i>: a bit being set in the XOR means that exactly one of the input bits was set, which makes their AND zero.</p></div>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-86301127161752818652020-08-03T06:22:00.001-07:002020-08-03T06:24:30.687-07:00Why does AND distribute over XOR<style>blockquote { background: #f9f9f9; border-left: 10px solid #ccc; margin: 1.5em 10px; padding: 0.5em 10px; quotes: "\201C""\201D""\2018""\2019"; } blockquote:before { color: #ccc; content: open-quote; font-size: 4em; line-height: 0.1em; margin-right: 0.25em; vertical-align: -0.4em; } blockquote p { display: inline; } q { background: #f9f9f9; } </style> <div style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><p> AND distributes over XOR, unsurprisingly both from the left and right, that is: </p><pre>x & y ^ z & y == (x ^ z) & y<br />x & y ^ x & z == x & (y ^ z)<br />a & c ^ a & d ^ b & 
c ^ b & d == (a ^ b) & (c ^ d)</pre> A somewhat popular explanation for <i>why</i> is, <blockquote cite="https://en.wikipedia.org/wiki/Exclusive_or#Properties">Conjunction and exclusive or form the multiplication and addition operations of a field GF(2), and as in any field they obey the distributive law. <footer><a href="https://en.wikipedia.org/wiki/Exclusive_or#Properties">Wikipedia: Exclusive or#Properties</a></footer></blockquote><p> Which is true and a useful way to think about it, but it is also the type of backwards explanation that relies on a concept that is more advanced than the thing which is being explained. </p><h2>Diagrams with crossing lines</h2><p> Let's represent an expression such as <tt>a & c ^ a & d ^ b & c ^ b & d</tt> by putting the variables on the left of every AND along the top of a grid, and the variables on the right of every AND along the side. Then for example the grid cell on the intersection between the column of <tt>a</tt> and the row of <tt>c</tt> corresponds to the term <tt>a & c</tt>. Further, let's draw lines for variables that are True, in this example all variables are True: </p><img src="https://i.imgur.com/rvAKOhI.png" width="250"/> <p> The overall expression <tt>a & c ^ a & d ^ b & c ^ b & d</tt> counts the number of crossings, modulo 2. Rather than counting the crossings one by one, the number of crossings could be computed by counting how many variables along the top are True, how many along the side are True, and taking the product, again modulo 2. A sum modulo 2 is XOR and a product modulo 2 is AND, so this gives the equivalent expression <tt>(a ^ b) & (c ^ d)</tt>. </p> <p> The simpler cases <tt>x & y ^ z & y</tt> and <tt>x & y ^ x & z</tt> correspond to 1x2 and 2x1 diagrams. </p><h2>Diagrams with bites taken out of them</h2> <p> Such a diagram with a section of it missing can be dealt with by completing the grid and subtracting the difference. 
For example the unwieldy <tt>a & e ^ a & f ^ a & g ^ a & h ^ b & e ^ b & f ^ b & g ^ b & h ^ c & e ^ c & f ^ d & e ^ d & f</tt> (shown in the diagram below) is "incomplete", it misses the 2x2 square that corresponds to <tt>(c ^ d) & (g ^ h)</tt>. Completing the grid and subtracting the difference gives <tt>((a ^ b ^ c ^ d) & (e ^ f ^ g ^ h)) ^ ((c ^ d) & (g ^ h))</tt>, which <a href="http://haroldbot.nl/?q=a+%26+e+%5E+a+%26+f+%5E+a+%26+g+%5E+a+%26+h+%5E+b+%26+e+%5E+b+%26+f+%5E+b+%26+g+%5E+b+%26+h+%5E+c+%26+e+%5E+c+%26+f+%5E+d+%26+e+%5E+d+%26+f+%3D%3D+%28%28a+%5E+b+%5E+c+%5E+d%29+%26+%28e+%5E+f+%5E+g+%5E+h%29%29+%5E+%28%28c+%5E+d%29+%26+%28g+%5E+h%29%29">is correct</a>. </p> <img src="https://i.imgur.com/SfIxt6C.png" width="250"/> <p> This all has a clear connection to the FOIL method and its generalizations, after all <q>conjunction and exclusive or form the multiplication and addition operations of a field GF(2)</q>. </p> <p> The same diagrams also show why AND distributes over OR (the normal, inclusive, OR), which could alternatively be explained in terms of the <a href="https://en.wikipedia.org/wiki/Two-element_Boolean_algebra">Boolean semiring</a>. 
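</p><p>Since AND and XOR act bitwise, these identities can also be verified exhaustively over small inputs; a small C sketch of such a check (mine, not from the post):</p>

```c
// Check the distributive identities for all 4-bit values; since AND and
// XOR operate bitwise and independently per bit, 4 bits per variable
// already exercises every per-bit combination many times over.
int distributivity_holds(void)
{
    for (unsigned a = 0; a < 16; a++)
        for (unsigned b = 0; b < 16; b++)
            for (unsigned c = 0; c < 16; c++)
                for (unsigned d = 0; d < 16; d++) {
                    if (((a & c) ^ (a & d) ^ (b & c) ^ (b & d)) != ((a ^ b) & (c ^ d)))
                        return 0;
                    if (((a & c) ^ (b & c)) != ((a ^ b) & c))
                        return 0;
                }
    return 1;
}
```

<p>haroldbot can do the same kind of verification, as linked above for the larger identity.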
</p></div>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-36711275019684038902020-05-03T16:28:00.000-07:002020-05-03T16:42:18.175-07:00Information on incrementation<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 100 } }); </script><span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><h2>Defining <tt>increment</tt></h2><p>Just to avoid any confusion, the operation that this post is about is adding 1 (one) to a value: $$\text{increment}(x) = x + 1$$ Specifically, performing that operation in the domain of bit-vectors.</p><p>Incrementing is very closely related to <a href="https://bitmath.blogspot.com/2017/12/notes-on-negation.html">negating</a>. After all, <tt style="white-space: nowrap">-x = ~x + 1</tt> and therefore <tt style="white-space: nowrap">x + 1 = -~x</tt>, though putting it that way feels oddly reversed to me.</p><h2>Bit-string notation</h2><p>In bit-string notation (useful for analysing compositions of operations at the bit level), increment can be represented as: $$a01^k + 1 = a10^k$$</p><p>An "English" interpretation of that form is that an increment carries through the trailing set bits, turning them to zero, and then carries into the right-most unset bit, setting it.</p><p>That "do something special with the right-most unset bit" aspect of increment is the basis for various <a href="http://programming.sirrida.de/programming.html#rightmost_bits">right-most bit manipulations</a>, some of which were implemented in <a href="https://en.wikipedia.org/wiki/Bit_Manipulation_Instruction_Sets#TBM_(Trailing_Bit_Manipulation)">AMD Trailing Bit Manipulation (TBM)</a> (which has been discontinued).</p><p>For example, the right-most unset bit in <tt>x</tt> can be set using <tt style="white-space: nowrap">x
| (x + 1)</tt>, which has a nice symmetry with the more widely known trick for unsetting the right-most set bit, <tt style="white-space: nowrap">x & (x - 1)</tt>.</p><h2>Increment by XOR</h2><p>As was the case with negation, there is a way to define increment in terms of XOR. The bits that flip during an increment are all the trailing set bits and the right-most unset bit, the TBM instruction for which is <tt>BLCMSK</tt>. While that probably does not seem very useful yet, the fact that <tt style="white-space: nowrap">x ^ (x + 1)</tt> takes the form of some number of leading zeroes followed by some number of trailing ones turns out to be useful.</p><p>Suppose one wants to increment a bit-reversed integer. A possible (and commonly seen) approach is looping over the bits from top to bottom and implementing the "carry through the ones, into the first zero" logic by hand. However, if the non-reversed value was <i>also</i> available (let's call it <tt>i</tt>), the bit-reversed increment could be implemented by calculating the number of ones in the mask as <tt style="white-space: nowrap">tzcnt(i + 1) + 1</tt> (or <tt style="white-space: nowrap">popcnt(i ^ (i + 1))</tt>) and forming a mask with that number of ones located at the desired place within an integer: <pre>// i = normal counter<br />// rev = bit-reversed counter<br />// N = 1 << number_of_bits<br />int maskLen = tzcnt(i + 1) + 1;<br />rev ^= N - (N >> maskLen);</pre>That may still not seem useful, but this enables an implementation of the <a href="https://en.wikipedia.org/wiki/Bit-reversal_permutation">bit-reversal permutation</a> (not a bit-reversal itself, but the permutation that results from bit-reversing the indices). The bit-reversal permutation is sometimes used to re-order the result of a non-auto-sorting Fast Fourier Transform algorithm into the "natural" order. 
For example, <pre>// X = array of data<br />// N = length of X, power of two<br />for (uint32_t i = 0, rev = 0; i < N; ++i)<br />{<br /> if (i < rev)<br /> swap(X[i], X[rev]);<br /> int maskLen = tzcnt(i + 1) + 1;<br /> rev ^= N - (N >> maskLen);<br />}</pre>This makes no special effort to be cache-efficient.</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-64281581720303151482019-10-10T07:43:00.000-07:002019-10-10T08:16:08.742-07:00Square root of bitwise NOT<p style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif;"><p>The square root of bitwise NOT, if it exists, would be some function <i>f</i> such that <i>f(f(x)) = not x</i>, or in other words, <i>f²(x) = not x</i>. It is similar in concept to the <a href="https://en.wikipedia.org/wiki/Quantum_logic_gate#Square_root_of_NOT_gate_(%E2%88%9ANOT)">√NOT gate in Quantum Computing</a>, but in a different domain which makes the solution very different.</p> <p>Before trying to find any specific <i>f</i>, it may be interesting to wonder what properties it would have to have (and lack). <ul> <li><i>f</i> must be bijective, because its square is bijective.</li> <li><i>f²</i> is an involution but <i>f</i> cannot be an involution, because its square would then be the identity.</li> <li><i>f</i> viewed as a permutation (which can be done, because it has to be bijective) must be a <a href="https://en.wikipedia.org/wiki/Derangement">derangement</a>: if it had any fixed point then that would also be a fixed point in <i>f²</i> and the <i>not</i> function does not have a fixed point.</li></ul></p> <p><h2>Does <i>f</i> exist?</h2>In general, a permutation has a square root if and only if the number of cycles of each even length is even. The <i>not</i> function, being an involution, can only consist of swaps and fixed points, and we already knew it has no fixed points so it must consist of only swaps. 
A swap is a cycle of length 2, so an even length. Since the <i>not</i> function operates on <i>k</i> bits, the size of its domain is a power of two, <i>2<sup>k</sup></i>. That almost always guarantees an even number of swaps, except when <i>k = 1</i>. So, the <i>not</i> function on a single bit has no square root, but for more than 1 bit there are solutions.</p> <h2><i>f</i> for even k</h2><p>For 2 bits, the <i>not</i> function is the permutation <span>(0 3) (1 2)</span>. An even number of even-length cycles, as predicted. The square root can be found by interleaving the cycles, giving (0 1 3 2) or (1 0 2 3). In bits, the first looks like:</p><tt><table><tr><td>in</td><td>out</td></tr><tr><td>00</td><td>01</td></tr><tr><td>01</td><td>11</td></tr><tr><td>10</td><td>00</td></tr><tr><td>11</td><td>10</td></tr></table></tt><p>Which corresponds to swapping the bits and then inverting the lsb; the other variant corresponds to inverting the lsb first and then swapping the bits.</p> <p>That solution can be applied directly to other even numbers of bits, swapping the even and odd bits and then inverting the even bits, but the square root is not unique and there are multiple variants. The solution can be generalized a bit, combining a step that inverts half of the bits with a permutation that brings each half of the bits into the positions that are inverted when it is applied twice, so that half the bits are inverted the first time and the <i>other</i> half of the bits are inverted the second time. For example for 32 bits, there is a nice solution in x86 assembly: <pre style="background-color: #EBECE4">bswap eax<br />xor eax, 0xFFFF<br /></pre></p><h2><i>f</i> for odd k</h2><p>Odd k makes things less easy. Consider k=3, so (0 7) (1 6) (2 5) (3 4). 
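</p><p>(As an aside, the 32-bit solution above is easy to check mechanically; here is a sketch of mine relying on the GCC/Clang builtin <tt>__builtin_bswap32</tt>, which is an assumption about the compiler, not something from the original post:)</p>

```c
#include <stdint.h>

// bswap swaps the two 16-bit halves (byte-reversing within each half);
// XOR with 0xFFFF then inverts the low half. Applying the function twice
// restores the byte order and inverts both halves: f(f(x)) == ~x.
uint32_t sqrt_of_not(uint32_t x)
{
    return __builtin_bswap32(x) ^ 0xFFFF;
}
```

<p>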
There are different ways to pair up and interleave the cycles, leading to several distinct square roots: <ol> <li>(0 1 7 6) (2 3 5 4)</li> <li>(0 2 7 5) (1 3 6 4)</li> <li>(0 3 7 4) (1 2 6 5)</li> <li>etc..</li></ol></p><tt><table><tr><td>in</td><td>1</td><td>2</td><td>3</td></tr><tr><td>000</td><td>001</td><td>010</td><td>011</td></tr><tr><td>001</td><td>111</td><td>011</td><td>010</td></tr><tr><td>010</td><td>011</td><td>111</td><td>110</td></tr><tr><td>011</td><td>101</td><td>110</td><td>111</td></tr><tr><td>100</td><td>010</td><td>001</td><td>000</td></tr><tr><td>101</td><td>100</td><td>000</td><td>001</td></tr><tr><td>110</td><td>000</td><td>100</td><td>101</td></tr><tr><td>111</td><td>110</td><td>101</td><td>100</td></tr></table></tt><p>These correspond to slightly tricky functions; for example, the first one has as its three output bits, from lsb to msb: the msb but inverted, the parity of the input, and finally the lsb. The other ones also incorporate the parity of the input in some way.</p></p>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-60850388386975823762018-10-03T03:39:00.000-07:002018-10-03T03:52:47.090-07:00abs and its "extra" result<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 100 } }); </script><span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><p>The <tt>abs</tt> function has, in its usual (most useful) formulation, one extra value in its codomain beyond just "all non-negative values". That extra value is the most negative integer, which satisfies <tt>abs(x) == x</tt> despite being negative. 
Even accepting that the absolute value of the most negative integer is itself, it may still seem strange (for an operation that is supposed to have such a nice symmetry) that the size of the codomain is not exactly half of the size of the domain.</p><p>That there is an "extra" value in the codomain, and that it is specifically the most negative integer, may be more intuitively obvious when the action of <tt>abs</tt> on the number <s>line</s> circle is depicted as "folding" the circle symmetrically in half across the center and through zero (around which <tt>abs</tt> is supposed to be symmetric), folding the negative numbers onto the corresponding positive numbers: </br><img border="0" src="https://1.bp.blogspot.com/-cymRZhXigG8/W7SWlsqpvjI/AAAAAAAAAJs/zHDYKevIwfcy1x81hv9zqB6aTdVb8MM_ACLcBGAs/s1600/number%2Bcircle%2Bmirror.png"/></p><p>Clearly both zero and the most negative integer (which is also on the "folding line") stay in place in such a folding operation and remain part of the resulting half-circle. That there is an "extra" value in the codomain is the usual fencepost effect: the resulting half-circle is half the size of the original circle in some sense, but the "folding line" cuts through two points that have now become endpoints.</p><p>By the way the "ones' complement alternative" to the usual <tt>abs</tt>, let's call it <tt>OnesAbs(x) = x < 0 ? ~x : x</tt> (there is a nice branch-free formulation too) <i>does</i> have a codomain with a size exactly half of the size of its domain. The possible results are exactly the non-negative values. It has to pay for that by, well, not being the usual <tt>abs</tt>. 
The "folding line" for <tt>OnesAbs</tt> runs <i>between</i> points, avoiding the fencepost issue:</br><img border="0" src="https://4.bp.blogspot.com/-rHHJNzlzsYE/W7SbubkyAJI/AAAAAAAAAJ4/l6IAS_yVEv4ZZSlpCBHiU4mVHKAy2zAtACLcBGAs/s1600/number%2Bcircle%2Bmirror%2Bcpl.png" /></p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-18537902208846599142018-08-21T13:06:00.000-07:002018-08-21T13:18:11.594-07:00Signed wrapping is meaningful and algebraically nice<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><div><ul> <li><a href="#c1">Signed wrapping is not wrong</a></li> <li><a href="#c2">Signed wrapping is meaningful</a></li> <li><a href="#c3">Signed wrapping is not inherent</a></li> <li><a href="#c4">Signed wrapping is algebraically nice</a></li></ul></div><p>In this post I defend wrapping, in a somewhat more opinionated tone than my other posts. As usual I'm writing from the perspective that signed and unsigned integer types are a thin transparent wrapper around bit vectors; of course I am aware that they are often not used that way. That difference between their use and their actual nature is probably the source of the problems.</p><a name="c1"></a><h2>Signed wrapping is not wrong</h2><p>It is often said that when signed wraparound occurs, the result is simply wrong. That is an especially narrow view to take, probably inspired by treating fixed-size bit vector arithmetic as if it is arithmetic in ℤ, which it is not. Bit vector arithmetic can be viewed as arithmetic in ℤ so long as no "overflow" occurs, but violating that condition does not make the result wrong; it makes the interpretation wrong. </p> <a name="c2"></a><h2>Signed wrapping is meaningful</h2><p>The wrapping works exactly the same as unsigned wrapping; it corresponds to taking the lowest k bits of the arbitrary precision result. 
Such a truncation therefore gives you exactly k meaningful bits; it's just a slice of the result. Some upper bits may be lost; they can be calculated if you need them. If the whole result is meaningful, then so is a part of it, at least under the interpretation of being "part of the result".</p><p>Another well-known example of benign wrapping is the calculation of the average of two non-negative signed integers. While <tt>(a + b) / 2</tt> gives inconvenient results when the addition "overflows", <tt>(uint)(a + b) / 2</tt> (using unsigned division) or <tt>(a + b) >>> 1</tt> (unsigned right shift as in Java) are correct even when the addition of two positive values results in a negative value. Another way to look at it is that there is no <i>unsigned</i> wrapping. Nominally the integers being added here are signed but that doesn't really matter. Casting the inputs to unsigned before adding them is a no-op that can be performed mentally.</p><p>Wrapping can also sometimes be cancelled with more wrapping. For example, taking an absolute value with wrapping and casting the result to an unsigned type of the same width results in the actual absolute value without the funny <tt>int.MinValue</tt> edge case:</p><pre>(uint)abs(int.MinValue) = <br />(uint)abs(-2147483648) =<br />(uint)(-2147483648) =<br />2147483648</pre><p>This is <i>not</i> what <a href="https://docs.microsoft.com/en-us/dotnet/api/system.math.abs?view=netframework-4.7.2#System_Math_Abs_System_Int32_"><tt>Math.Abs</tt></a> in C# does; it throws, perhaps inspired by its signed return type. On the other hand, Java's <a href="https://docs.oracle.com/javase/8/docs/api/java/lang/Math.html#abs-int-"><tt>Math.abs</tt></a> gets this right and leaves the reinterpretation up to the consumer of the result. Of course in Java there is no uint32 to cast to, but you can still treat that result <i>as if</i> it is unsigned. 
Such "manual reinterpretation" is in general central to integer arithmetic; it's really about the bits, not the "default meaning" of those bits.</p><p>The principle of cancelling wrapping also has some interesting data structure applications. For example, in a Fenwick tree or Summed Area Table, the required internal integer width is the desired integer width of any range/area-sum query that you actually want to make. So a SAT over signed bytes can use an internal width of 16 bits as long as you restrict queries to an area of 256 cells or fewer, since 256 * -128 = -2<sup>15</sup> which still fits a signed 16-bit word.</p><p>Another nice case of cancelled wrapping is strength reductions like <tt>A * 255 = (A << 8) - A</tt>. It is usually not necessary to do that manually, but that's not the point; the point is that the wrapping is not "destructive". The overall expression wraps if and only if <tt>A * 255</tt> wraps, and even then it has exactly the same result. There are cases in which the left shift experiences "signed wrapping" but <tt>A * 255</tt> does not (for example, in 32 bits, A = 0x00800000); in those cases the subtraction also wraps and brings the result back to being "unwrapped". That is neither a coincidence nor an instance of two wrongs making a right, it's a result of the intermediate wrapped result being meaningful and wrapping being algebraically nice.</p> <a name="c3"></a><h2>Signed wrapping is not inherent</h2><p>Signed and unsigned integers are two different ways to interpret bit vectors. Almost all operations have no specific signed or unsigned version, only a generic version that does both. There is no such thing as signed addition or unsigned addition, addition is just addition. 
Operations that are actually different are: <ul> <li>Comparisons except equality</li> <li>Division and remainder</li> <li>Right shift, maybe, but arithmetic right shift and logical right shift can both be reasonably applied in both signed and unsigned contexts</li> <li>Widening conversion</li> <li>Widening multiplication</li></ul>One thing almost all of these have in common is that they cannot overflow, except division of the smallest integer by negative one. By the way I regard that particular quirk of division as a mistake since it introduces an asymmetry between dividing by negative one and multiplying by negative one.</p><p>The result is that the operations that can "overflow" are neither signed nor unsigned, and therefore do not overflow specifically in either of those ways. If they can be said to overflow at all, when and how they do so depends on how they are being viewed by an outside entity, not on the operation itself.</p><p>The distinction between unsigned and signed wrapping is equivalent to imagining a "border" on the <a href="http://bitmath.blogspot.com/2017/08/visualizing-addition-subtraction-and.html">ring of integers</a> (not the mathematical Ring of Integers) either between 0 and -1 (unsigned) or between signed-smallest and signed-highest numbers, but <i>there is no border</i>. Crossing either of the imaginary borders does not mean nearly as much as many people think it means.</p> <a name="c4"></a><h2>Signed wrapping is algebraically nice</h2><p>A property that wrapping arithmetic shares with arbitrary precision integer arithmetic, but not with trapping arithmetic, is that it obeys a good number of desirable algebraic laws. The root cause of this is that ℤ/ℤ2<sup>k</sup> is a <a href="https://en.wikipedia.org/wiki/Ring_(mathematics)">ring</a>, and trapping arithmetic is infested with implicit conditional exceptions. 
Signed arithmetic can largely be described by ℤ/ℤ2<sup>k</sup>, like unsigned arithmetic, since it is mostly a reinterpretation of unsigned arithmetic. That description does not cover all operations or properties, but it covers the most important aspects.</p><p>Here is a small selection of laws that apply to wrapping arithmetic but not to trapping arithmetic: <ul> <li>-(-A) = A</li> <li>A + -A = 0</li> <li>A - B = A + -B</li> <li>A + (B + C) = (A + B) + C</li> <li>A * (B + C) = A * B + A * C</li> <li>A * -B = -A * B = -(A * B)</li> <li>A * (B * C) = (A * B) * C</li> <li>A * 15 = A * 16 - A</li> <li>A * multiplicative_inverse(A) = 1 (if A is odd; this is something not found in ℤ, which has only two trivially invertible numbers, so sometimes wrapping gives you a new useful property)</li></ul>Some laws also apply to trapping arithmetic: <ul> <li>A + 0 = A</li> <li>A - A = 0</li> <li>A * 0 = 0</li> <li>A * 1 = A</li> <li>A * -1 = -A</li> <li>-(-(-A)) = -A</li></ul></p><p>The presence of all the implicit exceptional control flow makes the code very hard to reason about, for humans as well as compilers.</p><p>Compilers react to that by not optimizing as much as they otherwise would, since they are forced to preserve the exception behaviour. Almost anything written in the source code must actually happen, and in the same order as originally written, just to preserve exceptions that are not even supposed to ever actually be triggered. The consequences of that are often seen in Swift, where code using the <tt>&+</tt> operator is optimized quite well (including auto-vectorization) and code using the unadorned <tt>+</tt> operator can be noticeably slower.</p><p>Humans probably don't truly want trapping arithmetic to begin with; what they want is to have their code checked for unintended wrapping. Wrapping is not a bug by itself, but <i>unintended</i> wrapping is. 
So while canceling a "bare" double negation is not algebraically justified in trapping arithmetic, a programmer will do it anyway since the goal is not to do trapping arithmetic, but to remove bad edge cases. Statically checking for unintended wrapping would be a more complete solution, no longer relying on being lucky enough to dynamically encounter every edge case. Arbitrary precision integers would just remove most edge cases altogether, though they would rely heavily on range propagation for performance, making it a bit fragile.</p><p>But anyway, wrapping is not so bad. Just often unintended.</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-44889289559536761212018-08-02T03:07:00.001-07:002019-01-07T13:28:56.721-08:00Implementing Euclidean division<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 100 } }); </script><span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><p>While implementing various kinds of division in <a href="http://haroldbot.nl">haroldbot</a>, I had to look up/work out how to implement different kinds of signed division in terms of unsigned division. 
The common truncated division (written as <tt>/s</tt> in this post and in haroldbot, <tt>/t</tt> in some other places) is the natural result of using your intuition from ℝ and writing the definition based on signs and absolute values, ensuring that the division only happens between non-negative numbers (making its meaning unambiguous) and that the result is an integer: $$\DeclareMathOperator{\sign}{sign} D /_s d = \sign(d)\cdot\sign(D)\cdot\left\lfloor\cfrac{\left|D\right|}{\left|d\right|}\right\rfloor$$ That definition leads to a plot like this, showing division by 3 as an example:<p><img src="https://4.bp.blogspot.com/-JL8nINlDDKM/W2K9YuzJUzI/AAAAAAAAAIw/DjpnpiTeR-ghkrwDcnHrsApkx69IHxIuwCPcBGAYYCw/s400/truncated_div_plot1.png" /><p/>Of course the absolute values and sign functions create symmetry around the origin, and that seems like a reasonable symmetry to have. But that little plateau around the origin often makes the mirror at the origin a kind of barrier that you can run into, leading to the well-documented downsides of truncated division.</p><p>The alternative floored division and Euclidean division have a different symmetry, which does not lead to that little plateau; instead the staircase pattern simply continues:<p><img src="https://1.bp.blogspot.com/-hAmcPQ7ZxrU/W2K9W92ncWI/AAAAAAAAAIs/BpwkdHCVv7Yr8RauK4fT1ya6oS5YdW-6QCPcBGAYYCw/s400/euclidean_div_plot1.png" /></p>The point of symmetry, marked by the red cross, is at (-0.5, -0.5). Flipping around -0.5 may remind you of bitwise complement, especially if you have read my earlier post <a href="http://bitmath.blogspot.com/2017/08/visualizing-addition-subtraction-and.html">visualizing addition, subtraction and bitwise complement</a>, and mirroring around -0.5 is no more than a conditional complement. 
So Euclidean division may be implemented with positive division as: $$\DeclareMathOperator{\sgn}{sgn} D /_e d = \sign(d)\cdot(\sgn(D)\oplus\left\lfloor\cfrac{D\oplus\sgn(D)}{\left|d\right|}\right\rfloor)$$ Where the <tt>sgn</tt> function is -1 for negative numbers and 0 otherwise, and the circled plus is XOR. XORing with the <tt>sgn</tt> is a conditional complement, with the inner XOR being responsible for the horizontal component of the symmetry and the outer XOR being responsible for the vertical component.</p><p>It would have been even nicer if the symmetry of the divisor also worked that way, but unfortunately that doesn't quite work out. For the divisor, the offset introduced by mirroring around -0.5 would affect the size of the steps of the staircase instead of just their position.</p><p>The <tt>/e</tt> and <tt>%e</tt> operators are available in haroldbot, though like all forms of division the general case is really too hard, even for the circuit-SAT based truth checker (the BDD engine stands no chance at all).</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-81380076835140745842017-12-16T11:04:00.000-08:002018-08-13T09:00:51.671-07:00Notes on negation<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 100 } }); </script><span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><h2>The well known formulas</h2><p>Most readers will be familiar with <tt>-x = ~x + 1 = ~(x - 1)</tt>. These are often just stated without justification, or even an explanation for why they are equivalent. 
There are some algebraic tricks, but I don't think they explain much, so I'll use the rings from <a href="http://bitmath.blogspot.com/2017/08/visualizing-addition-subtraction-and.html">visualizing addition, subtraction and bitwise complement</a>. <tt>~x + 1</tt>, in terms of such a ring, means "flip it, then draw a CCW arrow on it with a length of one tick". <tt>~(x - 1)</tt> means "draw a CW arrow with a length of one tick, then flip". Picking CCW first is arbitrary, but the point is that the direction is reversed because flipping the ring also flips the arrow if it is drawn first, but not if it is drawn second. Equivalently, instead of drawing an arrow you may rotate the ring around its center.</p><p>So they're equivalent, but why do they negate? The same effect also explains<br/> <tt>a - b = ~(~a + b)</tt>, which when you substitute <tt>a = 0</tt> almost directly gives <tt>-b = ~(b - 1)</tt>. Or using the difference between one's complement and proper negation as I pointed out in that visualization post: the axis of flipping is offset by half a tick, so the effect of flipping introduces a difference of 1 which can be removed by rotating by a tick.</p> <h2>Bit-string notation</h2><p>I first saw this notation in The Art of Computer Programming v4A, but it probably predates it. It provides a more "structural" view of negation: $$-(a10^k) =\; {\sim} (a10^k - 1) =\; {\sim} (a01^k) = ({\sim} a)10^k$$ Here juxtaposition is concatenation, and exponentiation is repetition and is done before concatenation. <tt>a</tt> is an arbitrary bit string that may be infinitely long. 
It does not really deal with the negation of zero, since the input is presumed to end in 10<sup>k</sup>, but the negation of zero is not very interesting anyway.</p><p>What this notation shows is that negation can be thought of as complementing everything to the left of the rightmost set bit, a property that is frequently useful when <a href="http://bitmath.blogspot.com/2012/09/the-basics-of-working-with-rightmost-bit.html">working with the rightmost bit</a>. A mask of the rightmost set bit and everything to the right of it can be found with <br/><tt>x ^ (x - 1)</tt> or, on a modern x86 processor, <tt>blsmsk</tt>. That leads to negation by XOR: $$-x = x\oplus {\sim}\text{blsmsk}(x)$$ which is sort of cheating since <tt>~blsmsk(x) = x ^ ~(x - 1) = x ^ -x</tt>, so this just says that <br/><tt>-x = x ^ x ^ -x</tt>. It may still be useful occasionally; for example, when a value of "known odd-ness" is being negated and then XORed with something, the negation can be merged into the XOR.</p><h2>Negation by MUX</h2><p>Using that mask from <tt>blsmsk</tt>, negation can be written as $$-x = \text{mux}(\text{blsmsk}(x), {\sim} x, x)$$ which combines with <a href="http://bitmath.blogspot.com/2017/12/bit-level-commutativity.html">bit-level commutativity</a> in some fun ways: <ul><li><tt>(~x + 1) + (x - 1) = mux(blsmsk(x), ~x, x) + mux(blsmsk(x), x, ~x) = ~x + x = -1</tt></li><li><tt>(~x + 1) | (x - 1) = ~x | x = -1</tt></li><li><tt>(~x + 1) ^ (x - 1) = ~x ^ x = -1</tt></li><li><tt>(~x + 1) & (x - 1) = ~x & x = 0</tt></li></ul>All of these have simpler explanations that don't involve bit-level commutativity, by rewriting them back in terms of negation. 
But I thought it was nice that it was possible this way too, because it makes it seem as though a +1 and a -1 on both sides of an OR, XOR or AND cancel out, which in general they definitely do not.</p><p>The formula that I've been using as an example for the proof-finder on <a href="http://haroldbot.nl/how.html">haroldbot.nl/how.html</a>, <br/><tt>(a & (a ^ a - 1)) | (~a & ~(a ^ a - 1)) == -a</tt>, is actually a negation-by-MUX, written using <tt>mux(m, x, y) = y & m | x & ~m</tt>.</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-29698167062253011102017-12-01T17:35:00.000-08:002017-12-01T17:39:53.568-08:00Bit-level commutativity<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 100 } }); </script><span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><p>By bit-level commutativity I mean that a binary operator has the property that swapping any subset of bits between the left and right operands does not change the result. The subset may be any old thing, so in general I will call an operator <tt>o</tt> bit-level commutative if it satisfies the following property $$\forall m,a,b: a \circ b = \text{mux}(m, a, b) \circ \text{mux}(m, b, a)$$ For example, by setting <tt>m = b</tt> we get <tt>a ⊗ b = (a & b) ⊗ (a | b)</tt>, sort of "bitwise sorting" the operands, with zeroes moved to the left operand and ones moved to the right operand (if possible).</p><p>Anyway, obviously AND, OR and XOR (and their complemented versions) are all bit-level commutative, indeed any purely bitwise operation (expressible as a vectorized function that takes two booleans as input) that is commutative is necessarily also bit-level commutative, for obvious reasons. 
Interestingly, addition is also bit-level commutative, which may be less obvious (at least in a recent coding competition, it seemed that people struggled with this). It may help to consider addition on a slightly more digit-by-digit level: $$ a + b = \sum_i 2^i (a_i + b_i)$$ It should be clear from the bit-level "exploded" sum, that the individual bits a<sub>i</sub> and b<sub>i</sub> can be either swapped or not, independently for any <tt>i</tt>. This should get more obvious the more you think about what representing a number in a positional numeral system even <i>means</i> in the first place: it was always a sum, so adding two numbers is like taking the sum of two "big" sums, of course it does not matter which of the big sums any particular contribution comes from.</p><p>Alternatively, the old <tt>a + b = (a ^ b) + 2(a & b)</tt> (ie computing bit-level sums and then adding the carries separately) can explain it: both XOR and AND are bit-level commutative, so the whole expression is, too.</p><p>Anyway, a consequence is that <tt>a + b = (a & b) + (a | b)</tt>, which I have more commonly seen derived as: <pre>a + b = (a ^ b) + 2(a & b) // add carries separately<br /> = (a | b) - (a & b) + 2(a & b) // see below<br /> = (a | b) + (a & b)<br /></pre>Where <tt>(a ^ b) = (a | b) - (a & b)</tt> can be explained as XOR being like OR, except that unlike OR it is 0 when both operands are set, so just subtract that case out. I always like having two (or more!) explanations from completely different directions like that.</p><p>Multiplication (including carryless multiplication and OR-multiplication) is of course <i>not</i> bit-level commutative. For example if one operand is zero and the other is odd and not 1, then the lowest bit could be swapped to make neither operand zero, and a non-zero result could be produced that way. 
Operations such as comparison and (by extension) min and max are obviously not bit-level commutative.</p><p>There is probably more to this; I may add some stuff later.</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-43768952816606238122017-08-06T21:47:00.002-07:002017-11-23T06:00:33.184-08:00Visualizing addition, subtraction and bitwise complement<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><p>A relatively well-known relation between addition and subtraction, besides the basic relations <tt>a - b = a + (-b)</tt> and <tt>a - b = -(-a + b)</tt>, is <tt>a - b = ~(~a + b)</tt>. But I suspect most people have simply accepted that as fact, or perhaps proved it from the 2's complement definition of negation. haroldbot can <a href="http://haroldbot.nl/?q=~%28~a+%2B+b%29+%3D%3D+a+-+b">do the latter</a>, though not as succinctly as I hoped.</p><p>But it also has a nice one-dimensional geometric interpretation, really analogous to the way <tt>a - b = -(-a + b)</tt> looks in ℤ.</p><p>As negation mirrors ℤ around the origin, complement mirrors the space of two's complement signed bitvectors around the "space" between -1 and 0. Clearly addition in the mirrored space corresponds to subtraction in the unmirrored space, so the obvious way to subtract is mirror, add, and mirror back. That's precisely what <tt>-(-a + b)</tt> does in ℤ and what <tt>~(~a + b)</tt> does for bitvectors. An observant reader may notice that I conveniently disregarded the finiteness of the number line of fixed-size bitvectors; that's actually not a problem but the visualization gets a bit trickier.</p><p>What is in a way more surprising is that <tt>a - b = -(-a + b)</tt> works for bitvectors, since negation does not neatly mirror the whole number line when you're talking about two's complement negation. 
It's around the origin again instead of around a "space", but the most negative number is unaffected.</p><p>When we remember that this number line is really a number <i>ring</i> (in the circular sense), that starts to make sense again. To complete this picture, you can think about holding a ring in your hands, flipping it over while holding it at two diametrically opposite points - zero and the most negative number. Of course this visualization also works for complement, just hold the ring at slightly different places: between negative one and zero, and between the minimum and maximum (which are adjacent, you could think of it as where the ring closes). There are images below, but you should probably only look at them if visualization failed to appear in your mind naturally - if you already have one, your own image is probably easier to think about.</p><h3>But why</h3><p>OK all this spatial insight is fun (maybe), but what was it actually good for. I've found that thinking about the complement operation this way helps me to relate it to addition-like arithmetic operations (add, subtract, min, max, compare, etc) since they're all simple operations with "arrows" around that ring that we just flipped in our minds.</p><p>So it helps to make sense of various "mutants" of De Morgan's Law, such as: <ul><li><tt>~x < ~y == x > y</tt></li><li><tt>~x > ~y == x < y</tt></li><li><tt>~min(~x, ~y) == max(x, y)</tt></li><li><tt>~max(~x, ~y) == min(x, y)</tt></li><li><tt>~avg_up(~x, ~y) == avg_down(x, y)</tt> where <tt>avg_down</tt> is the average rounding down, see also <a href="http://www.virtualdub.org/blog/pivot/entry.php?id=222">VirtualDub: Weighted averaging in SSE (part 2)</a></li><li><tt>~avg_down(~x, ~y) == avg_up(x, y)</tt></li><li><tt>~paddsb(~x, y) == psubsb(x, y)</tt> (signed saturation)</li><li><tt>~psubsb(~x, y) == paddsb(x, y)</tt> (signed saturation)</li><li><tt>~paddusb(~x, y) == psubusb(x, y)</tt> (unsigned saturation)</li><li><tt>~psubusb(~x, y) == paddusb(x, 
y)</tt> (unsigned saturation)</li></ul></p><p>A similar visualization works for the signed/unsigned "conversion" <tt>x ^ msb == x + msb == x - msb</tt> (msb is a mask with only the most significant bit set), which corresponds to rotating the ring 180 degrees. This may help when thinking about things such as: <ul><li><tt>x <s y == (x ^ msb) <u (y ^ msb)</tt></li><li><tt>x <u y == (x ^ msb) <s (y ^ msb)</tt></li><li><tt>max_s(x, y) == max_u(x ^ msb, y ^ msb) ^ msb</tt></li></ul></p><p>The relationship between the different kinds of min/max can be summed up by a nice commutative diagram:<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-KhcYIg7GsbU/WYio_fG-g-I/AAAAAAAAAG8/sFWMrm91KgQJW7W-7UhralOKpFjPqLQwgCPcBGAYYCw/s1600/maxmin_comm.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-KhcYIg7GsbU/WYio_fG-g-I/AAAAAAAAAG8/sFWMrm91KgQJW7W-7UhralOKpFjPqLQwgCPcBGAYYCw/s320/maxmin_comm.png" width="320" height="219" data-original-width="649" data-original-height="445" /></a></div></p><p>Hope it helps; for me this sort of thing has come in handy occasionally when writing SSE code.</p><br/><br/><br/><br/><br/><br/><br/><p>Here are the images for two's complement negation:<br/><a href="https://3.bp.blogspot.com/-PWLAC7TrC3E/WhbS38HR9kI/AAAAAAAAAHg/txwO3m1JpgkdAX39kfOF1JrX6wxoOSAzwCLcBGAs/s1600/twoscompcirc.png" imageanchor="1" ><img border="0" src="https://3.bp.blogspot.com/-PWLAC7TrC3E/WhbS38HR9kI/AAAAAAAAAHg/txwO3m1JpgkdAX39kfOF1JrX6wxoOSAzwCLcBGAs/s320/twoscompcirc.png" width="320" height="318" data-original-width="434" data-original-height="431" /></a><br/>and for plain one's complement:<br/><a href="https://2.bp.blogspot.com/-edpbUKcGGIs/WhbTAtUXWQI/AAAAAAAAAHk/Kaun6rxhLVk9P8vjiUzjS1eBICpRX5WdACLcBGAs/s1600/complementcirc.png" imageanchor="1" ><img border="0" src="https://2.bp.blogspot.com/-edpbUKcGGIs/WhbTAtUXWQI/AAAAAAAAAHk/Kaun6rxhLVk9P8vjiUzjS1eBICpRX5WdACLcBGAs/s320/complementcirc.png" width="320" height="320" data-original-width="425" data-original-height="425" /></a><br/>This is in the orientation that I usually use when I think about these operations this way, but there is no particular meaning to going counter-clockwise with 0 at/near the bottom.</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-92215643066118768772016-11-24T10:10:00.001-08:002016-12-13T00:37:17.976-08:00Parallel prefix/suffix operations<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><p>After a "brief" hiatus..</p><p>Parallel prefix/suffix operations is a family of operations that follow either this template: <pre class="brush:csharp">x = x ⊕ (x >> 1);<br />x = x ⊕ (x >> 2);<br />x = x ⊕ (x >> 4);<br />x = x ⊕ (x >> 8);<br />x = x ⊕ (x >> 16);</pre>Or this template: <pre class="brush:csharp">x = x ⊕ (x << 1);<br />x = x ⊕ (x << 2);<br />x = x ⊕ (x << 4);<br />x = x ⊕ (x << 8);<br />x = x ⊕ (x << 16);</pre>Where ⊕ is typically OR or XOR. I've never seen ⊕=AND show up naturally, but it might be useful for something.</p><p>There is some disagreement (stemming from the usual "which end of an integer is the front" question) about which of them should be called the "parallel prefix" and which the "parallel suffix"; for the purpose of this post I'll be calling the first one (with the shifts to the right) "parallel prefix". Either way, these operations are bit-level analogues to the more well known prefix sum, prefix max, prefix product, etc. 
The word "parallel" in the name refers to the bit-level parallelism, which has the same structure as the simple (not work-efficient) <a href="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch39.html">parallel prefix sum algorithm</a>.</p><h3>Some members of the family</h3><p>The best-known member of the parallel prefix/suffix family is the parallel prefix with ⊕ = XOR (PP-XOR), which converts Gray code back to normally-ordered integers and produces the parity of an integer in the lowest bit.</p><p>The parallel <i>suffix</i> with ⊕ = XOR (PS-XOR) is a carryless multiplication by -1, which isn't very interesting by itself (probably), but gives a hint about the algebraic structure that these operations give rise to.</p><p>The parallel prefix with ⊕ = OR (PP-OR) often shows up whenever bitwise operations are interacting with ranges/intervals, since <code>PP-OR(low ^ high)</code> gives a mask of the bits that are not constrained by the interval. I have used this operation (though I did not name it then) several times in my series on <a href="http://bitmath.blogspot.nl/2012/09/calculating-lower-bound-of-bitwise-or.html">propagating bounds through bitwise operations</a>.<br/>This operation lends itself to some optimizations (I put "performance" in the title, which I admit I've mostly ignored), for example on x64 you could implement it as <pre> lzcnt rax, rax ; input in rax<br /> sbb rdx, rdx<br /> not rdx<br /> shrx rax, rdx, rax</pre>Or: <pre> lzcnt rax, rax<br /> mov edx, 64<br /> sub edx, eax<br /> or rax, -1<br /> bzhi rax, rax, rdx</pre>They each have their pros and cons, and hopefully there's a better way to do it, but I couldn't really find any. I made sure to avoid the infamous false dependency that <code>popcnt</code>, <code>tzcnt</code> and <code>lzcnt</code> all share on any Intel processor that implements them to date. 
Probably the biggest problem is that both assembly versions require BMI2. That can be avoided, e.g. (as suggested by Jasper Neumann): <pre> xor edx, edx // provide a zero register for cmov<br /> bsr ecx, eax<br /> mov eax, -1<br /> not ecx // flags not modified<br /> cmovz eax, edx<br /> shr eax, cl</pre></p><h3>Properties/structure</h3><p>To start with the obvious, the prefix and suffix versions are exactly each other's mirror image. So I'm going to look just at the suffix part of the family; for the prefix part, just mirror everything.</p><h4>PS-XOR(x) is clmul(x, -1)</h4><p>Every step is a clmul by <code>1 + 2<sup>k</sup></code>, and if you clmul those constants together you get -1 (the two's complement -1, not the -1 in the ring formed by XOR and clmul over bitvectors of length k, which would just be 1 again). From a more intuitive angle: what the PS-XOR is supposed to do in the first place is XOR each bit into all higher bits, which is exactly a carryless multiplication by -1. So it inherits the properties of clmul by an odd constant, such as invertibility (-1 is odd). 
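Concretely, as a C sketch (names mine):

```c
#include <stdint.h>

// PS-XOR: parallel suffix with XOR (the second template with XOR),
// i.e. a carryless multiplication by -1.
uint32_t ps_xor(uint32_t x)
{
    x ^= x << 1;
    x ^= x << 2;
    x ^= x << 4;
    x ^= x << 8;
    x ^= x << 16;
    return x;
}

// Its inverse: a single carryless multiplication by 3.
uint32_t ps_xor_inv(uint32_t x)
{
    return x ^ (x << 1);
}
```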
The inverse of PS-XOR is <code>x ^ (x << 1)</code>.</p><p>It also inherits that <code>PS-XOR(x) ^ PS-XOR(y) == PS-XOR(x ^ y)</code> from the distributivity of clmul.</p><p>Since clmul is commutative and associative, the steps in PS-XOR can be reordered.</p><h4>PS-OR(x) is or_mul(x, -1)</h4><p>This isn't as nice (XOR and clmul form a ring structure) since OR doesn't form a group because it has no negation (this is just a fancy way of saying that you can't un-OR numbers, in the way you can un-ADD by subtraction or un-XOR with XOR again) and so it doesn't extend to a ring, but it can be extended into a semiring by defining the following multiplication: <pre class="brush:csharp">static uint or_mul(uint a, uint b)<br />{<br /> uint r = 0;<br /> for (int i = 0; i < 32; i++)<br /> {<br /> if ((a & 1) == 1)<br /> r |= b; // the only difference is here<br /> a >>= 1;<br /> b <<= 1;<br /> }<br /> return r;<br />}</pre>It's not fancy math notation but you get the idea.</p><p>Since this (with OR) forms a commutative semiring (proof "left as an exercise to the reader"), it has some nice properties: <ul><li><code>or_mul(a, b) == or_mul(b, a)</code> commutativity</li><li><code>or_mul(a, or_mul(b, c)) == or_mul(or_mul(a, b), c)</code> associativity</li><li><code>or_mul(a, b | c) == or_mul(a, b) | or_mul(a, c)</code> left distributivity</li><li><code>or_mul(a | b, c) == or_mul(a, c) | or_mul(b, c)</code> right distributivity</li></ul>But, of course, no multiplicative inverses. Except when multiplying by 1, but that doesn't count since it's the multiplicative identity.</p><p><code>PS-OR(x) == or_mul(x, -1)</code>, and the individual steps of the PS-OR are of the form <code>x = or_mul(x, 1 + 2<sup>k</sup>)</code>. 
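Since or_mul is so simple, the identity is easy to check mechanically; a C sketch (names mine):

```c
#include <stdint.h>

// or_mul as defined above, in C.
uint32_t or_mul(uint32_t a, uint32_t b)
{
    uint32_t r = 0;
    for (int i = 0; i < 32; i++)
    {
        if ((a & 1) == 1)
            r |= b;  // the only difference from normal multiplication
        a >>= 1;
        b <<= 1;
    }
    return r;
}

// PS-OR: parallel suffix with OR.
uint32_t ps_or(uint32_t x)
{
    x |= x << 1;
    x |= x << 2;
    x |= x << 4;
    x |= x << 8;
    x |= x << 16;
    return x;
}
```

For any x, ps_or(x) agrees with or_mul(x, 0xFFFFFFFF), and also with x | -x.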
The or_mul of all those constants together is, unsurprisingly, -1 (though this notation is slightly odd now that this is all in a semiring; I mean the element that has all bits set).</p><p>And again, PS-OR inherits <code>PS-OR(x) | PS-OR(y) = PS-OR(x | y)</code> from distributivity.</p><p>And again, since or_mul is commutative and associative, the steps in PS-OR can be reordered.</p><h4>PS-OR(x) is also <code>x | -x</code></h4><p>This one doesn't have a nice mirrored equivalent, just the obvious one where you insert a bit-reversal before and after. It also doesn't have an obvious relation to or_mul. As for why it's the case: in string notation, -(a10<sup>k</sup>) = (~a)10<sup>k</sup>, so a10<sup>k</sup> | -(a10<sup>k</sup>) = (~a|a)10<sup>k</sup> = 1<sup>∞</sup>10<sup>k</sup>; it copies the lowest set bit into all higher bits, just as PS-OR does.</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-32212003611256852252014-04-13T10:27:00.001-07:002014-04-13T10:42:24.707-07:00A bitwise relational domain<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><p>This is the thing that I was referring to in my <a href="http://bitmath.blogspot.com/2014/02/addition-in-bitfield-domain-alternative.html">previous post</a> where I said it didn't work out as well as I'd hoped. That's still the case. 
But it's not a complete failure; it's just not as good as I had hoped it might be, and it's still interesting (and, as far as I know, new).</p> <p>The basic idea is really the same one as in Miné's <a href="http://www.di.ens.fr/~mine/publi/article-mine-sas02.pdf">A Few Graph-Based Relational Numerical Abstract Domains</a>, namely a matrix where the element <code>i,j</code> says something about the relation between the variables <code>i</code> and <code>j</code>, and there's a closure operator that very closely resembles the Floyd–Warshall algorithm (or matrix multiplication, which comes down to the same thing). Of course saying that the difference of two variables must be in some set isn't great for bitwise operations, so my first thought was to change it to a xor, with Miné's <a href="http://www.di.ens.fr/~mine/publi/article-mine-wing12.pdf">bitfield domain</a> (the one I've written several other posts about) as the basis. For the closure operator, the multiplication is represented by the bitwise-xor from the bitfield domain, and the addition by the bitwise-and from the bitfield domain. The rest is similarly easy.</p> <p>The problem with that idea is that it tends to find almost no relations. It can represent some non-trivial relations, for example that one variable is (partially) equal to another variable xor-ed with a constant, but it's just not enough.</p> <p>The problem is that xor just doesn't remember enough. So the next idea I had was to abandon xor, and go with an operator that doesn't throw away anything, and use 4-tuples to represent that set instead of 2-tuples. Due to lack of inspiration, I named the elements of the tuple (a, b, c, d), and they work like this: <table><tr><td width="25%">a</td><td>x = 0, y = 0</td></tr><tr><td>b</td><td>x = 0, y = 1</td></tr><tr><td>c</td><td>x = 1, y = 0</td></tr><tr><td>d</td><td>x = 1, y = 1</td></tr></table>Meaning that if, say, bit 0 in a is set in the element at (x,y) then variables x and y can be even simultaneously. 
On the diagonal of the matrix, where x=y, b and c must both be zero, because a variable cannot be unequal to itself.</p> <p>The "multiplication" operator for the closure works like this: <pre>aij &= aik & akj | bik & ckj<br />bij &= aik & bkj | bik & dkj<br />cij &= dik & ckj | cik & akj<br />dij &= dik & dkj | cik & bkj</pre>For all <code>k</code>. This essentially "chains" <code>i,k</code> with <code>k,j</code> to get an implied constraint on <code>i,j</code>.</p> <p>To show how powerful this is, here is what happens after asserting that <code>z = x & y</code><style>#t_00 { border: 1px solid black; border-collapse: collapse; } #t_00 tr { border: 1px solid black; } #t_00 td { border: 1px solid black; } </style><table id="t_00"><tr><td></td><td><center>x</center></td><td><center>y</center></td><td><center>z</center></td></tr><tr><td>x</td><td>-1,0,0,-1</td><td>-1,-1,-1,-1</td><td>-1,-1,0,-1</td></tr><tr><td>y</td><td>-1,-1,-1,-1</td><td>-1,0,0,-1</td><td>-1,-1,0,-1</td></tr><tr><td>z</td><td>-1,0,-1,-1</td><td>-1,0,-1,-1</td><td>-1,0,0,-1</td></tr></table>And now some cool things can happen. For example, if you assert that <code>z</code> is odd, the closure operation will propagate that back to both <code>x</code> and <code>y</code>. Or if you assert that <code>w = x | z</code>, it will deduce that <code>w == x</code>.</p> <p>There is a symmetry between the upper triangle and the lower triangle. They're not the same, but an element (a, b, c, d) is mirrored by an element (a, c, b, d). One of those triangles can easily be left out, to save some space. Elements that are (-1, -1, -1, -1) could also be left out, but that doesn't typically save many elements - if something non-relational is known about a variable (that is, there is an element on the diagonal that isn't (-1, 0, 0, -1)), then everything in the same row and everything in the same column will have that information in it as well. 
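As a C sketch (the struct and names are mine), one chaining step of the closure looks like this:

```c
#include <stdint.h>

// One 4-tuple (a, b, c, d) as described above: each field is a bitmask,
// bit k of a field says whether that combination of bit values is possible.
typedef struct { uint32_t a, b, c, d; } Rel;

// Chain the relation (i,k) with (k,j) to get an implied constraint on (i,j);
// the closure would AND the result into the existing element at (i,j), for every k.
Rel chain(Rel ik, Rel kj)
{
    Rel r;
    r.a = (ik.a & kj.a) | (ik.b & kj.c);
    r.b = (ik.a & kj.b) | (ik.b & kj.d);
    r.c = (ik.d & kj.c) | (ik.c & kj.a);
    r.d = (ik.d & kj.d) | (ik.c & kj.b);
    return r;
}
```

Chaining "i equals k" with "k equals j" yields "i equals j", as one would hope.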
But an element <code>(i,j)</code> exactly equal to <code>(a<sub>i,i</sub> & a<sub>j,j</sub>, a<sub>i,i</sub> & d<sub>j,j</sub>, d<sub>i,i</sub> & a<sub>j,j</sub>, d<sub>i,i</sub> & d<sub>j,j</sub>)</code> can be left out. That removes all elements that describe a trivial relation of the kind that says that the combination of x and y must be in the Cartesian product of the sets that x and y can be in, which is completely redundant.</p> <p>But it does have problems. If you xor (or add or subtract) two unconstrained variables, nothing happens. That sort of 3-variable relation isn't handled at all. This led me to explore a 3D variant of this domain (with 8-tuples), which is really great if you only consider its expressive power, but it likes to take a large amount of space, and its closure operator is very inefficient.</p> <p>The algorithms I <a href="http://bitmath.blogspot.com/2013/08/addition-in-bitfield-domain.html">presented</a> <a href="http://bitmath.blogspot.com/2014/02/addition-in-bitfield-domain-alternative.html">earlier</a> for addition in the bitfield domain still work in this relational domain, and with just a couple of simple changes they can make use of some relational information. They want to know something about <code>p = x ^ y</code> and <code>g = x & y</code>, and relational information can be available for both of them. For example, if it is known that <code>x & y == 0</code> (indicated by a 4-tuple (a, b, c, 0)), <code>g</code> will be known to be 0, all carries will be known to be 0, and two relations will be made that look exactly the same as if you had asserted <code>z = x | y</code>. 
Of course that only works if <code>x & y == 0</code> is known <i>before</i> the addition.</p> <p>Assertions such as <code>x & y == 0</code> (more generally: x ⊗ y is in (z, o) where (z, o) is an element from the unrelational bitfield domain) can be done directly (without inventing a temporary to hold x ⊗ y and asserting on it), and actually that's simpler and far more efficient.</p> </span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-31849968245946718292013-09-28T07:10:00.000-07:002017-11-23T04:42:54.437-08:00Determining the cardinality of a set described by a mask and bounds<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 175 } }); </script><span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><p>In other words, calculating this \begin{equation} | \left\{ x | (x \& m) = v \land x \ge a \land x \le b \right\} | \end{equation}</p><p>A <a href="http://en.wikipedia.org/wiki/Binary_decision_diagram">BDD</a> (which haroldbot uses to solve this) has no trouble with that at all, but the structure of the problem is so nice that I thought it should be possible to do better. And it is, though the solution I'm about to present is probably far from optimal. I don't know how to significantly improve on it, but I just get a very non-scientific gut-feeling that there should be a fundamentally better way to do it.</p><p>The cardinality of the set without the bounds (only the mask) is obviously trivial to compute (as <code>1 << popcnt(~m)</code>), and the bounds divide that set into three parts: items that are lower than the lower bound ("left part"), items that are higher than the upper bound ("right part"), and items that form the actual set of interest ("middle part"). 
The main idea it is built on is that the problem is relatively easy to solve if there is no lower bound and the upper bound is a power of two, with a formula roughly similar to the one above. Using that, all the powers of two that fit below the lower bound can be counted from high to low, giving the cardinality of the "left part". The same can be done for the "right part", actually in exactly the same way, by complementing the mask and the upper bound. Obviously the cardinality of the "middle part" can be computed from this.</p><p>And here's the code. It doesn't like it when the cardinality is 2<sup>32</sup>, and watch out for weird corner cases such as when the lower bound is bigger than the upper bound (why would you even do that?). It usually works; that's about as much as I can say - I didn't prove it correct. <pre class="brush: csharp">static uint Cardinality(uint m, uint v, uint a, uint b)<br />{<br /> // count x such that a <= x <= b && (x & m) == v<br /> // split in three parts:<br /> // left = 0 <= x < a<br /> // right = b < x <= -1<br /> // the piece in the middle is (1 << popcnt(~m)) - left - right<br /> uint left = 0;<br /> uint x = 0;<br /> for (int i = 32 - 1; i >= 0; i--)<br /> {<br /> uint mask = 1u << i;<br /> if (x + mask <= a)<br /> {<br /> x += mask;<br /> uint mask2 = 0 - mask;<br /> if ((x - 1 & m & mask2) == (v & mask2))<br /> { <br /> uint amount = 1u << popcnt(~(m | mask2));<br /> left += amount;<br /> }<br /> }<br /> }<br /> uint right = 0;<br /> uint y = 0;<br /> for (int i = 32 - 1; i >= 0; i--)<br /> {<br /> uint mask = 1u << i;<br /> if (y + mask <= ~b)<br /> {<br /> y += mask;<br /> uint mask2 = 0 - mask;<br /> if ((y - 1 & m & mask2) == (~v & m & mask2))<br /> {<br /> uint amount = 1u << popcnt(~(m | mask2));<br /> right += amount;<br /> }<br /> }<br /> }<br /> uint res = (uint)((1UL << popcnt(~m)) - (left + right));<br /> return res;<br />}</pre>The loops can be merged of course, but for clarity they're separate here.</p><p>If 
you have any improvements, please let me know.</p></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-72980651341267922382013-08-10T07:50:00.000-07:002015-07-10T11:25:30.758-07:00Announcing haroldbot<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;">haroldbot <i>was</i> an ircbot (hence the name) that solves some bitmath problems. The title is actually a lie - haroldbot has been around for a while now. But now it finally got its own website.<br/><span style="font-size: 150%"><a href="http://haroldbot.nl">haroldbot.nl</a></span><br/>Check it out, if you work with bits you will probably find this useful. </span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-51070387291933529082013-05-30T09:54:00.000-07:002017-12-24T12:19:15.307-08:00Carryless multiplicative inverse<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;">Note: this post is neither about the normal multiplicative inverse, nor the modular multiplicative inverse. <a href="http://bitmath.blogspot.com/2012/09/divisibility-and-modular-multiplication.html">This other post</a> has information about the modular multiplicative inverse, which might be what you were looking for.<br/><br/>Mathematicians may call carryless multiplication "multiplication in GF(2^n)", but that doesn't explain how it works - recall the shift-and-add algorithm for multiplication: <pre class="brush: csharp">static uint mul(uint a, uint b)<br />{<br /> uint r = 0;<br /> while (b != 0)<br /> {<br /> if ((a & 1) != 0)<br /> r += b;<br /> a >>= 1;<br /> b <<= 1;<br /> }<br /> return r;<br />}</pre>Carryless multiplication is a very simple variation on that: do the addition without carries. That's just a XOR. 
<pre class="brush: csharp">static uint cl_mul(uint a, uint b)<br />{<br /> uint r = 0;<br /> while (b != 0)<br /> {<br /> if ((a & 1) != 0)<br /> r ^= b; // carryless addition is xor<br /> a >>= 1;<br /> b <<= 1;<br /> }<br /> return r;<br />}</pre>It has some applications in complicated cryptography related algorithms, but it also seems like this should be an interesting and powerful operation when working with bits, and it may well be, but I know of almost no uses for it (<del>besides, Intel's implementation is so slow that it often wouldn't help</del> Intel made it over twice as fast in Haswell). But anyway, let's just start with its basic properties: like normal multiplication, it's commutative and associative. It's also distributive, but over xor instead of over addition. None of this is very surprising.<br/><br/>As an aside, using associativity, it can be shown that the parallel suffix with XOR (which does have some known uses in bitmath, for example in implementing <a href="http://programming.sirrida.de/bit_perm.html#c_e">compress_right</a> in software), code shown below, is equivalent to a carryless multiplication by -1. <pre>// parallel suffix with XOR<br />x ^= x << 1;<br />x ^= x << 2;<br />x ^= x << 4;<br />x ^= x << 8;<br />x ^= x << 16;</pre>Every step is clearly a carryless multiplication, by 3, 5, 17, 257, and 65537 respectively. So it's equivalent to:<br/><code>clmul(clmul(clmul(clmul(clmul(x, 3), 5), 17), 257), 65537)</code> which can be rearranged (using associativity) to:<br/><code>clmul(x, clmul(clmul(clmul(clmul(3, 5), 17), 257), 65537))</code> which works out to <code>clmul(x, -1)</code>. Of course it was supposed to work out that way, because every bit of the result should be the XOR of all bits up to (and including) that bit, but it's nice that it also follows easily from a basic property. 
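That computation is easy to verify mechanically; a C sketch (function names mine) using the cl_mul defined above:

```c
#include <stdint.h>

// Carryless multiplication, as defined at the start of the post.
uint32_t cl_mul(uint32_t a, uint32_t b)
{
    uint32_t r = 0;
    while (b != 0)
    {
        if ((a & 1) != 0)
            r ^= b;  // carryless addition is xor
        a >>= 1;
        b <<= 1;
    }
    return r;
}

// Parallel suffix with XOR, as shown above.
uint32_t suffix_xor(uint32_t x)
{
    x ^= x << 1;
    x ^= x << 2;
    x ^= x << 4;
    x ^= x << 8;
    x ^= x << 16;
    return x;
}
```

Carrylessly multiplying 3, 5, 17, 257 and 65537 together does indeed give -1 (all bits set), and the parallel suffix agrees with cl_mul(x, -1) for any x.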
Incidentally if you have a full-width carryless multiplication, multiplying by -1 also computes the parallel <i>prefix</i> with XOR in the upper bits (the upper bit of the low word, which is the parity of the input, is shared by the suffix and the prefix.)<br/><br/>Carryless multiplication also shares another property with normal multiplication: there are multiplicative inverses modulo 2<sup>n</sup> (and also modulo other numbers, but 2<sup>n</sup> is of particular interest since we're working in that by default anyway). Again there are only inverses for odd numbers, and it's equally obvious (as for normal multiplication) why that should be so - an even multiplier will throw out at least one high order bit. First, here's an example of carrylessly multiplying <code>x</code> by -1 and then carrylessly multiplying that by 3.<br/><pre>x = {d}{c}{b}{a} // the letters are bits<br />y = cl_mul(x, -1) = {d^c^b^a}{c^b^a}{b^a}{a}<br />z = cl_mulinv(-1) = 0011<br />cl_mul(y, z) = {d^c^b^a ^ c^b^a}{c^b^a ^ b^a}{b^a ^ a}{a} = {d}{c}{b}{a}</pre>Ok, so that worked out well, and it also gives part of the answer to exercise 3 in chapter 5 of Hacker's Delight (about whether parallel prefix/suffix with XOR is invertible and how) because a carryless multiplication by -1 is the same as the parallel suffix with XOR. A carryless multiplication of <code>y</code> by 3 is of course just <code>y ^ (y << 1)</code>.<br/><br/>But back to actually computing the inverse. The inverse had better be odd, so bit 0 is already known, and for all the other bits, follow these steps: <ol><li>if the remainder is 1, stop</li><li>if bit <code>k</code> is 0, go to step 5</li><li>set bit <code>k</code> in the inverse</li><li>xor the remainder with <code>input << k</code></li><li>increment k and go to step 1</li></ol>Step 4 always resets the offending bit because the input had to be odd, so it's obvious that the remainder always ends up being 1 eventually, and so the algorithm always terminates. 
Moreover, even in the worst case it only has to process every bit but one, and continuing after the remainder becomes 1 simply does nothing, so step 1 could read "if <code>k</code> is 32" (or some other number, depending on how many bits your ints are wide), which is easier to unroll and better suited for a hardware implementation (not that I've seen this operation implemented in hardware anywhere).<br/>For example, in C# it could look like this: <pre class="brush: csharp">static uint clmulinv(uint x)<br />{<br /> uint inv = 1;<br /> uint rem = x;<br /> for (int i = 1; i < 32; i++)<br /> {<br /> if (((rem >> i) & 1) != 0)<br /> {<br /> rem ^= x << i;<br /> inv |= 1u << i;<br /> }<br /> }<br /> return inv;<br />}</pre><br/>A variation of the algorithm to find a multiplicative inverse modulo a power of two (see <tt>inv</tt> <a href="http://bitmath.blogspot.com/2012/09/divisibility-and-modular-multiplication.html">here</a>) also works, which is useful when clmul is fast: <pre class="brush: csharp">static uint clmulinv(uint d)<br />{<br /> uint x = 1;<br /> for (int i = 0; i < 5; i++)<br /> {<br /> x = clmul(x, clmul(x, d));<br /> }<br /> return x;<br />}</pre><br/>The first iteration sets x to d; that can be done immediately to skip an iteration. 
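As a quick sanity check, a C sketch (reusing cl_mul and the first clmulinv from above): for any odd x, carrylessly multiplying x by clmulinv(x) should give 1.

```c
#include <stdint.h>

// Carryless multiplication, as defined at the start of the post.
uint32_t cl_mul(uint32_t a, uint32_t b)
{
    uint32_t r = 0;
    while (b != 0)
    {
        if ((a & 1) != 0)
            r ^= b;
        a >>= 1;
        b <<= 1;
    }
    return r;
}

// Carryless multiplicative inverse modulo 2^32, bit-by-bit version from above.
uint32_t clmulinv(uint32_t x)
{
    uint32_t inv = 1;
    uint32_t rem = x;
    for (int i = 1; i < 32; i++)
    {
        if (((rem >> i) & 1) != 0)
        {
            rem ^= x << i;
            inv |= 1u << i;
        }
    }
    return inv;
}
```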
<br/><br/>Some sample inverses: <pre>1 0x00000001<br />3 0xFFFFFFFF<br />5 0x55555555<br />7 0xDB6DB6DB<br />9 0x49249249<br />11 0x72E5CB97<br />13 0xD3A74E9D<br />15 0x33333333</pre><br/><p>The definition of <tt>clmul</tt> at the start of the post was meant to be just that: a definition. A faster way to emulate it is this: <pre class="brush: csharp">static uint clmul(uint a, uint b)<br />{<br /> uint r = 0;<br /> do<br /> {<br /> r ^= a * (b & (0 - b));<br /> b &= b - 1;<br /> r ^= a * (b & (0 - b));<br /> b &= b - 1;<br /> r ^= a * (b & (0 - b));<br /> b &= b - 1;<br /> r ^= a * (b & (0 - b));<br /> b &= b - 1;<br /> } while (b != 0);<br /> return r;<br />}</pre>This works by extracting a bit from <tt>b</tt> and multiplying by it (which just shifts <tt>a</tt> left), then resetting that bit. This can be unrolled safely, since once <tt>b == 0</tt> no further changes are made to <tt>r</tt>. The <tt>0 - b</tt> thing is due to an unfortunate misfeature of C#: negating an unsigned integer converts it to a <tt>long</tt>.</p><p>A similar trick works for the inverse: <pre class="brush: csharp">static uint clmulinv(uint x)<br />{<br /> uint inv = 1;<br /> uint rem = x - 1;<br /> while (rem != 0)<br /> {<br /> uint m = rem & (0 - rem);<br /> rem ^= x * m;<br /> inv += m;<br /> }<br /> return inv;<br />}</pre></p><br/><br/>By the way, why is this post so popular? Please let me know in the comments down below. 
</span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-25656963896824613342013-04-11T07:45:00.000-07:002017-11-23T04:42:11.035-08:00Improving bounds when some bits have a known value<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 175 } }); </script><span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;">This problem is closely related to the series of problems discussed in <a href="http://bitmath.blogspot.com/2012/09/calculating-lower-bound-of-bitwise-or.html">calculating the lower bound of the bitwise OR of two bounded variables</a> (and some of the posts after that one), and the algorithm is very closely related, too. The question in this post is: suppose some bits may be known to be zero and some may be known to be one, is there a better lower/upper bound than the given one, and if so, what is it? That is, calculate \begin{equation} \min _{x \in [a, b] \wedge (x | \sim z) = x \wedge (x \& \sim o) = 0 } x \end{equation} and \begin{equation} \max _{x \in [a, b] \wedge (x | \sim z) = x \wedge (x \& \sim o) = 0 } x \end{equation} where <code>z</code> is a bitmask containing the bits that are allowed to be 0, and <code>o</code> is a bitmask containing the bits that are allowed to be 1.<br/><br/>The idea behind the algorithms is to do a binary search (the one-sided variant) over the numbers that the masks allow, for the lowest value bigger than or equal to the original lower bound (or smaller than or equal to the original upper bound, for the new upper bound). Just as in the case of propagating bounds through XOR, it may take more than one step, so there aren't many shortcuts. 
I called them both "reduce" even though <code>ReduceMin</code> actually increases the value, because their purpose is to reduce the range <code>[min, max]</code>. <pre class="brush: csharp">static uint ReduceMin(uint min, uint z, uint o)<br />{<br /> uint mask = z & o; // note: mask is a subset of r<br /> uint r = o;<br /> uint m = 0x80000000 >> nlz(mask);<br /> while (m != 0)<br /> {<br /> // reset the bit if it can be freely chosen<br /> uint newr = r ^ (m & mask);<br /> if (newr >= min)<br /> // keep the change if still within bounds<br /> r = newr;<br /> m >>= 1;<br /> }<br /> return r;<br />}</pre><pre class="brush: csharp">static uint ReduceMax(uint max, uint z, uint o)<br />{<br /> uint mask = z & o;<br /> uint r = ~z;<br /> uint m = 0x80000000 >> nlz(mask);<br /> while (m != 0)<br /> {<br /> // set the bit if it can be freely chosen<br /> uint newr = r | (m & mask);<br /> if (newr <= max)<br /> // keep the change if still within bounds<br /> r = newr;<br /> m >>= 1;<br /> }<br /> return r;<br />}</pre><br/>There is one shortcut (that I know of): using <code>nlz</code> on every iteration, thereby skipping iterations where the current bit isn't even changed. With the implementation of <code>nlz</code> I was working with, that wasn't worth it, so whether it's actually a real shortcut or not is up for debate.<br/><br/>Occasionally the new lower bound can be higher than the new upper bound; that means the set of values was actually empty. If you were working with clockwise intervals, that changes to "if the new bounds aren't ordered the same way as the old ones" - i.e. if the interval was proper and the new one isn't or vice versa, the set is empty. 
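The snippets above assume an <code>nlz</code> (number of leading zeros). For completeness, here is a portable fallback plus a small usage example, as a C sketch (ReduceMin reproduced from above; note that in C, unlike C#, shifting by 32 is undefined, hence the guard for an empty mask):

```c
#include <stdint.h>

// Portable number-of-leading-zeros; nlz(0) is defined as 32 here.
int nlz(uint32_t x)
{
    int n = 32;
    while (x != 0)
    {
        n--;
        x >>= 1;
    }
    return n;
}

// ReduceMin from above: the smallest value >= min that agrees with the masks
// (z = bits allowed to be 0, o = bits allowed to be 1).
uint32_t ReduceMin(uint32_t min, uint32_t z, uint32_t o)
{
    uint32_t mask = z & o;  // the freely choosable bits
    uint32_t r = o;
    uint32_t m = mask == 0 ? 0 : 0x80000000u >> nlz(mask);
    while (m != 0)
    {
        // reset the bit if it can be freely chosen
        uint32_t newr = r ^ (m & mask);
        if (newr >= min)
            r = newr;  // keep the change if still within bounds
        m >>= 1;
    }
    return r;
}
```

For example, asking for the smallest value no less than 0x10 whose bit 0 is forced to 1 (z = 0xFFFFFFFE, o = 0xFFFFFFFF) gives 0x11, the smallest odd number at or above 0x10.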
</span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-85314416871476424592012-11-29T03:47:00.001-08:002013-04-11T07:46:44.024-07:00Tesseral arithmetic - useful snippets<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><small>This post doesn't introduce anything new, and is, in my opinion, boring. Feel free to skip.</small><br/><br/>My <a href="http://bitmath.blogspot.com/2012/11/tesseral-arithmetic.html">previous post</a> didn't have too many useful snippets in it (mainly useful techniques to make your own snippets), and I thought I could improve on that. This post is not a good read in isolation - it's probably a good idea to read my previous post first, if you haven't already.<br/><br/>Tesseral addition (see previous post) was nice, but very often you only need to increment/decrement one dimension of a coordinate (for example when iterating over a portion of a Z-ordered grid in reading order), equivalent to adding/subtracting <code>(1, 0)</code> or <code>(0, 1)</code> to/from a coordinate. Since only one part of the coordinate changes, only about half as much code is necessary. Also, since the thing being added to the coordinate is a constant, one of the masking operations can be merged with it. 
<pre class="brush: csharp">static uint IncX(uint z)<br />{<br /> uint xsum = (z | 0xAAAAAAAA) + 1;<br /> return (xsum & 0x55555555) | (z & 0xAAAAAAAA);<br />}<br /><br />static uint IncY(uint z)<br />{<br /> uint ysum = (z | 0x55555555) + 2;<br /> return (ysum & 0xAAAAAAAA) | (z & 0x55555555);<br />}<br /><br />static uint DecX(uint z)<br />{<br /> uint xsum = (z & 0x55555555) - 1;<br /> return (xsum & 0x55555555) | (z & 0xAAAAAAAA);<br />}<br /><br />static uint DecY(uint z)<br />{<br /> uint ysum = (z & 0xAAAAAAAA) - 2;<br /> return (ysum & 0xAAAAAAAA) | (z & 0x55555555);<br />}</pre><br/>My previous post only had <code>TesseralMin</code>, not the corresponding <code>TesseralMax</code>, so here you go: <pre class="brush: csharp">public static uint TesseralMax(uint z, uint w)<br />{<br /> uint xdiff = (z & 0x55555555) - (w & 0x55555555);<br /> uint ydiff = (z >> 1 & 0x55555555) - (w >> 1 & 0x55555555);<br /> uint maskx = (uint)((int)xdiff >> 31);<br /> uint masky = (uint)((int)ydiff >> 31);<br /> uint xmax = (~maskx & z) | (maskx & w);<br /> uint ymax = (~masky & z) | (masky & w);<br /> return (xmax & 0x55555555) | (ymax & 0xAAAAAAAA);<br />}</pre>Note that the only difference is that the mask and the complemented mask have switched places.<br/><br/>This <code>TesseralMax</code> and the <code>TesseralMin</code> from the previous post can be combined with the increments and decrements (and with full tesseral addition, but that's less frequently useful) to form saturating increments and decrements, useful for sampling around a position on a Z-ordered grid without getting out of bounds. 
<pre class="brush: csharp">static uint IncXSat(uint z, uint xmax)<br />{<br /> uint xsum = ((z | 0xAAAAAAAA) + 1) & 0x55555555;<br /> uint xdiff = xsum - xmax;<br /> uint maskx = (uint)((int)xdiff << 1 >> 31);<br /> uint xsat = (maskx & xsum) | (~maskx & xmax);<br /> return xsat | (z & 0xAAAAAAAA);<br />}<br /><br />static uint IncYSat(uint z, uint ymax)<br />{<br /> uint ysum = ((z | 0x55555555) + 2) & 0xAAAAAAAA;<br /> uint ydiff = ysum - ymax;<br /> uint masky = (uint)((int)ydiff >> 31);<br /> uint ysat = (masky & ysum) | (~masky & ymax);<br /> return ysat | (z & 0x55555555);<br />}<br /><br />static uint DecXSat(uint z, uint xmin)<br />{<br /> uint xsum = ((z & 0x55555555) - 1) & 0x55555555;<br /> uint xdiff = xsum - xmin;<br /> uint maskx = (uint)((int)xdiff << 1 >> 31);<br /> uint xsat = (~maskx & xsum) | (maskx & xmin);<br /> return xsat | (z & 0xAAAAAAAA);<br />}<br /><br />static uint DecYSat(uint z, uint ymin)<br />{<br /> uint ysum = ((z & 0xAAAAAAAA) - 2) & 0xAAAAAAAA;<br /> uint ydiff = ysum - ymin;<br /> uint masky = (uint)((int)ydiff >> 31);<br /> uint ysat = (~masky & ysum) | (masky & ymin);<br /> return ysat | (z & 0x55555555);<br />}</pre>Merging them this way is nice, because only "half" of a <code>TesseralMin</code> or <code>TesseralMax</code> is necessary that way. On the other hand, they do have the overflow problem again, though that usually won't be a problem.<br/><br/><a href="http://bitmath.blogspot.com/2013/04/improving-bounds-when-some-bits-have.html">Next time</a>, back to "stuff with bounds". 
</span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-14331192293311816402012-11-27T02:26:00.001-08:002017-12-04T07:14:34.459-08:00Tesseral arithmetic<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><small>Introductions are boring, feel free to skip to the <a href="#interesting">interesting stuff</a></small><br/><br/><a href="http://www.geog.ubc.ca/courses/klink/gis.notes/ncgia/u37.html#SEC37.4.4">Tesseral arithmetic</a> is a type of arithmetic that operates on interleaved coordinates. That may not seem very useful, so first, when would you want to do that?<br/><br/>The <a href="http://en.wikipedia.org/wiki/Z-order_curve">Z-order curve</a> is a space-filling curve (also known as Morton order, Morton coordinates, etc) that is closely related to quad trees (and octrees) and (in some contexts) improves the locality of reference when working with multidimensional data.<br/><br/>In essence, it maps multidimensional coordinates to single-dimensional coordinates, which can be used to address memory, and it does so in a way that sometimes leads to better locality of reference than concatenating the parts of a coordinate into a longer one. The trick is to <a href="http://graphics.stanford.edu/~seander/bithacks.html#InterleaveBMN">interleave the bits</a>. That is not the best mapping (i.e. not the one with optimal locality of reference), but it's interesting that it works so well for such a simple trick.<br/><br/>But where it really gets interesting is when you have interleaved coordinates and you want to do math with them. 
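For concreteness, the bit-interleaving itself (the standard mask-and-shift sequence from the link above) can be sketched in C like this (names mine):

```c
#include <stdint.h>

// Spread the 16 bits of x out to the even bit positions.
uint32_t Part1By1(uint32_t x)
{
    x &= 0x0000FFFF;
    x = (x | (x << 8)) & 0x00FF00FF;
    x = (x | (x << 4)) & 0x0F0F0F0F;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    return x;
}

// Interleave: x goes to the even positions, y to the odd positions.
uint32_t Interleave(uint32_t x, uint32_t y)
{
    return Part1By1(x) | (Part1By1(y) << 1);
}
```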
You could unpack them, do your math, and then repack, but if you follow the previous link you can see that while unpacking and packing are simple and fast relative to the mappings of other space-filling curves, they would still add a lot of overhead to what would otherwise be simple math.<br/><br/>That's where tesseral arithmetic comes in.<br/><br/>Bitwise AND, OR and XOR still work the same way, because the bits of the result only depend on the corresponding bits in the inputs. Shifts are simple - the shift count must be multiplied by two. So for example <code>x ^ (x << 1)</code> becomes <code>x ^ (x << 2)</code> in tesseral arithmetic.<br/><br/><a name="interesting"></a>Addition is more trouble. The carries in normal addition propagate into bits they shouldn't be affecting in tesseral arithmetic. But consider what would happen if the bit pairs at odd positions each summed to 1. A carry coming into an odd position would always be passed on, and no extra carries would be generated from odd positions. So if the bits at odd positions are just right, the bits at the even positions are summed tesserally, with the carry moving two places instead of one. Obviously this extends to the odd bits as well, when the bits at even positions are just right. This actually makes tesseral addition quite simple: <pre class="brush: csharp">static uint TesseralAdd(uint z, uint w)<br />{<br /> uint xsum = (z | 0xAAAAAAAA) + (w & 0x55555555);<br /> uint ysum = (z | 0x55555555) + (w & 0xAAAAAAAA);<br /> return (xsum & 0x55555555) | (ysum & 0xAAAAAAAA);<br />}</pre>Unsurprisingly, the same principle applies to subtraction. In subtraction, borrows are passed on unmodified through a pair of bits if they sum to zero, or in other words, if both are zero. In a way that's conceptually even simpler than addition. 
<pre class="brush: csharp">static uint TesseralSubtract(uint z, uint w)<br />{<br /> uint xdiff = (z & 0x55555555) - (w & 0x55555555);<br /> uint ydiff = (z & 0xAAAAAAAA) - (w & 0xAAAAAAAA);<br /> return (xdiff & 0x55555555) | (ydiff & 0xAAAAAAAA);<br />}</pre>But multiplication isn't that nice. The problem is that multiplication is basically build out of a lot of shifts and additions (it's not implemented that way in hardware anymore) and the additions aren't tesseral nor can they be made tesseral.<br/>Unless, of course, we implement multiplication in software: <pre class="brush: csharp">static uint TesseralMultiply(uint z, uint w)<br />{<br /> uint x = z & 0x55555555;<br /> uint y = w & 0x55555555;<br /> uint xres = 0;<br /> while (x != 0)<br /> {<br /> if ((x & 1) != 0)<br /> xres = (xres | 0xAAAAAAAA) + y;<br /> y <<= 2;<br /> x >>= 2;<br /> }<br /><br /> x = z & 0xAAAAAAAA;<br /> y = w & 0xAAAAAAAA;<br /> uint yres = 0;<br /> while (x != 0)<br /> {<br /> if ((x & 2) != 0)<br /> yres = (yres | 0x55555555) + y;<br /> y <<= 2;<br /> x >>= 2;<br /> }<br /><br /> return (xres & 0x55555555) | (yres & 0xAAAAAAAA);<br />}</pre>But that doesn't achieve the goal of being faster than unpacking, doing math, and repacking. If anyone has a better idea, please let me know.<br/><br/>So ok, no tricks multiplication or division. But we're not done. As I hinted in my previous post, many bitwise tricks extend to tesseral arithmetic. For example, taking the absolute value of both parts of the coordinate simultaneously, using the same trick as in my <a href="http://bitmath.blogspot.nl/2012/11/the-basics-of-working-with-signbit.html">previous post (working with the signbit)</a>. The basic principle is simple: replace all operations by their tesseral counterparts. Then look for simplifications and other improvements. 
<pre class="brush: csharp">static uint TesseralAbs(uint z)<br />{<br /> uint maskx = (uint)((int)z << 1 >> 31);<br /> uint masky = (uint)((int)z >> 31);<br /><br /> // this is a simplified tesseral addition (followed by a xor)<br /> uint xabs = (z & 0x55555555) + maskx ^ maskx;<br /> uint yabs = (z & 0xAAAAAAAA) + masky ^ masky;<br /><br /> return (xabs & 0x55555555) | (yabs & 0xAAAAAAAA);<br />}</pre>The mask is known to be either all ones or all zeroes. It may seem at first as though that means we'd have to OR it with something to make the "in between" bits sum to one, but when the mask is zero there are no carries to pass on anyway. So the OR can be skipped.<br/><br/>But calculating absolute values of coordinates doesn't happen that often. So let's calculate an element-wise minimum, using the same basic principle as before, replace normal operators by tesseral operators. This time however, a substantial improvement over the non-tesseral version is possible. <pre class="brush: csharp">static uint TesseralMin(uint z, uint w)<br />{<br /> // these are tesseral subtractions, of course<br /> uint xdiff = (z & 0x55555555) - (w & 0x55555555);<br /> uint ydiff = (z >> 1 & 0x55555555) - (w >> 1 & 0x55555555);<br /><br /> uint maskx = (uint)((int)xdiff >> 31);<br /> uint masky = (uint)((int)ydiff >> 31);<br /><br /> uint xmin = (maskx & z) | (~maskx & w);<br /> uint ymin = (masky & z) | (~masky & w);<br /><br /> return (xmin & 0x55555555) | (ymin & 0xAAAAAAAA);<br />}</pre>And there's something very nice about how that worked out. In the normal <code>min</code>, there was a problem with overflow. That doesn't happen here, because for <code>xdiff</code> there was an extra bit anyway, and for <code>ydiff</code> that extra bit could easily be arranged by shifting right by 1. That makes the comparison unsigned, though, because the "extra bit" is zero, not a sign-extended bit.<br/><br/>So that's it for this post. 
Many other bitwise tricks can be extended to tesseral math, using the same basic principle. And of course this all generalizes to higher dimensions as well.<br/><br/>In the <a href="http://bitmath.blogspot.com/2012/11/tesseral-arithmetic-useful-snippets.html">next post</a>, I'll have some more useful snippets for tesseral arithmetic.<br/><br/>There are some other references for this type of arithmetic or its generalizations, for example The Art of Computer Programming volume 4A which calls this "working with fragmented fields" and Morton-order Matrices Deserve Compilers’ Support which calls this the "algebra of dilated integers".<br/><br/>By the way I originally wrote this post thanks to (or maybe due to?) <a href="http://cgi.csc.liv.ac.uk/~frans/OldResearch/dGKBIS/tesseral.html">this article</a>, which I found by searching how to do coordinate arithmetic in a quad tree with Morton order. That's where the title comes from. Unfortunately the article didn't really say how to <i>actually do it</i>, so I worked that out (though the algebra of dilated integers had been explored before, I did not know it went by that name) and posted it for the benefit of other people who perhaps traversed the same steps up to that point. 
</span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-44703465905904604972012-11-18T02:36:00.000-08:002013-05-22T08:38:03.571-07:00The basics of working with the signbit<span style="font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;"><small>this is a filler (in that it is much easier than the usual material), but it seems like most readers only read the fillers anyway</small><br/><br/>When I write signbit, I mean the upper bit in a bit string that is interpreted as a <a href="http://en.wikipedia.org/wiki/Two%27s_complement">two's complement</a> signed integer.<br/><br/>Central to working with the signbit is the idea that signed shift right aka <a href="http://en.wikipedia.org/wiki/Arithmetic_shift">arithmetic shift right</a> copies the signbit to other bits, and specifically, a signed shift right by 31 (or 63 or in general, one less than the size of your numbers) broadcasts the signbit to all other bits.<br/><br/>Perhaps the most obvious thing you can do with that is broadcasting an <i>arbitrary</i> bit to all other bits. Simply shift that bit into the signbit, and then shift right by 31: <pre class="brush: csharp">static int broadcastbit(int value, int bitindex)<br />{<br /> // put the target bit in the sign<br /> int temp = value << (31 - bitindex);<br /> // copy it to all bits<br /> return temp >> 31;<br />}</pre>In C, that's undefined behaviour (UB). Letting a left shift overflow (which could easily happen here) is UB, and signed right shift is UB in any case. But this is C# code (the source of this page will tell you so) where it's perfectly well-defined. And anyway, this is the kind of UB that is safe to use; the expected thing happens when you combine a sane compiler with a typical platform (say, MSVC on x86). 
But, of course, purists won't like it and on platforms without arithmetic right shift it's probably not going to work.<br/><br/>That actually applies to most of this blog, I suppose.<br/><br/>On to other tricks. This one is slightly harder to grasp, but more useful: calculating the absolute value of an integer without branching. First, the simple to understand version. <pre class="brush: csharp">static int abs(int value)<br />{<br /> // make a mask that is all ones if negative, or all zeroes if non-negative<br /> int mask = value >> 31;<br /> // select -value if negative, or value if non-negative<br /> return (mask & -value) | (~mask & value);<br />}</pre>That's just the usual branchless selection between two things.<br/><br/>The better way to do this has to do with how negation works. The negation of a number <code>x</code> is <code>~x + 1</code> (first definition) or <code>~(x - 1)</code> (second definition). Those definitions are, of course, equivalent. The trick (and you may have seen this coming), is to make the complement and the increment/decrement conditional based on the mask. <pre class="brush: csharp">static int abs(int value)<br />{<br /> // make a mask that is all ones if negative, or all zeroes if non-negative<br /> int mask = value >> 31;<br /> // conditionally complement and subtract -1 (first definition)<br /> return (value ^ mask) - mask;<br /> // conditionally add -1 and complement (second definition)<br /> return (value + mask) ^ mask;<br />}</pre>I've heard that the version of <code>abs</code> using the first definition is patented. That probably doesn't hold up (there will be a mountain of prior art and it's an obvious trick that anyone could derive), and no one's going to find out you're using it much less sue you for it, but you could use the version using the second definition just to be on the safe side.<br/><br/>One good thing about the simple version of <code>abs</code> is that it's using a generic branchless selection. 
That means you're not limited to choosing between <code>value</code> and <code>-value</code>, you can select <i>anything</i>. For example, you can subtract two numbers and use the sign of the difference to select the (unsigned) smallest one. That doesn't always work. The subtraction must not overflow, otherwise it selects the wrong one. The problem goes away if the inputs are smaller than <code>int</code>s, for example if they are bytes. <pre class="brush: csharp">static byte min(byte x, byte y)<br />{<br /> int difference = x - y;<br /> // make a mask that is all ones if x < y, or all zeroes if x >= y<br /> int mask = difference >> 31;<br /> // select x if x < y, or y if x >= y<br /> return (byte)((mask & x) | (~mask & y));<br /> // alternative: use arithmetic to select the minimum<br /> return (byte)(y + (difference & mask));<br />}</pre>The weird mixing of signed and unsigned may be confusing. Try to think of numbers as pure bit strings and only look at the type when an operator depends on it. That's closer to what actually happens in a computer, and it's less confusing that way.<br/><br/>The problem also goes away if you can use the carry flag instead of the signbit, because then you're not using a bit of the result to hold a flag but a separate thing, and thus doesn't "eat into the range of values". But high level languages are too good for the carry flag or something like that, and don't enable you to use it. So here's <code>min</code> in x86 assembly: <pre> ; inputs are in eax and edx, result in eax<br /> sub eax, edx<br /> sbb ecx, ecx ; makes ecx all ones if carry (ie. 
if eax < edx)<br /> and eax, ecx<br /> add eax, edx<br /></pre>Whether this or the more usual branchless version with <code>cmov</code> is faster depends on the processor.<br/><br/>And that has nothing to do with the signbit anymore, I know.<br/><br/>These tricks, and many others, also extend to <a href="http://www.geog.ubc.ca/courses/klink/gis.notes/ncgia/u37.html#SEC37.4.4">tesseral arithmetic</a>, which I'll cover in my <a href="http://bitmath.blogspot.nl/2012/11/tesseral-arithmetic.html">next post</a>, which isn't a filler. </span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-44531270980777091532012-09-16T11:37:00.000-07:002018-11-07T20:01:59.480-08:00Calculating the lower and upper bound of the bitwise OR of two variables that are bounded and may have bits known to be zero<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 175 } }); </script><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;">This new problem clearly is related to two of my <a href="http://bitmath.blogspot.com/2012/09/calculating-lower-bound-of-bitwise-or.html">previous</a> <a href="http://bitmath.blogspot.com/2012/09/calculating-upper-bound-of-bitwise-or.html">posts</a>. But this time, there is slightly more information. It may look like a contrived, purely theoretical, problem, but it actually has applications in abstract interpretation. 
Static knowledge about the values that variables could have at runtime often takes the form of a range and a number that the variable is known to be a multiple of, which is most commonly a power of two.<br/><br/>The lower bound will be \begin{equation} \min _{x \in [a, b] \wedge m\backslash x, y \in [c, d] \wedge n\backslash y} x | y \end{equation} And the upper bound will be \begin{equation} \max _{x \in [a, b] \wedge m\backslash x, y \in [c, d] \wedge n\backslash y} x | y \end{equation} Where <code>m\x</code> means "<code>x</code> is divisible by <code>m</code>".<br/><br/>So how can we calculate them faster than direct evaluation? I don't know, and to my knowledge, no one else does either. But if sound (ie only <i>over</i>approximating) but non-tight bounds are OK, then there is a way. Part of the trick is constraining <code>m</code> and <code>n</code> to be powers of two. It's safe to use <code>m = m & -m</code>. That should look familiar - it's extracting the rightmost bit of <code>m</code>. An other explanation of "the rightmost bit of <code>m</code>" is "the highest power of two that divides <code>m</code>". That doesn't rule out any values of <code>x</code> that were valid before, so it's a sound approximation.<br/><br/>Strangely, for <code>minOR</code>, if the bounds are pre-rounded to their corresponding powers of two, there is absolutely no difference in the code whatsoever. It is possible to set a bit that is known to be zero in that bound, but that can only happen if that bit is one in the other bound anyway, so it doesn't affect the result. The other case, setting a bit that is not known to be zero, is the same as it would be with only the range information.<br/><br/><code>maxOR</code> is a problem though. In <code>maxOR</code>, bits at the right are set which may be known to be zero. Some of those bits may have to be reset. But how many? To avoid resetting too many bits, we have to round the result down to a multiple of <code>min(m, n)</code>. 
That's clearly sound - if a bit can't be one in both <code>x</code> and <code>n</code>, obviously it can't be one in the result. But it turns out not to be tight - for example for <code>[8, 9] 1\x</code> and <code>[0, 8] 4\y</code>, it computes 0b1111, even though the last two bits can only be 0b00 or 0b01 (<code>y</code> does not contribute to these bits, and the range of <code>x</code> is so small that the bits only have those values) so the tight upper bound is 0b1101. If that's acceptable, the code would be <pre class="brush: csharp">static uint maxOR(uint a, uint b, uint c, uint d, uint m, uint n)<br />{<br /> uint resettableb = (a ^ b) == 0 ? 0 : 0xFFFFFFFF >> nlz(a ^ b);<br /> uint resettabled = (c ^ d) == 0 ? 0 : 0xFFFFFFFF >> nlz(c ^ d);<br /> uint resettable = b & d & (resettableb | resettabled);<br /> uint target = resettable == 0 ? 0 : 1u << bsr(resettable);<br /> uint targetb = target & resettableb;<br /> uint targetd = target & resettabled & ~resettableb;<br /> uint newb = b | (targetb == 0 ? 0 : targetb - 1);<br /> uint newd = d | (targetd == 0 ? 0 : targetd - 1);<br /> uint mask = (m | n) & (0 - (m | n));<br /> return (newb | newd) & (0 - mask);<br />}</pre>Which also uses a sneaky way of getting <code>min(m, n)</code> - by ORing them and then taking the rightmost bit. Because why not.</br><br/>I haven't (yet?) found a nice way to calculate the tight upper bound. 
Even if I do, that still leaves things non-tight when the old <code>m</code> or <code>n</code> were not powers of two.<br/><br/></span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-18239386064042961472012-09-14T11:03:00.000-07:002018-11-07T19:49:09.269-08:00Calculating the lower and upper bounds of the bitwise AND of two bounded variables<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 175 } }); </script><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;">This post is the closely related the <a href="http://bitmath.blogspot.com/2012/09/calculating-upper-bound-of-bitwise-or.html">previous post</a> and the <a href="http://bitmath.blogspot.com/2012/09/calculating-lower-bound-of-bitwise-or.html">post before it</a>, so I strongly suggest you read those two first.<br><br/>It's the same idea as before, but with bitwise AND instead of OR. That leads to some interesting symmetries. First, the definitions. 
The lower bound will be \begin{equation} \min _{x \in [a, b], y \in [c, d]} x \& y \end{equation} And the upper bound will be \begin{equation} \max _{x \in [a, b], y \in [c, d]} x \& y \end{equation} The algorithms given by Warren are <pre class="brush: cpp">unsigned minAND(unsigned a, unsigned b, <br /> unsigned c, unsigned d) {<br /> unsigned m, temp; <br /> <br /> m = 0x80000000; <br /> while (m != 0) {<br /> if (~a & ~c & m) {<br /> temp = (a | m) & -m; <br /> if (temp <= b) {a = temp; break;} <br /> temp = (c | m) & -m; <br /> if (temp <= d) {c = temp; break;} <br /> } <br /> m = m >> 1; <br /> } <br /> return a & c; <br />}</pre><pre class="brush: cpp">unsigned maxAND(unsigned a, unsigned b, <br /> unsigned c, unsigned d) {<br /> unsigned m, temp; <br /> <br /> m = 0x80000000; <br /> while (m != 0) {<br /> if (b & ~d & m) {<br /> temp = (b & ~m) | (m - 1); <br /> if (temp >= a) {b = temp; break;} <br /> } <br /> else if (~b & d & m) {<br /> temp = (d & ~m) | (m - 1); <br /> if (temp >= c) {d = temp; break;} <br /> } <br /> m = m >> 1; <br /> } <br /> return b & d; <br />}</pre>Obviously, they follow the same basic idea. Try to set a bit so you can reset the bits to the right of it in the lower bound, or try to reset a bit so you can set the bits to the right of it in the upper bound. The same reasoning about starting at <code>0x80000000 >> nlz(~a & ~c)</code> or <code>0x80000000 >> nlz(b ^ d)</code> applies, and the same reasoning about "bits at and to the right of <code>a ^ b</code>" applies as well. I'll skip the "sparse loops" this time, they're nice enough but mainly instructive, and repeating the same idea twice doesn't make it twice as instructive. So straight to the loopless algorithms: <pre class="brush: csharp">static uint minAND(uint a, uint b, uint c, uint d)<br />{<br /> uint settablea = (a ^ b) == 0 ? 0 : 0xFFFFFFFF >> nlz(a ^ b);<br /> uint settablec = (c ^ d) == 0 ? 
0 : 0xFFFFFFFF >> nlz(c ^ d);<br /> uint settable = ~a & ~c & (settablea | settablec);<br /> uint target = settable == 0 ? 0 : 1u << bsr(settable);<br /> uint targeta = target & settablea;<br /> uint targetc = target & settablec & ~settablea;<br /> uint newa = a & (targeta == 0 ? 0xFFFFFFFF : 0-targeta);<br /> uint newc = c & (targetc == 0 ? 0xFFFFFFFF : 0-targetc);<br /> return newa & newc;<br />}</pre><pre class="brush: csharp">static uint maxAND(uint a, uint b, uint c, uint d)<br />{<br /> uint resettableb = (a ^ b) == 0 ? 0 : 0xFFFFFFFF >> nlz(a ^ b);<br /> uint resettabled = (c ^ d) == 0 ? 0 : 0xFFFFFFFF >> nlz(c ^ d);<br /> uint candidatebitsb = b & ~d & resettableb;<br /> uint candidatebitsd = ~b & d & resettabled;<br /> uint candidatebits = candidatebitsb | candidatebitsd;<br /> uint target = candidatebits == 0 ? 0 : 1u << bsr(candidatebits);<br /> uint targetb = target & b;<br /> uint targetd = target & d & ~b;<br /> uint newb = b | (targetb == 0 ? 0 : targetb - 1);<br /> uint newd = d | (targetd == 0 ? 0 : targetd - 1);<br /> return newb & newd;<br /></pre>Symmetry everywhere. But not really anything to new to explain.<br/><br/><a href="http://bitmath.blogspot.com/2012/09/calculating-lower-and-upper-bound-of.html">Next post</a>, something new to explain. 
</span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-82416886179764428172012-09-14T05:49:00.000-07:002017-11-23T04:40:10.195-08:00Calculating the upper bound of the bitwise OR of two bounded variables<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 175 } }); </script><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;">This post is the closely related the <a href="http://bitmath.blogspot.com/2012/09/calculating-lower-bound-of-bitwise-or.html">previous one</a>, so I strongly suggest you read that one first.<br/><br/>The only difference with the previous post, is that this time, we're interested in the upper bound instead of the lower bound. In other words, evaluate<br/>\begin{equation} \max _{x \in [a, b], y \in [c, d]} x | y \end{equation} The algorithm given by Warren in Hackers Delight is <pre class="brush: cpp">unsigned maxOR(unsigned a, unsigned b, <br /> unsigned c, unsigned d) {<br /> unsigned m, temp; <br /> <br /> m = 0x80000000; <br /> while (m != 0) {<br /> if (b & d & m) {<br /> temp = (b - m) | (m - 1); <br /> if (temp >= a) {b = temp; break;} <br /> temp = (d - m) | (m - 1); <br /> if (temp >= c) {d = temp; break;} <br /> } <br /> m = m >> 1; <br /> } <br /> return b | d; <br />}</pre>And it's really the same sort of idea as the algorithm to calculate the minimum, except this time we're looking for a place where both <code>b</code> and <code>d</code> are one, so we can try to reset that bit and set all the bits to the right of it.<br/><br/>Warren notes that <code>m</code> can start at <code>0x80000000 >> nlz(b & d)</code>, and once again the same principle holds: it's enough to <i>only</i> look at those bits which are one in <code>b & d</code>, and they can be visited from high to low with 
<code>bsr</code><pre class="brush: csharp">static uint maxOR(uint a, uint b, uint c, uint d)<br />{<br /> while (bits != 0)<br /> {<br /> uint m = 1u << bsr(bits);<br /><br /> uint temp;<br /> temp = (b - m) | (m - 1);<br /> if (temp >= a) { b = temp; break; }<br /> temp = (d - m) | (m - 1);<br /> if (temp >= c) { d = temp; break; }<br /><br /> bits ^= m;<br /> }<br /> return b | d;<br />}</pre>And also, again, we can use that the bit we're looking for in <code>b</code> must be at or to the right of the leftmost bit in <code>a ^ b</code> (<code>c ^ d</code> for <code>d</code>), and that the selected bit doesn't actually have to be changed. <pre class="brush: csharp">static uint maxOR(uint a, uint b, uint c, uint d)<br />{<br /> uint resettableb = (a ^ b) == 0 ? 0 : 0xFFFFFFFF >> nlz(a ^ b);<br /> uint resettabled = (c ^ d) == 0 ? 0 : 0xFFFFFFFF >> nlz(c ^ d);<br /> uint candidatebits = b & d & (resettableb | resettabled);<br /> uint target = candidatebits == 0 ? 0 : 1u << bsr(candidatebits);<br /> uint targetb = target & resettableb;<br /> uint targetd = target & resettabled & ~resettableb;<br /> uint newb = b | (targetb == 0 ? 0 : targetb - 1);<br /> uint newd = d | (targetd == 0 ? 0 : targetd - 1);<br /> return newb | newd;<br />}</pre>Most of the code should be obvious after a moments thought, but something interesting and non-symmetric happens for <code>targetd</code>. There, I had to make sure that a change is not made to <i>both</i> bounds (that would invalidate the whole idea of "being able to make the change without affecting that bit in the result"). In <code>minOR</code> that happened automatically because it looked at positions where the bits were different, so both <code>target</code>s couldn't both be non-zero. Here, one of the bounds has to be explicitly prioritized before the other.<br/><br/><a href="http://bitmath.blogspot.com/2012/09/calculating-lower-and-upper-bounds-of.html">Next post</a>, maybe the same sort of thing but for bitwise AND. 
Then again, maybe not. I'll see what I can come up with.<br/>edit: bitwise AND it is. </span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.comtag:blogger.com,1999:blog-1465986942435538208.post-83582075440097110822012-09-13T16:19:00.001-07:002018-11-08T17:59:05.696-08:00Calculating the lower bound of the bitwise OR of two bounded variables<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script><script type="text/javascript">MathJax.Hub.Config({ "HTML-CSS": { scale: 175 } }); </script><span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;">What does that even mean?<br/><br/>Suppose you have the variables <code>x in [a, b]</code> and <code>y in [c, d]</code>. The question then is: what is the lowest possible value of <code>x | y</code> where <code>x</code> and <code>y</code> are both in their corresponding ranges. In other words, evaluate<br/>\begin{equation} \min _{x \in [a, b], y \in [c, d]} x | y \end{equation} At a maximum of 2<sup>64</sup> iterations, direct evaluation is clearly not an option for 32-bit integers.<br/><br/>Fortunately, there is an algorithm that has a complexity linear in the number of bits, given by Warren in Hackers Delight, Propagating Bounds through Logical Operations, which the <a href="http://www.hackersdelight.org/permissions.htm">license</a> permits me to show here: <pre class="brush: cpp">unsigned minOR(unsigned a, unsigned b, <br /> unsigned c, unsigned d) {<br /> unsigned m, temp; <br /> <br /> m = 0x80000000; <br /> while (m != 0) {<br /> if (~a & c & m) {<br /> temp = (a | m) & -m; <br /> if (temp <= b) {a = temp; break;} <br /> } <br /> else if (a & ~c & m) {<br /> temp = (c | m) & -m; <br /> if (temp <= d) {c = temp; break;} <br /> } <br /> m = m >> 1; <br /> } <br /> return a | c; <br />}</pre><br/>So let's break down what it's doing. 
It starts at the MSB, and then it searches for either the highest bit that is zero <code>a</code> and one in <code>c</code> such that changing <code>a</code> to have that bit set and all bits the right of it unset would not make the new <code>a</code> higher than <code>b</code>, or, the highest bit that is zero <code>c</code> and one in <code>a</code> such that changing <code>c</code> to have that bit set and all bits the right of it unset would not make the new <code>c</code> higher than <code>d</code>, whichever one comes first.<br/><br/>That's literally easier to code than to explain, and I haven't even explained yet <i>why</i> it works.<br/>Suppose the highest such bit is found in <code>a</code>. Setting that bit in <code>a</code> does not affect the value of <code>a | c</code>, after all, that bit must have been set in <code>c</code> already so it was already set in <code>a | c</code>, too. However, resetting the bits to the right of that bit however can lower <code>a | c</code>. Notice that it is pointless to continue looking at lower bits - in <code>a</code> there are no more bits to reset, and for <code>c</code> there are no more bits that have the corresponding bit in <code>a</code> set.<br/><br/>Warren notes that <code>m</code> could start at <code>0x80000000 >> nlz(a ^ c)</code> (where <code>nlz</code> is the "number of leading zeros" function), meaning it starts looking at the first bit that is different in <code>a</code> and <code>c</code>. But we can do better. Not only can we start at the first bit which is different in <code>a</code> and <code>c</code>, we could look at <i>only</i> those bits. That requires frequent invocation of the <code>nlz</code> function (or <code>bsr</code>, <b>b</b>it <b>s</b>can <b>r</b>everse, giving the index of the leftmost bit), but it maps to a fast instruction on many platforms. 
<pre class="brush: csharp">uint minOR(uint a, uint b, uint c, uint d)<br />{<br /> uint bits = a ^ c;<br /> while (bits != 0)<br /> {<br /> // get the highest bit<br /> uint m = 1u << (nlz(bits) ^ 31);<br /> // remove the bit<br /> bits ^= m;<br /> if ((a & m) == 0)<br /> {<br /> uint temp = (a | m) & -m;<br /> if (temp <= b) { a = temp; break; }<br /> }<br /> else<br /> {<br /> uint temp = (c | m) & -m;<br /> if (temp <= d) { c = temp; break; }<br /> }<br /> }<br /> return a | c;<br />}</pre>One interesting consequence of looking only at the bits that are different is that the second <code>if</code> disappears - the case where the bits are equal is ruled out by looking only at the different bits in the first place.<br/><br/>But that is not all. The bit positions at which the <= operators could return true, are precisely all those at and to the right of one important point: the highest set bit in <code>a ^ b</code> (or <code>c ^ d</code> for the other bound). Why? Well the upper bounds are not lower than the lower bounds, so the first bit at which they differ must be the first position at which the lower bound has a zero where the upper bound has a one. Setting that bit to one and all bits to the right to zero in the lower is clearly valid (ie doesn't make it higher than the upper bound), but whether that bit can actually be set depends on the other lower bound as well.<br/><br/>What that means in practical terms, is that the value of <code>m</code> that first passes the tests is directly computable. No loops required. Also, because the test to check whether the new bound is still less than or equal to the upper bound isn't necessary anymore (by construction, that test always passes), the bit doesn't even have to be set anymore - without the test the new value isn't really needed, and the entire idea was that setting that bit would not change the result, so setting it is pointless. 
<pre class="brush: csharp">uint minOR(uint a, uint b, uint c, uint d)<br />{<br /> uint settablea = (a ^ b) == 0 ? 0 : 0xFFFFFFFF >> nlz(a ^ b);<br /> uint settablec = (c ^ d) == 0 ? 0 : 0xFFFFFFFF >> nlz(c ^ d);<br /> uint candidatebitsa = (~a & c) & settablea;<br /> uint candidatebitsc = (a & ~c) & settablec;<br /> uint candidatebits = candidatebitsa | candidatebitsc;<br /><br /> uint target = candidatebits == 0 ? 0 : 1u << bsr(candidatebits);<br /> uint targeta = c & target;<br /> uint targetc = a & target;<br /><br /> uint newa = a & ~(targeta == 0 ? 0 : targeta - 1);<br /> uint newc = c & ~(targetc == 0 ? 0 : targetc - 1);<br /> return newa | newc;<br />}</pre>Sadly, there's an awful lot of conditionals in there, which could be branches. But they could also be conditional moves. And on x86 at least, both <code>bsr</code> and <code>lzcnt</code> set a nice condition flag if the input was zero, so it's really not too bad in practice. It is, in my opinion, a pity that there aren't more instruction to deal with leftmost bits, while instruction that deal with the rightmost bit are <a href="https://en.wikipedia.org/wiki/Bit_Manipulation_Instruction_Sets">being added</a>. They are nice, I will admit, but the rightmost bit could already be efficiently dealt with, while the leftmost bit is somewhat problematic.<br/><br/><a href="http://bitmath.blogspot.com/2012/09/calculating-upper-bound-of-bitwise-or.html">Next post</a>, the same thing but for the upper bound. This post is the start of a series of posts that address the propagation of intervals through bitwise operations. </span>Haroldhttp://www.blogger.com/profile/16934800558256607460noreply@blogger.com