Bitwise Combinational Operators


Swapping Variables

If you’re ever writing some code that needs to fit in a very tight amount of memory (such as a piece of assembly language), you might come across a situation where you want to swap two variables without having to use a third. The XOR operator provides a very cool way to do this. It works because of three facts: the XOR operation is commutative and associative (X ^ Y is the same thing as Y ^ X), anything XORed with itself yields 0, and anything XORed with 0 is left unchanged. With that in mind, suppose we have two constants, called CONST_A and CONST_B. Take a look at the following code fragment:

                   

//                        Value of x                      Value of y
// ---------------------------------------------------------------------------
int x = 0, y = 0;     //  0                               0
x = CONST_A;          //  CONST_A                         0
y = CONST_B;          //  CONST_A                         CONST_B
x = x ^ y;            //  CONST_A ^ CONST_B               CONST_B
y = x ^ y;            //  CONST_A ^ CONST_B               CONST_A ^ CONST_B ^ CONST_B
                      //                                    == CONST_A ^ 0 == CONST_A
x = x ^ y;            //  CONST_A ^ CONST_A ^ CONST_B     CONST_A
                      //    == 0 ^ CONST_B == CONST_B


Look this over for a minute so you can see exactly what’s going on. The two variables begin as CONST_A and CONST_B, respectively. With two XOR operations, y becomes CONST_A ^ CONST_B ^ CONST_B, but CONST_B ^ CONST_B is of course 0, and CONST_A ^ 0 is simply CONST_A. Another XOR operation does the reverse to x, and in three quick statements, you have swapped the values of two variables, without using a third. Is that cool or what? Also, note that there is a ^= operator similar to the += and -= operators in C, so the code for swapping x and y comes down to this:

x ^= y;
y ^= x;
x ^= y;
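
If you want to see it run, here’s a minimal, self-contained sketch; the values 13 and 42 are just placeholders I picked for illustration. The one caveat is that the trick must be applied to two distinct variables: if x and y were the same object, the first XOR would zero it out.

#include <stdio.h>

int main(void)
{
    int x = 13;    // arbitrary example values
    int y = 42;

    x ^= y;        // x now holds 13 ^ 42
    y ^= x;        // y becomes 13 ^ 42 ^ 42 == 13
    x ^= y;        // x becomes 13 ^ 42 ^ 13 == 42

    printf("x = %d, y = %d\n", x, y);    // prints "x = 42, y = 13"
    return 0;
}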

Bitwise operations are extremely fast for the processor to handle, so this is nice and quick, and it also rids you of a temporary variable. All right, we’re almost done. The last thing we can do is replace a few arithmetic operations with bitwise ones.

Replacing Arithmetic Operations

What happens when you take a binary number and multiply it by two? Let’s write it out the long way to see exactly what goes on. First, choose a random binary number, say 0010 1101. Then write it as a sum of powers of two, as we saw at the beginning of this article:

0010 1101 = (1 * 2^5) + (1 * 2^3) + (1 * 2^2) + (1 * 2^0)

I have eliminated the cases where the bit is 0, because they just multiply out to 0 anyway so they don’t affect the sum. Now, suppose we take this sum of products and multiply it by two. Here’s what we get, again in the expanded notation:

0010 1101 * 2 = 2(1 * 2^5) + 2(1 * 2^3) + 2(1 * 2^2) + 2(1 * 2^0)
0010 1101 * 2 = (1 * 2^6) + (1 * 2^4) + (1 * 2^3) + (1 * 2^1)

Notice what our result is — since the original number was written as a sum of powers of two times the bits of the number, all that happens when we multiply the whole sum by two is that the exponents on the 2s increase by one each. See what I’m getting at? The result of the multiplication is still a sum of powers of two, and as such, it can be directly reduced to a binary number. Let’s see what it is.

0010 1101 * 2 = (1 * 2^6) + (1 * 2^4) + (1 * 2^3) + (1 * 2^1) = 0101 1010

Notice anything odd about the result? That’s right: it’s the original number, shifted left by one position. This is because a shift left increases the weight on each bit by one place, which is the same as increasing the exponents on the 2s in expanded form, which is the same as multiplying by two. Thus, you can multiply by any power of two by shifting left the same number of places. For example, if you wanted to multiply a number by 16, which is 2^4, you could simply shift it left four places. The following pairs of statements are all equivalent to one another:

x = y * 8;
x = y << 3;

x = y * 64;
x = y << 6;

x = y * 32768;
x = y << 15;

Pretty neat, isn’t it? This is a fast way to accomplish multiplication by powers of two, using only a bitwise shift. If you’ve ever done any circuit design, you know that a bitwise shift is extremely fast, and so this can help you out. Newer processors’ multiplication speeds are getting faster and faster, so this is less of a help than it used to be, but it’s still an interesting thing to know.
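
If you’d like to check this for yourself, here’s a quick sketch built around the same example number as above (0010 1101 is 45 in decimal); it simply prints both sides of each equivalence.

#include <stdio.h>

int main(void)
{
    unsigned int y = 45;                 // 0010 1101

    printf("%u %u\n", y * 2, y << 1);    // 90 90   (0101 1010)
    printf("%u %u\n", y * 8, y << 3);    // 360 360 (8 is 2^3, so shift by three places)
    printf("%u %u\n", y * 64, y << 6);   // 2880 2880

    return 0;
}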

Now, you might be asking: if shift left is equivalent to multiplication by two, is shift right equivalent to division by two? The answer is yes! Shifting right lowers the weight on each bit by one place, meaning that the exponents on the 2s all decrease by one. Note that this is integer division only; you can’t get a fractional value or a remainder out of this. Division in hardware is not the fastest thing in the world, though it is getting better, so this is again a good trick to know. These pairs of statements are equivalent to one another:

x = y / 4;
x = y >> 2;

x = y / 32;
x = y >> 5;
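
Here’s the same kind of check for the division case; the comments point out what happens to the bits that get shifted out, along with a caveat about signed values.

#include <stdio.h>

int main(void)
{
    unsigned int y = 45;                 // 0010 1101

    printf("%u %u\n", y / 4, y >> 2);    // 11 11 -- 0010 1101 becomes 0000 1011;
                                         //          the low two bits (01) are simply discarded
    printf("%u %u\n", y / 32, y >> 5);   // 1 1

    // Caveat: this only matches integer division for non-negative values.
    // Right-shifting a negative signed number is implementation-defined in C,
    // and even an arithmetic shift rounds differently from / for negatives.
    return 0;
}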

Finally, there is one more arithmetic operation that we can replace. Let’s take our last example number, 0010 1101, and suppose we extract the last three bits, so the number is in two parts:

Part 1:    0010 1000
Part 2:  + 0000 0101
         -----------
Sum:       0010 1101

Part 1 of this number has had its lower three bits cleared to 0, because they were all pulled out into Part 2. So the smallest power of two in the longhand form of Part 1 is 8 = 2^3, because bit 3 is the lowest bit set to 1. That means that Part 1 is evenly divisible by 8. Part 2 is of course less than eight, because only the three lowest bits have been extracted into it, so it can have a maximum value of 2^3 - 1 = 7. Thus if you add Parts 1 and 2 together to get our original number, and then divide that number by eight, the result is the same as if you had simply divided Part 1 by eight. Have you figured out what Part 2 is yet? It’s the remainder when you divide the sum by eight. That means you can use an AND to replace a modulus operation whenever the divisor is a power of two.
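
Here’s that split written out as a small sketch, using the same example number (0010 1101, or 45); part1 and part2 are just the names I’ve given the two pieces.

#include <stdio.h>

int main(void)
{
    unsigned int n     = 45;        // 0010 1101
    unsigned int part1 = n & ~7u;   // 0010 1000 -- low three bits cleared
    unsigned int part2 = n &  7u;   // 0000 0101 -- low three bits extracted

    printf("%u\n", part1 + part2);          // 45, the original number
    printf("%u %u\n", n / 8, part1 / 8);    // 5 5 -- part2 never affects the quotient
    printf("%u %u\n", n % 8, part2);        // 5 5 -- part2 is exactly the remainder
    return 0;
}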

You might want to re-read that to let it sink in a bit, because this one isn’t quite as obvious as the last two. Consider this. You have a number you want to divide by 32. If you were to shift the number right by five places, this accomplishes the division, and the five least significant bits are thrown away. That means that those five bits (which we’ll call x) had no effect whatsoever on the result of the division. All they represented was how close that original number was to reaching the next multiple of 32.

In summary, to take a value modulo a power of two, 2^n, simply mask off its lowest n bits using a bitwise AND. The mask you need has 1s in all n of those bit positions. But notice that we have the following identity:

2^(n+1) - 1 = 2^n + 2^(n-1) + ... + 2^1 + 2^0
In other words, the number we need to use as our mask in order to take a value modulo 2^n is simply 2^n - 1. To illustrate this, the following pairs of statements are all equivalent to one another:

x = y % 8;
x = y & 7;

x = y % 32;
x = y & 31;

x = y % 256;
x = y & 255;
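
If you want the general pattern in one place, here is one way to write it as a sketch; MOD_POW2 is just a name I made up for this example, and it leans on the identity above (a mask of n ones is 2^n - 1).

#include <stdio.h>

// y mod 2^n, computed with a bitwise AND; valid for n from 0 up to one less
// than the bit width of unsigned int
#define MOD_POW2(y, n)  ((y) & ((1u << (n)) - 1u))

int main(void)
{
    printf("%u\n", MOD_POW2(45u, 3));    // 5, same as 45 % 8
    printf("%u\n", MOD_POW2(45u, 5));    // 13, same as 45 % 32
    return 0;
}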

Conclusion

All right, we’re all finished. We’ve accomplished quite a bit with this article. You’ve now been through a brief introduction to the various number systems and how they are used. You’ve also seen a number of useful applications of this knowledge, such as inserting and extracting values within a larger variable, performing fast arithmetic operations, and a neat trick to swap two variables quickly.

Post by Joseph “Ironblayde” Farrell

