Hey, in the Programming Pearls book, there is a source code for setting, clearing and testing a bit of the given index in an array of ints that is actually a set representation.
The code is the following:
    #include <stdio.h>

    #define BITSPERWORD 32
    #define SHIFT 5
    #define MASK 0x1F
    #define N 10000000

    int a[1 + N/BITSPERWORD];

    void set(int i)  { a[i>>SHIFT] |=  (1 << (i & MASK)); }
    void clr(int i)  { a[i>>SHIFT] &= ~(1 << (i & MASK)); }
    int  test(int i) { return a[i>>SHIFT] & (1 << (i & MASK)); }
Could somebody explain to me the reason for the SHIFT and MASK defines, and what their purposes are in the code?
I’ve already read the previous related question.
VonC posted a good answer about bitmasks in general. Here’s some information that’s more specific to the code you posted.
Given an integer representing a bit, we work out which member of the array holds that bit. That is: bits 0 to 31 live in a[0], bits 32 to 63 live in a[1], etc. All that i >> SHIFT does is i / 32; it works out which member of a the bit lives in. With an optimising compiler, these are probably equivalent.

Now that we've found which member of a that bit flag lives in, we need to make sure we set the correct bit within that integer. This is what 1 << i does. However, we must never try to access, say, the 33rd bit of a 32-bit integer, so the shift amount is constrained by writing 1 << (i & 0x1F). The magic here is that 0x1F is 31, so we'll never left-shift the bit represented by i more than 31 places (any higher bit would instead have gone in the next member of a).
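To make the word/bit split concrete, here is a small self-contained sketch (the array size N and the probed index 70 are just values I chose for illustration) that prints the word index and bit position the macros compute, and checks that the shift-and-mask forms match plain division and modulo:

    #include <stdio.h>

    #define BITSPERWORD 32
    #define SHIFT 5      /* 2^5 == 32, so i >> SHIFT == i / 32 */
    #define MASK 0x1F    /* 31, so i & MASK == i % 32 */
    #define N 1000

    int a[1 + N/BITSPERWORD];

    void set(int i)  { a[i>>SHIFT] |=  (1 << (i & MASK)); }
    void clr(int i)  { a[i>>SHIFT] &= ~(1 << (i & MASK)); }
    int  test(int i) { return a[i>>SHIFT] & (1 << (i & MASK)); }

    int main(void) {
        int i = 70;
        /* 70 / 32 == 2 and 70 % 32 == 6: bit 70 is bit 6 of a[2] */
        printf("word = %d, bit = %d\n", i >> SHIFT, i & MASK);
        set(i);
        printf("after set: %d\n", test(i) != 0);
        clr(i);
        printf("after clr: %d\n", test(i) != 0);
        return 0;
    }

Running it prints word = 2, bit = 6, then 1 after set and 0 after clr, showing the bit landing exactly where i / 32 and i % 32 say it should.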