I have something I need a 2D array for, but for better cache performance, I’d rather have it actually be a normal array. Here’s the idea I had but I don’t know if it’s a terrible idea:
const int XWIDTH = 10, YWIDTH = 10;
int main() {
    int *tempInts = new int[XWIDTH * YWIDTH];  // one contiguous block
    int **ints = new int*[XWIDTH];             // row pointers into that block
    for (int i = 0; i < XWIDTH; i++) {
        ints[i] = &tempInts[i * YWIDTH];
    }
    // do things with ints
    delete[] ints[0];  // ints[0] == tempInts, so this frees the block
    delete[] ints;     // then free the row-pointer array
    return 0;
}
So the idea is that instead of newing a bunch of arrays (and having them placed in different places in memory), I just point to an array I made all at once.
The reason for the delete[] ints[0]; (rather than keeping tempInts around and deleting that) is that I'm actually doing this in a class, and it would save [trivial amounts of] memory not to store the original pointer.
Just wondering if there are any reasons this is a horrible idea, or if there's an easier/better way. The goal is to be able to access the array as ints[x][y] rather than ints[x*YWIDTH+y].
EDIT: A simple benchmark suggests that my way is faster without the optimizer, but gcc seems to optimize the simple (flat-index) way better for some reason.
If you compile with gcc -O0, the stack version should be fastest, then mine, then the normal one. If you set X_MAX to a large value and Y_MAX to a small value and compile with gcc -O3, mine and the stack version should be really fast, but the normal one won't be. If you make X_MAX small and Y_MAX big, the normal way should win (even over the stack method, for some reason).
The problem with that kind of approach is that it is error prone. I would tell you to wrap the allocation and per-element access of your 2D array in a class.
In that case, if you ever feel the need to change the allocation strategy, you only have to change the method implementations, keeping the interface intact. The rest of your code would use only the operator() and the constructor. It will also help you prevent memory leaks.