I have a very large multidimensional vector that changes in size all the time.
Is there any point in using vector.reserve() when I only know a good approximation of the sizes?
So basically I have a vector
A[256*256][x][y]
where x goes from 0 to 50 for every iteration of the program and then back to 0 again. The y values can differ every time, which means that for each of the
[256*256][x] elements the innermost vector can have a different size, but always smaller than 256.
So to clarify my problem this is what I have:
vector<vector<vector<int>>> A;
for (int i = 0; i < 256*256; i++) {
    A.push_back(vector<vector<int>>());
    A[i].push_back(vector<int>());
    A[i][0].push_back(SOME_VALUE);
}
Add elements to the vector…
A.clear();
And after this I do the same thing again from the top.
When and how should I reserve space for the vectors?
If I have understood this correctly, I would save a lot of time by using reserve, since I change the sizes all the time. Is that right?
What would be the negative/positive sides of reserving the maximum size my vector can have, which would be [256*256][50][256] in some cases?
BTW. I am aware of different Matrix Templates and Boost, but have decided to go with vectors on this one…
EDIT:
I was also wondering how to use the reserve function with multidimensional vectors.
If I only reserve the vector in two dimensions, will it then copy the whole thing if I exceed its capacity in the third dimension?
To help with the discussion, consider the following typedefs:

typedef vector<int> int_t;     // innermost vector
typedef vector<int_t> mid_t;   // intermediate vector
typedef vector<mid_t> ext_t;   // external vector

The cost of growing (a capacity increase of) an int_t will only affect the contents of that particular vector and will not affect any other element. The cost of growing a mid_t requires copying all of the stored elements in that vector, that is, all of its int_t vectors, which is considerably more costly. The cost of growing the ext_t is huge: it requires copying all the elements already stored in the container.

Now, to increase performance, it is most important to get the ext_t size correct (it seems to be fixed at 256*256 in your question). Then get the intermediate mid_t sizes right so that expensive reallocations are rare.

The amount of memory you are talking about is huge, so you might want to consider less standard ways of solving your problem. The first thing that comes to mind is adding an extra level of indirection: if instead of holding the actual vectors you hold smart pointers to the vectors, you can reduce the cost of growing the mid_t and ext_t vectors (if the ext_t size is fixed, just use a vector of mid_t). This will mean that code using your data structure becomes more complex (or better, add a wrapper that takes care of the indirection). Each int_t vector will be allocated once in memory and will never move in either mid_t or ext_t reallocations. The cost of reallocating a mid_t is proportional to the number of allocated int_t vectors, not to the actual number of inserted integers.

Another thing you should take into account is that std::vector::clear() does not free the allocated internal storage; it only destroys the contained objects and sets the size to 0. That is, calling clear() will never release memory. The pattern for actually releasing the allocated memory in a vector is the swap idiom:

vector<int>().swap(v);  // swap v with an empty temporary; the temporary takes v's buffer and frees it on destruction