I’m using a block processing approach to handle a calculation between two large matrices.
The code speeds up significantly with a larger block size, but if I go too large I get an out-of-memory error. Currently I hand-tune the block size to find the largest one that works for a given input.
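For concreteness, here is a rough sketch of what I mean by block processing; the operation (a matrix product) and the names `A`, `B`, `C`, and `blockSize` are placeholders for my actual code:

```matlab
% Sketch only: compute C = A*B one block of rows at a time, so each
% iteration's temporary is blockSize-by-size(B,2) rather than the
% full result being formed in one step.
blockSize = 1000;                          % hand-tuned at the moment
C = zeros(size(A,1), size(B,2));
for i = 1:blockSize:size(A,1)
    rows = i:min(i+blockSize-1, size(A,1));
    C(rows, :) = A(rows, :) * B;           % process one block of rows
end
```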
My question: how can I automate the process of finding the largest possible block size?
I’ve toyed with wrapping everything in a try/catch block and looping with progressively smaller block sizes until one succeeds. I’m hoping there is a more elegant or idiomatic way.
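The try/catch fallback I have in mind looks roughly like this; `process_blocks` stands in for my real routine, the starting size is arbitrary, and the error identifiers are my guesses at MATLAB’s out-of-memory errors, so they are worth verifying on your version:

```matlab
% Sketch: halve the block size on out-of-memory failures until one works.
blockSize = 4096;            % optimistic starting guess
while blockSize >= 1
    try
        result = process_blocks(A, B, blockSize);
        break;               % success: keep the largest size that worked
    catch err
        if strcmp(err.identifier, 'MATLAB:nomem') || ...
           strcmp(err.identifier, 'MATLAB:array:SizeLimitExceeded')
            blockSize = floor(blockSize / 2);   % halve and retry
        else
            rethrow(err);    % unrelated error: don't swallow it
        end
    end
end
```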
Before doing the block processing, you can use the MEMORY function (note: available on Windows only) to see how much memory is already in use and how much is left for any additional variables the block processing may need to create. If you can estimate the total amount of memory the block-processing steps will need as a function of the block size, you can work out how large the block size can be before you run out of available memory. This may be easier said than done, since I don’t know exactly how you are doing the block processing.
Here’s a simple example. I’ll start by clearing the workspace and creating two large matrices:
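The original code for this step isn’t shown, so the following is a reconstruction: the 3000-by-3000 sizes are arbitrary, and `MaxPossibleArrayBytes` is the field of the MEMORY output reporting the largest contiguous block of memory available:

```matlab
clear all                 % start from an empty workspace
M1 = rand(3000);          % 3000-by-3000 matrix of doubles (~72 MB)
M2 = rand(3000);          % a second matrix of the same size
user = memory;            % query memory usage (Windows only)

% Largest N such that an N-by-N double matrix (8*N*N bytes) still
% fits in the largest available contiguous block:
N = floor(sqrt(user.MaxPossibleArrayBytes / 8));
```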
Now, let’s say I know I will have to allocate an N-by-N matrix of doubles, which will require 8*N*N bytes of memory (8 bytes per double). Dividing the largest available block of memory reported by MEMORY by 8 and taking the square root (rounded down) gives the largest N I can use.

If you are routinely having trouble with running out of memory, here are a couple of things you can do: