I’m just starting with NumPy so I may be missing some core concepts…
What’s the best way to create a NumPy array from a dictionary whose values are lists?
Something like this:
```python
d = {1: [10, 20, 30], 2: [50, 60], 3: [100, 200, 300, 400, 500]}
```
Should turn into something like:
```python
data = [[10, 20, 30, ?, ?], [50, 60, ?, ?, ?], [100, 200, 300, 400, 500]]
```
I’m going to do some basic statistics on each row, e.g.:
```python
deviations = numpy.std(data, axis=1)
```
Questions:
- What’s the best / most efficient way to create the numpy.array from the dictionary? The dictionary is large: a couple of million keys, each with ~20 items.

- The number of values differs from row to row. If I understand correctly, NumPy wants uniform row lengths, so what do I fill in for the missing items to make std() happy? (A sketch of what I mean follows this list.)
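To make that second question concrete, here is a rough sketch of the kind of thing I’m imagining. The NaN padding and numpy.nanstd are just my guess at a workable fill strategy, which is exactly the part I’m asking about:

```python
import numpy as np

d = {1: [10, 20, 30], 2: [50, 60], 3: [100, 200, 300, 400, 500]}

# Pad each row out to the longest length with NaN so the array is rectangular.
max_len = max(len(v) for v in d.values())
data = np.full((len(d), max_len), np.nan)
for i, row in enumerate(d.values()):
    data[i, :len(row)] = row

# nanstd ignores the NaN padding, so each row's std uses only its real values.
deviations = np.nanstd(data, axis=1)
print(deviations)  # [  8.16...   5.  141.42...]
```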
Update: One thing I forgot to mention: while the plain-Python techniques are reasonable (e.g. looping over a few million items is fast), they’re constrained to a single CPU. NumPy operations scale nicely to the hardware and use all the CPUs, so they’re attractive.
You don’t need to create numpy arrays to call numpy.std(). You can call numpy.std() in a loop over all the values of your dictionary; each list is converted to a numpy array on the fly to compute the standard deviation.

The downside of this method is that the main loop runs in Python rather than in C. But I guess this should be fast enough: you still compute each std at C speed, and you save a lot of memory because you don’t have to store fill values to square off the variable-sized rows.
An example with O(N) complexity, where N is the total number of values:
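A minimal sketch of that loop (assuming you want one standard deviation per key; the dict-comprehension form is just one way to write it):

```python
import numpy as np

d = {1: [10, 20, 30], 2: [50, 60], 3: [100, 200, 300, 400, 500]}

# Single pass over the dictionary: numpy.std converts each list to an
# array on the fly, so no padded 2-D array is ever materialized.
deviations = {key: np.std(values) for key, values in d.items()}

print(deviations)  # {1: 8.164..., 2: 5.0, 3: 141.421...}
```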