I asked another question:
https://stackoverflow.com/questions/1180240/best-way-to-sort-1m-records-in-python
where I was trying to determine the best approach for sorting 1 million records. In my case I need to be able to add additional items to the collection and have them re-sorted. It was suggested that I try using Zope’s BTrees for this task. After doing some reading, I am a little stumped as to what data I would put in a set.
Basically, for each record I have two pieces of data: (1) a unique ID which maps to a user, and (2) a value of interest to sort on.
I see that I can add the items to an OOSet as tuples, where the value to sort on is at index 0. So, given (200, 'id1'), (120, 'id2'), (400, 'id3'), the resulting set would be sorted with id2, id1, and id3 in order.
However, part of the requirement is that each id appear only once in the set. I will be adding additional data to the set periodically, and the new data may or may not include duplicate ids. If an id is duplicated, I want to update its value rather than add another entry. So, based on the tuples above, I might add (405, 'id1'), (10, 'id4') to the set and would want the output to be id4, id2, id3, id1 in order.
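For what it’s worth, the update-or-insert behavior described above can be sketched without BTrees at all, using a plain dict keyed by id for uniqueness and sorting on demand (names here are illustrative, not from the question; shown in Python 3 syntax):

```python
# Uniqueness lives in a dict keyed by id; the sorted view is derived from it.
scores = {'id1': 200, 'id2': 120, 'id3': 400}

def apply_updates(scores, new_pairs):
    """Insert new ids or overwrite the value of existing ones."""
    for value, uid in new_pairs:
        scores[uid] = value  # a duplicate id simply overwrites its value

apply_updates(scores, [(405, 'id1'), (10, 'id4')])

# Sorted ascending by value: id4 (10), id2 (120), id3 (400), id1 (405)
ordered = sorted(scores, key=scores.get)
print(ordered)  # ['id4', 'id2', 'id3', 'id1']
```

This trades the always-sorted property of a BTree for a cheap O(1) update plus a sort whenever the ordered view is actually needed.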
Any suggestions on how to accomplish this? Sorry for my newbieness on the subject.
* EDIT – additional info *
Here is some actual code from the project:
    for field in lb_fields:
        t = time.time()
        self.data[field] = [(v[field], k) for k, v in self.foreign_keys.iteritems()]
        self.data[field].sort(reverse=True)
        print "Added %s: %03.5f seconds" % (field, (time.time() - t))
foreign_keys is the original data: a dictionary with each id as the key and a dictionary of the additional data as the value. data is a dictionary containing the lists of sorted data, one per field.
As a side note, as each iteration of the for field in lb_fields loop runs, the time to sort increases (not by much, but it is noticeable). After 1 million records have been sorted for each of the 16 fields, it is using about 4 GB of RAM. Eventually this will run on a machine with 48 GB.
I don’t think BTrees or other traditional sorted data structures (red-black trees, etc) will help you, because they keep order by key, not by corresponding value — in other words, the field they guarantee as unique is the same one they order by. Your requirements are different, because you want uniqueness along one field, but sortedness by the other.
What are your performance requirements? With a rather simple pure-Python implementation, based on Python dicts for uniqueness and Python sorts, on a not-blazingly-fast laptop, I get about 5 seconds for the original construction (essentially a sort over the million elements, starting with them as a dict) and about 9 seconds for the “update” with 20,000 new id/value pairs, of which half “overlap” (and thus overwrite) an existing id and half are new. I can implement the update in a faster way, about 6.5 seconds, but that implementation has an anomaly: if one of the “new” pairs is exactly identical to one of the “old” ones, both id and value, it’s duplicated. Guarding against such “duplication of identicals” is what pushes me from 6.5 seconds to 9, and I imagine you would need the same kind of precaution.
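To make the trade-off concrete, here is one plausible reconstruction of the two update strategies (a sketch in Python 3; the function names and the exact fast-path logic are my assumptions, not the measured code):

```python
def update_fast(pairs, by_id, new_pairs):
    # Faster variant: remove a stale pair only when the value changed,
    # then append and re-sort. If a new pair is exactly identical to an
    # old one (same id AND same value), it gets appended a second time.
    for value, uid in new_pairs:
        old = by_id.get(uid)
        if old is not None and old != value:
            pairs.remove((old, uid))
        by_id[uid] = value
        pairs.append((value, uid))
    pairs.sort()
    return pairs

def update_safe(by_id, new_pairs):
    # Safer variant: route everything through the dict keyed by id, so
    # each id (and each identical pair) appears exactly once, then re-sort.
    by_id.update((uid, value) for value, uid in new_pairs)
    return sorted((value, uid) for uid, value in by_id.items())

old_pairs = [(120, 'id2'), (200, 'id1'), (400, 'id3')]
new_pairs = [(405, 'id1'), (10, 'id4'), (400, 'id3')]  # last pair repeats an old one exactly

fast = update_fast(list(old_pairs), {u: v for v, u in old_pairs}, new_pairs)
safe = update_safe({u: v for v, u in old_pairs}, new_pairs)
print(len(fast))  # 5: the repeated (400, 'id3') pair is duplicated
print(len(safe))  # 4: one entry per id
```

The extra bookkeeping in the safe variant is the kind of “precaution against duplication of identicals” that costs the difference between the two timings.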
How far are these 5 and 9 second times from your requirements (taking into account the actual speed of the machine you’ll be running on vs. the 2.4 GHz Core Duo, 2 GB of RAM, and typical laptop performance issues of this laptop I’m using)? In other words, are they close enough to “striking distance” to be worth tinkering with and trying to squeeze a few last cycles out of, or do you need orders-of-magnitude faster performance?
I’ve tried several other approaches (a SQL DB, C++ and its std::sort, etc.), but they’re all slower, so if you need much higher performance I’m not sure what you could do.
Edit: since the OP says this performance would be fine but he can’t achieve anywhere near it, I guess I’d best show the script I used to measure these times…:
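The answerer’s actual script isn’t reproduced here; a minimal sketch of the kind of benchmark described (random data, a million entries, timing the build and the update; all names and constants are my assumptions) might look like:

```python
import random
import time

N = 1_000_000  # records in the initial build
M = 20_000     # update batch; half the ids overlap existing ones

def build(by_id):
    """Initial construction: sort a million (value, id) pairs out of a dict."""
    t = time.time()
    pairs = sorted((value, uid) for uid, value in by_id.items())
    print("build : %.3f s" % (time.time() - t))
    return pairs

def update(by_id, new_pairs):
    """Update that guards against duplicated identical pairs via the dict."""
    t = time.time()
    by_id.update((uid, value) for value, uid in new_pairs)
    pairs = sorted((value, uid) for uid, value in by_id.items())
    print("update: %.3f s" % (time.time() - t))
    return pairs

if __name__ == '__main__':
    by_id = {uid: random.random() for uid in range(N)}
    pairs = build(by_id)
    # half of the new ids overlap existing ones, half are brand new
    new_pairs = [(random.random(), random.randrange(N)) for _ in range(M // 2)]
    new_pairs += [(random.random(), N + i) for i in range(M // 2)]
    pairs = update(by_id, new_pairs)
    print(len(pairs))
```

Populating the dict with random numbers and generating the update batch fall outside the timed sections, which matches the note below about overall elapsed time exceeding the measured totals.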
and this is a typical run:
The overall elapsed time is a few seconds more than the totals I’m measuring, of course, because it includes the time needed to populate the container with random numbers, generate the “new data” (also randomly), destroy and garbage-collect things at the end of each run, and so forth.
This is with the system-supplied Python 2.5.2 on a MacBook with Mac OS X 10.5.7, a 2.4 GHz Intel Core Duo, and 2 GB of RAM (times don’t change much when I use different versions of Python).