I have a simple problem. I have to fetch a url (about once a minute), check if there is any new content, and if there is, post it to another url.
I have a working system with a cronjob every minute that basically:
    count, post_count = 0, 0
    for link in models.Link.objects.filter(enabled=True).select_related():
        # do it in two phases in case there is cross-pollination
        # get posts
        twitter_posts, meme_posts = [], []
        if link.direction in ("t2m", "both"):
            twitter_posts = utils.get_twitter_posts(link)
        if link.direction in ("m2t", "both"):
            meme_posts = utils.get_meme_posts(link)
        # process them
        if twitter_posts:
            post_count += views.twitter_link(link, twitter_posts)
        if meme_posts:
            post_count += views.meme_link(link, meme_posts)
        count += 1
    msg = "%s links crawled and %s posts updated" % (count, post_count)
This works great for the 150 users I have now, but the synchronous design scares me. I have URL timeouts built in, but at some point my cronjob will take more than a minute to finish, and I'll be left with a pile of overlapping runs stepping on each other.
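(For what it's worth, the overlapping-cronjob problem on its own can be guarded against with a non-blocking file lock; this is a sketch, not code from my system, the lock path is made up, and fcntl is Unix-only:)

```python
import fcntl


def run_exclusively(job, lock_path="/tmp/crawler.lock"):
    """Run job() only if no other instance holds the lock file.

    If a previous cron invocation is still running, return False and do
    nothing, instead of crawling the same links a second time.
    """
    with open(lock_path, "w") as lock:
        try:
            # Non-blocking exclusive lock: raises OSError if already held
            # (even by another file descriptor in the same process).
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            return False  # another run is still in progress
        job()
        return True
```

The lock is released automatically when the process exits, so a crashed run can't wedge the system.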
So, how should I rewrite it?
Some issues:
- I don’t want to hit the APIs too hard in case they block me. So I’d like to have at most 5 open connections to any API at any time.
- Users keep registering in the system as this runs, so I need some way to add them
- I’d like this to scale as well as possible
- I’d like to reuse as much existing code as I can
So, some thoughts I’ve had:
- Spawn a thread for each link
- Use python-twisted – keep one running process, that the cronjob just makes sure is running.
- Use Stackless – don’t really know much about it.
- Ask StackOverflow 🙂
How would you do this?
Simplest: use a long-running process with sched (on its own thread) to handle the scheduling — by posting requests to a Queue; have a fixed-size pool of threads (you can find a pre-made thread pool here, but it’s easy to tweak it or roll your own) taking requests from the Queue (and returning results via a separate Queue). Registration and other system functions can be handled by a few more dedicated threads, if need be.
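That architecture might be sketched like so (a minimal sketch under my own assumptions: get_links stands in for your Link query, and the per-link work is a placeholder):

```python
import queue
import sched
import threading
import time

NUM_WORKERS = 5              # matches your 5-connections-per-API cap
requests_q = queue.Queue()   # scheduler -> worker pool
results_q = queue.Queue()    # worker pool -> whatever tallies post counts


def worker():
    # Workers communicate only through the two Queues, which are
    # intrinsically thread-safe, so no other locking is needed.
    while True:
        job = requests_q.get()       # job is a zero-argument callable
        try:
            results_q.put(job())
        finally:
            requests_q.task_done()


def crawl_all(get_links, scheduler, interval):
    # Re-query the links each pass, so users who registered since the
    # previous pass are picked up automatically.
    for link in get_links():
        requests_q.put(lambda l=link: ("crawled", l))   # placeholder work
    # Reschedule ourselves: this is the once-a-minute cadence.
    scheduler.enter(interval, 1, crawl_all, (get_links, scheduler, interval))


def run(get_links, interval=60):
    for _ in range(NUM_WORKERS):
        threading.Thread(target=worker, daemon=True).start()
    s = sched.scheduler(time.time, time.sleep)
    s.enter(0, 1, crawl_all, (get_links, s, interval))
    # sched runs on its own thread; the long-running main process just
    # stays alive (the cronjob's only duty is making sure it does).
    threading.Thread(target=s.run, daemon=True).start()
    return s
```

In your case the placeholder lambda would call get_twitter_posts / get_meme_posts and the views, reusing the existing per-link code almost unchanged.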
Threads aren’t so bad, as long as (a) you never have to worry about synchronization among them (just have them communicate by intrinsically thread-safe Queue instances, never sharing access to any structure or subsystem that isn’t strictly read-only), and (b) you never have too many (use a few dedicated threads for specialized functions, including scheduling, and a small thread-pool for general work — never spawn a thread per request or anything like that, that will explode).
Twisted can be more scalable (at lower hardware cost), but if you hinge your architecture on threading and Queues, you have a built-in way to grow the system beyond one process (on more hardware, if need be): switch to the very similar multiprocessing module instead… almost a drop-in replacement, and a potential scaling up of orders of magnitude!-)
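A rough sketch of that swap, assuming the per-link work can be expressed as a picklable top-level function (crawl_link here is a placeholder, not your actual code):

```python
import multiprocessing


def crawl_link(link):
    """Placeholder for the per-link work (fetch, diff, post)."""
    return "crawled %s" % link


def run_with_processes(links, pool_size=5):
    # Same fixed-size-pool idea as the threaded version, but each
    # worker is a separate process, so the crawl can use extra cores.
    with multiprocessing.Pool(pool_size) as pool:
        return pool.map(crawl_link, links)
```

multiprocessing.Queue and multiprocessing.Pool mirror queue.Queue and a thread pool closely, which is what makes the threaded design a reasonable stepping stone.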