
The Archive Base Latest Questions

Editorial Team

Asked: May 11, 2026

I’m trying to jury-rig the Amazon S3 python library to allow chunked handling of large files. Right now it does a ‘self.body = http_response.read()’, so if you have a 3G file you’re going to read the entire thing into memory before getting any control over it.

My current approach is to try to keep the interface for the library the same but provide a callback after reading each chunk of data. Something like the following:

    data = []
    while True:
        chunk = http_response.read(CHUNKSIZE)
        if not chunk:
            break
        if callback:
            callback(chunk)
        data.append(chunk)

Now I need to do something like:

    self.body = ''.join(data)

Is join the right way to do this or is there another (better) way of putting all the chunks together?
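For reference, here is a self-contained sketch of the loop above. io.BytesIO stands in for the real http_response (which would come from the S3 library), CHUNKSIZE is an arbitrary choice, and b''.join is used because response bodies are byte strings in Python 3:

```python
import io

CHUNKSIZE = 8192  # hypothetical chunk size

def read_in_chunks(http_response, callback=None):
    """Read a response in fixed-size chunks, invoking an optional callback."""
    data = []
    while True:
        chunk = http_response.read(CHUNKSIZE)
        if not chunk:
            break
        if callback:
            callback(chunk)
        data.append(chunk)
    # join is the idiomatic concatenation: it is O(n) overall, whereas
    # repeated += on an immutable bytes/str object can degrade to O(n^2).
    return b''.join(data)

# Stand-in for the real HTTP response object:
response = io.BytesIO(b'x' * 20000)
seen = []
body = read_in_chunks(response, callback=lambda c: seen.append(len(c)))
print(len(body))  # 20000
print(seen)       # [8192, 8192, 3616] -- the callback saw every chunk
```

So yes, join is the standard answer when you want one contiguous string at the end; the interesting question (which the answer below raises) is whether you need one at all.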


1 Answer

Added an answer on May 11, 2026 at 7:26 am

Hm – what problem are you trying to solve? I suspect the answer depends on what you are trying to do with the data.

Since in general you don’t want a whole 3 GB file in memory, I’d not store the chunks in an array, but iterate over the http_response and write it straight to disk, to a temporary or persistent file, using the normal write() method on an appropriate file handle.
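That streaming-to-disk approach might look like the following sketch (io.BytesIO and tempfile are stand-ins here; the real response object comes from the S3 library):

```python
import io
import tempfile

CHUNKSIZE = 8192  # hypothetical chunk size

def stream_to_disk(http_response, out):
    """Copy a response to a file handle chunk by chunk,
    never holding more than one chunk in memory."""
    total = 0
    while True:
        chunk = http_response.read(CHUNKSIZE)
        if not chunk:
            break
        out.write(chunk)
        total += len(chunk)
    return total

response = io.BytesIO(b'y' * 30000)  # stand-in for the real response
with tempfile.TemporaryFile() as f:
    n = stream_to_disk(response, f)
    f.seek(0)
    assert len(f.read()) == n
print(n)  # 30000
```

Peak memory here is one chunk, regardless of how large the file is.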

If you do want two copies of the data in memory, your method will require at least 6 GB for your hypothetical 3 GB file, which is presumably significant on most hardware. I know that array join methods are fast and all that, but since this is a RAM-constrained process maybe you want to find some way of doing it better? StringIO (http://docs.python.org/library/stringio.html) creates string objects that can be appended to in memory; the pure-Python one, since it has to work with immutable strings, just uses your array-join trick internally, but the C-based cStringIO might actually append to a memory buffer internally. I don’t have its source code to hand, so that would bear checking.
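In Python 3 the equivalent of cStringIO for binary data is io.BytesIO, which does append to an internal buffer. A sketch of that variant (io.BytesIO again also plays the response for demonstration):

```python
import io

CHUNKSIZE = 8192  # hypothetical chunk size

def accumulate(http_response, callback=None):
    """Append each chunk to one in-memory buffer instead of
    keeping a list of chunks plus the joined result."""
    buf = io.BytesIO()
    while True:
        chunk = http_response.read(CHUNKSIZE)
        if not chunk:
            break
        if callback:
            callback(chunk)
        buf.write(chunk)
    return buf.getvalue()

response = io.BytesIO(b'z' * 10000)  # stand-in for the real response
out = accumulate(response)
print(len(out))  # 10000
```

Note that getvalue() still makes a copy of the buffer, so this trims but does not eliminate the two-copies problem.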

If you do wish to do some kind of analysis on the data and really wish to keep it in memory with minimal overhead, you might want to consider some of the byte array objects from Numeric/NumPy as an alternative to StringIO. They are high-performance code optimised for large arrays and might be what you need.
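NumPy itself isn’t shown here; as a standard-library stand-in for the same idea (a single mutable, appendable byte buffer rather than many immutable strings), Python’s bytearray works:

```python
import io

CHUNKSIZE = 8192  # hypothetical chunk size

buf = bytearray()  # mutable buffer: in-place appends, no per-join copies
response = io.BytesIO(bytes(range(256)) * 100)  # 25600-byte stand-in response
while True:
    chunk = response.read(CHUNKSIZE)
    if not chunk:
        break
    buf += chunk  # extends the buffer in place

print(len(buf))          # 25600
print(buf[0], buf[255])  # 0 255 -- bytes are directly indexable for analysis
```

With NumPy the analogous move would be building an array of dtype uint8 over the data, which then supports vectorised analysis.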

As a useful example of a general-purpose file-handling object with a memory-efficient, iterator-friendly approach, you might want to check out the Django File object chunk-handling code: http://code.djangoproject.com/browser/django/trunk/django/core/files/base.py.
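The shape of that approach is a generator that yields chunks on demand, so callers iterate instead of reading everything up front. A simplified sketch in that spirit (this is not Django’s actual implementation):

```python
import io

def chunks(fileobj, chunk_size=64 * 1024):
    """Yield successive chunks from a file-like object -- a simplified
    sketch in the spirit of Django's File.chunks(), not its real code."""
    while True:
        data = fileobj.read(chunk_size)
        if not data:
            break
        yield data

# Callers iterate lazily; nothing is materialised until they ask for it.
pieces = list(chunks(io.BytesIO(b'a' * 150000)))
print(len(pieces), sum(len(p) for p in pieces))  # 3 150000
```

Wrapping the S3 response in an iterator like this would let the library keep its interface while leaving the store-vs-stream decision to the caller.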

© 2021 The Archive Base. All Rights Reserved
