

The Archive Base Latest Questions

Editorial Team
Asked: May 17, 2026 at 2:04 am

I am looking for buffer code to process huge records in a tuple


I am looking for buffer code to process huge numbers of records in a tuple / CSV file / SQLite DB / numpy.ndarray; the buffer would work much like the Linux command “more”.

The request comes from processing huge numbers of data records (maybe 100,000,000 rows); the records look like this:

0.12313 0.231312 0.23123 0.152432
0.22569 0.311312 0.54549 0.224654
0.33326 0.654685 0.67968 0.168749
...
0.42315 0.574575 0.68646 0.689596

I want to process them as a numpy.ndarray. For example, find specific data, process it, and store it back, or process two columns. However, the file is so big that if numpy reads it directly, it raises a MemoryError.

So I think an adapter, something like a memory-cache page or the Linux “more file” command, could save memory while processing.

Because the raw data may come in different formats (csv / sqlite_db / hdf5 / xml), I want this adapter to be more normalized. Using “[]” as a “row” may be the most common way, because I think each record can be represented as a [].

So the adapter I want may look like this:

fd = "an opened big file" # or a tuple of objects -- any iterable object that can access all the raw rows

page = pager(fd)

page.page_buffer_size = 100    # buffer 100 lines, or 100 objects of a tuple

page.seek_to(0)        # move to start
page.seek_to(120)      # move to line #120
page.seek_to(-10)      # seek back 10 lines (relative), to line #110

page.next_page()
page.prev_page()

page1 = page.copy()

page.remove(0)

page.sync()

Can someone give me some hints so I don't reinvent the wheel?

By the way, ATpy (http://atpy.sourceforge.net/) is a module that can sync a numpy.ndarray with raw data sources in different formats; however, it also reads all the data into memory in one go.

And PyTables is not suitable for me so far, because it does not support SQL, and the HDF5 format may not be as popular as a SQLite DB (forgive me if this is wrong).
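Since SQLite is mentioned as the preferred backend, it may be worth noting that sqlite3 cursors already page lazily: fetchmany(n) pulls only n rows at a time rather than materializing the whole result set. A minimal sketch of page-wise iteration (the query passed in is up to the caller; nothing here is specific to the data above):

```python
import sqlite3


def iter_sqlite_pages(db_path, query, page_size=100):
    """Yield lists of up to page_size rows from a SQLite query,
    without loading the whole result set into memory."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(query)
        while True:
            rows = cur.fetchmany(page_size)
            if not rows:
                break  # cursor exhausted
            yield rows
    finally:
        conn.close()
```

Each yielded row is a tuple, which fits the "each record is a []"-style access the adapter wants.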

My plan is to write this tool in this way:
 1. helper.py        <-- defines all the house-keeping work for the different file formats
                         |- load_file()
                         |- seek_backward()
                         |- seek_forward()
                         | ...
 2. adapter.py       <-- defines all the interfaces and imports the helper to interact
                         with the raw data, and somehow exposes it as a numpy.ndarray.
                         |- load()
                         |- seek_to()
                         |- next_page()
                         |- prev_page()
                         |- sync()
                         |- self.page_buffer_size
                         |- self.abs_index_in_raw_for_this_page = []
                         |- self.index_for_this_page = []
                         |- self.buffered_rows = []
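A rough forward-only sketch of the adapter idea above, assuming plain Python and itertools (seek_to, prev_page, and sync are omitted here because they need backend-specific support such as recorded byte offsets):

```python
import itertools


class Pager:
    """Forward-only pager over any iterable of rows
    (open file, tuple, database cursor, ...)."""

    def __init__(self, rows, page_buffer_size=100):
        self._it = iter(rows)
        self.page_buffer_size = page_buffer_size
        self.buffered_rows = []

    def next_page(self):
        # Pull at most page_buffer_size rows; an empty list means EOF.
        self.buffered_rows = list(
            itertools.islice(self._it, self.page_buffer_size)
        )
        return self.buffered_rows
```

For a real file, seeking backwards would need either re-reading from the start or recording the byte offset of each page boundary as it goes by.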

Thanks,

Rgs,

KC



1 Answer

  1. Editorial Team
     Answered on May 17, 2026 at 2:04 am

    Ummmm…. You’re not really talking about anything more than a list.

    fd = open("some file", "r")
    data = fd.readlines()

    page_size = 100

    data[0:0 + page_size]                  # move to start
    data[120:120 + page_size]              # move to line 120
    here = 120
    data[here - 10:here - 10 + page_size]  # move back 10 from here
    here -= 10
    data[here:here + page_size]
    here += page_size
    data[here:here + page_size]
    

    I’m not sure that you actually need to invent anything.
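One caveat (my note, not part of the answer above): readlines() still pulls the entire file into memory, which is exactly what triggered the MemoryError in the question. If the goal is page-wise numpy processing, reading the text file lazily with itertools.islice and converting one chunk at a time keeps memory bounded by the page size:

```python
import itertools

import numpy as np


def iter_array_pages(path, page_size=100):
    """Yield successive numpy arrays of up to page_size rows,
    parsed from a whitespace-separated text file."""
    with open(path) as fd:
        while True:
            # islice reads at most page_size lines from the open file.
            lines = list(itertools.islice(fd, page_size))
            if not lines:
                break  # end of file
            yield np.array(
                [[float(x) for x in line.split()] for line in lines]
            )
```

Each yielded array can be processed and written back chunk by chunk, so the full 100,000,000-row table never sits in memory at once.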

