
The Archive Base

Asked by Editorial Team on May 12, 2026


We have a database with a very simple schema:

 CREATE TABLE IF NOT EXISTS tblIndex(
     frame_type  INT, 
     pts  VARCHAR(5),
     ts_start  INT primary key,
     ts_end  INT)

And the application scenario is:

  1. Every second, the user inserts 2–50 records, and the ts_start field of those records is always increasing. After 8 hours there are at most 1,800,000 records. With sync mode set to OFF, insert performance seems OK so far. And because each record is only 16 bytes, we can buffer a little even if the insert speed is not fast.

  2. After 8 hours, the user tells me to delete the oldest data by giving an upper bound on ts_start, so I do

    DELETE FROM tblIndex WHERE ts_start < upper_bound_ts_start

    Deleting 90,000 records (half an hour's worth) out of the 1,800,000 now takes 17 seconds, a little longer than expected. Is there any way to reduce this? We don't care whether the records are synced to the hard disk immediately. We are thinking of starting a separate thread to do the delete, to make this call asynchronous. What I am not sure about is whether the (long-running) delete will hurt insert performance if they share the same connection, or whether I should use separate connections for insert and delete. In that case, do they need to be synchronized at the application level?

  3. Search. SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ? AND ? — since ts_start is the primary key, the performance is OK for our needs now. Should I use a separate connection for search?
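One common way to shorten the stall from the big delete in point 2 (a sketch only, not from the question itself; the table and column names come from the schema above, and the batch size is a guess to be tuned on the target hardware) is to split it into small transactions, so the write lock is released between batches and a concurrent inserter is never blocked for the whole 90,000-row delete:

```python
import sqlite3

def delete_old_in_batches(conn, upper_bound, batch=5000):
    """Delete rows with ts_start < upper_bound in small transactions.

    Each batch commits separately, so the database write lock is held
    only briefly at a time instead of for the entire delete.
    """
    while True:
        # Find the ts_start just past the batch-th oldest row still below
        # the bound; everything before it forms one batch.
        row = conn.execute(
            "SELECT ts_start FROM tblIndex WHERE ts_start < ? "
            "ORDER BY ts_start LIMIT 1 OFFSET ?",
            (upper_bound, batch - 1),
        ).fetchone()
        boundary = row[0] + 1 if row else upper_bound
        conn.execute("DELETE FROM tblIndex WHERE ts_start < ?", (boundary,))
        conn.commit()
        if row is None or boundary >= upper_bound:
            return
```

Whether this beats one big DELETE in wall-clock time depends on the journal mode and disk, but it bounds how long any single transaction, and thus any insert stall, can last.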

Configuration of SQLite:

hard disk database (USB interface)
cache size is 2000
page size is 1024
sync mode is 0 (OFF)
journal_mode is TRUNCATE
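For reference, that configuration corresponds to the following PRAGMA statements (shown here via Python's sqlite3 module; the database filename is made up, and note that page_size only takes effect if set before the database file is first written):

```python
import sqlite3

conn = sqlite3.connect("tblindex.db")        # hypothetical filename
conn.execute("PRAGMA page_size = 1024")      # must precede the first write
conn.execute("PRAGMA cache_size = 2000")     # measured in pages: ~2 MB at 1 KB/page
conn.execute("PRAGMA synchronous = OFF")     # "sync mode is 0"
conn.execute("PRAGMA journal_mode = TRUNCATE")
```

With only ~2 MB of RAM on the device, the 2000-page cache is already most of the available memory, so there is little headroom to trade memory for speed here.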

Thanks for any suggestion to improve the delete performance or about the overall design.

EDIT: 350 MHz MIPS CPU, with not much memory (< 2 MB) available for this application.
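On the connection question in point 3: separate SQLite connections are independent, so a dedicated read-only connection for searches is a reasonable pattern, and opening it with the URI mode=ro flag guarantees it can never write. One caveat: with a rollback journal (as configured above), a long-running read holds a shared lock that delays writers, so queries should be kept short. A sketch (function name and path are made up):

```python
import sqlite3

def search_range(db_path, lo, hi):
    # A dedicated read-only connection, separate from whichever
    # connection handles inserts and deletes.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ? AND ?",
            (lo, hi),
        )
        return [ts for (ts,) in rows]
    finally:
        conn.close()
```

Since each connection manages its own locking, no extra application-level synchronization is needed for reads; SQLite serializes access to the file itself.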


1 Answer

Answered by Editorial Team on May 12, 2026 at 9:49 am

    Since the data is transient — and small — why are you using a database?

    You’d be much happier with a simple directory of flat files.

    Your web query can simply read the relevant set of files and return the required results. 1,800,000 records of 16 bytes each is just about 28 MB of file data. You can read the whole thing into memory, do your processing in memory, and present the results.

    A separate process can delete files that are old once a day at midnight.

    A third process can append 2-50 16-byte records to the working file each second.

    • Write and flush so that the file is correct and complete after each I/O. If your reader handles an incomplete last record gracefully, you don’t even need a lock.

    • Name each file with a sequence number based on the time. You could, for example, take the system time (in seconds) divide by 4*60*60 and truncate the answer. That’s a sequence number that will advance once every 4 hours, creating a new file. 8 hours of data is 3 of these files (2 previous 4-hour files, plus the current working file.)
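The append-and-rotate scheme described above can be sketched as follows. The 16-byte record layout is hypothetical (the answer doesn't fix one); here pts is truncated or null-padded to 4 bytes so the record fits exactly 16 bytes, and the file naming is an illustration of the divide-by-4-hours sequence number:

```python
import os
import struct

# Hypothetical layout: frame_type (int32), pts (4 bytes, from the
# VARCHAR(5) field), ts_start (int32), ts_end (int32) = 16 bytes.
RECORD = struct.Struct("<i4sii")
WINDOW = 4 * 60 * 60  # seconds; sequence number advances every 4 hours

def working_file(now_s, directory="."):
    # System time divided by 4*60*60, truncated: a new file every 4 hours.
    seq = int(now_s) // WINDOW
    return os.path.join(directory, f"index.{seq}.dat")

def append_record(path, frame_type, pts, ts_start, ts_end):
    with open(path, "ab") as f:
        # struct's '4s' truncates or null-pads pts to exactly 4 bytes.
        f.write(RECORD.pack(frame_type, pts.encode()[:4], ts_start, ts_end))
        f.flush()
        os.fsync(f.fileno())  # file is correct and complete after each I/O
```

A cleanup process then only has to unlink whole files whose sequence number is old enough, which is far cheaper than deleting 90,000 rows from a B-tree.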



© 2021 The Archive Base. All Rights Reserved
