Asked by Editorial Team on May 16, 2026

I store time-series simulation results in PostgreSQL. The database schema looks like this:

CREATE TABLE SimulationInfo (
    simulation_id integer PRIMARY KEY,
    simulation_property1,
    simulation_property2,
    ....
);

CREATE TABLE SimulationResult (  -- the size of one row is around 100 bytes
    simulation_id integer,
    res_date date,
    res_value1,
    res_value2,
    ...
    res_value9,
    PRIMARY KEY (simulation_id, res_date)
);

I usually query data based on simulation_id and res_date.

I partitioned the SimulationResult table into 200 sub-tables based on ranges of simulation_id. A fully filled sub-table has 10-15 million rows. Currently about 70 sub-tables are fully filled, and the database size is more than 100 GB. All 200 sub-tables will be full soon, and when that happens I will need to add more.
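
For reference, a layout like this can be declared with PostgreSQL 11+ declarative range partitioning; the column types and range bounds below are assumptions, not taken from the question:

CREATE TABLE SimulationResult (
    simulation_id integer,
    res_date date,
    res_value1 double precision,  -- assumed type; the question omits it
    -- ... res_value2 through res_value9 ...
    PRIMARY KEY (simulation_id, res_date)
) PARTITION BY RANGE (simulation_id);

-- one sub-table per range of simulation_id; bounds are illustrative
CREATE TABLE SimulationResult_0 PARTITION OF SimulationResult
    FOR VALUES FROM (0) TO (100000);
CREATE TABLE SimulationResult_1 PARTITION OF SimulationResult
    FOR VALUES FROM (100000) TO (200000);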

But I read an answer which says that more than a few dozen partitions do not make sense. So my questions are as follows.

  1. Why do more than a few dozen partitions not make sense?
    I checked the execution plan on my 200 sub-tables, and it scans only the relevant sub-table, so I guessed that more partitions, each with a smaller sub-table, must be better (see the EXPLAIN sketch below the list).

  2. If the number of partitions should be limited, say to 50, is it then no problem to have billions of rows in one table? How big can one table get without serious problems, given a schema like mine?
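
A quick way to check the pruning behaviour mentioned in question 1 (the literal values here are illustrative):

EXPLAIN
SELECT res_value1
FROM SimulationResult
WHERE simulation_id = 12345
  AND res_date BETWEEN '2026-01-01' AND '2026-03-31';
-- With pruning working, only the sub-table whose range covers
-- simulation_id 12345 shows up in the plan.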

1 Answer

  1. Editorial Team, answered on May 16, 2026 at 10:21 am

    It’s probably unwise to have that many partitions, yes. The main reason to have partitions at all is not to make indexed queries faster (for the most part, it does not), but to improve performance for queries that have to sequentially scan the table, based on constraints that can be proved not to hold for some of the partitions; and to improve maintenance operations (like vacuum, or deleting large batches of old data, which in certain setups can be achieved by truncating a partition).
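
    For instance, the maintenance point looks like this in practice (the partition name here is hypothetical):

    -- Deleting a whole range of old data becomes a cheap metadata
    -- operation instead of a huge DELETE:
    TRUNCATE SimulationResult_0;

    -- Or remove the partition from the parent entirely, then archive or drop it:
    ALTER TABLE SimulationResult DETACH PARTITION SimulationResult_0;
    DROP TABLE SimulationResult_0;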

    Maybe instead of using ranges of simulation_id (which means you need more and more partitions all the time), you could partition using a hash of it. That way all partitions grow at a similar rate, and there’s a fixed number of partitions.
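
    A sketch of that approach with PostgreSQL 11+ declarative hash partitioning (the modulus of 50 and the column list are assumptions):

    CREATE TABLE SimulationResult (
        simulation_id integer,
        res_date date,
        -- res_value1 .. res_value9 as in the question
        PRIMARY KEY (simulation_id, res_date)
    ) PARTITION BY HASH (simulation_id);

    -- a fixed set of 50 partitions that all grow at a similar rate
    CREATE TABLE SimulationResult_h0 PARTITION OF SimulationResult
        FOR VALUES WITH (MODULUS 50, REMAINDER 0);
    -- ...repeat for REMAINDER 1 through 49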

    The problem with too many partitions is, for example, that the system is not prepared to deal with locking that many objects. Maybe 200 works fine, but it won’t scale well when you reach a thousand and beyond (which doesn’t sound that unlikely given your description).
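
    One concrete limit behind this: every relation a query touches needs an entry in the shared lock table, whose size is governed by max_locks_per_transaction, so a plan touching very many partitions can exhaust it:

    -- postgresql.conf; the lock table holds roughly
    -- max_locks_per_transaction * (max_connections + max_prepared_transactions) entries.
    -- The default is 64; changing it requires a server restart.
    max_locks_per_transaction = 256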

    There’s no problem with having billions of rows per partition.

    All that said, there are obviously particular concerns that apply to each scenario. It all depends on the queries you’re going to run, and what you plan to do with the data long-term (e.g. are you going to keep it all, archive it, delete the oldest, …?)
