
The Archive Base


Asked by Editorial Team on May 11, 2026

I am facing a performance issue (which could become a scaling issue later on)


I am facing a performance issue at the moment (one that could become a scaling issue later on). The application I am working on is quite complex, and it runs on SQL Server 2005. I need to join 6–7 tables to get the desired data, and each table contains more than 100,000 rows so far. The database schema cannot be changed (it must stay as is), so all I can do is optimize around it. Two options come to mind:

  • Skip the join in the database and let the application server do the filtering using LINQ:

    • Pros: easy to scale out by adding more app servers.
    • Cons: more effort, and I am not sure whether responsiveness will suffer.
  • Leave the application server as is and optimize the SQL query as much as possible (more indexes, rebuilding indexes frequently, etc.):

    • Pros: minimum effort.
    • Cons: as the tables grow, the problem will come back.
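The second option can be illustrated with a small sketch. This uses SQLite's EXPLAIN QUERY PLAN purely as a stand-in (the question's system is SQL Server 2005, where the analogue is inspecting the execution plan); the table and column names are hypothetical:

```python
import sqlite3

# Hypothetical schema; the real system is SQL Server 2005.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, the plan is a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)  # the plan detail contains "SCAN"

con.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")

# With the index, the same query becomes an index lookup.
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)   # the plan detail now names ix_orders_customer
```

The exact plan wording varies by SQLite version, but the scan-to-index-lookup switch is the point: an index turns per-query cost from linear in the table size to roughly logarithmic, which is why "more indexes" helps until the tables grow very large.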

Caching isn’t an option for me at the moment (hardware constraints, hosting constraints, etc.), which is why I didn’t bring it up originally. I do know the benefits caching would bring and have used it many times.
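As a concrete sketch of the two options, here is the same query done both ways against an in-memory SQLite database. Table names and sizes are made up and the real system is SQL Server 2005 with 6–7 tables, but the contrast carries over: option 2 ships only the matching rows, while option 1 (what the LINQ-on-the-app-server approach amounts to) ships both tables whole and joins in application code:

```python
import sqlite3

# Tiny hypothetical stand-in for the real database.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, "EU" if i % 2 else "US") for i in range(1000)])
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 1000, float(i)) for i in range(10000)])

# Option 2: the DBMS joins and filters; only matching rows cross the wire.
db_side = con.execute("""
    SELECT o.id, o.total FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.region = 'EU'
""").fetchall()

# Option 1: pull both tables whole and join/filter in the application layer.
customers = dict(con.execute("SELECT id, region FROM customers"))
orders = con.execute("SELECT id, customer_id, total FROM orders").fetchall()
app_side = [(oid, total) for oid, cid, total in orders
            if customers[cid] == "EU"]

# Same result either way; the difference is how much data was transferred.
assert sorted(db_side) == sorted(app_side)
print(len(db_side), "matching rows")
```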


1 Answer

  1. Answered on May 11, 2026 at 8:22 am

    Generally speaking, do the joining in the DBMS. If you do it in the application server, you are betting that you can do a better job of optimizing joins than the people who wrote the DBMS, and (further) that you can out-perform their best efforts by enough to offset the cost of transferring the unjoined data across the wire.

    Now, if you are going to do a cross-product of two wide tables (let's say they are T1, with N1 rows of width W1, and T2, with N2 rows of width W2) with no filtering, then the DBMS is obliged to create and send N1 * N2 * (W1 + W2) bytes of data over the wire, whereas you could suck down the tables separately as N1 * W1 + N2 * W2 bytes of data. If N1 = N2 = 1M and W1 = W2 = 100, then that’s 200 TB vs 200 MB of data transfer in favour of doing the cross-product in the app server. But that’s not exactly fair to the DBMS. Most queries are not that silly – they join on columns and apply conditions, and the DBMS optimizer will struggle mightily (and automatically) to minimize the work done. Further, it will only send the pertinent data back to you; it does not have to send all the rows that don’t match your criteria.
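The cross-product arithmetic above can be checked directly:

```python
# Transfer sizes for the unfiltered cross-product example.
N1 = N2 = 1_000_000      # rows in T1 and T2
W1 = W2 = 100            # row width in bytes

cross_product = N1 * N2 * (W1 + W2)   # joined in the DBMS, no filtering
separate      = N1 * W1 + N2 * W2     # tables shipped unjoined to the app

print(cross_product)  # 200_000_000_000_000 bytes = 200 TB
print(separate)       # 200_000_000 bytes = 200 MB
```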

    To show an alternative scenario (in favour of the DBMS), consider a case where T1 has N1 = 1M rows of width W1 = 100, but T2 has N2 = 100K rows of width W2 = 50. There is a join between the two tables on an integer column, and there are, therefore, 10 rows in T1 for each one in T2. Suppose that you suck down all of T1 and T2 to the app server: that requires N1 * W1 + N2 * W2 = 105 MB of data. But the filter conditions limit the data to 1/10 of the rows in T2, and for each row in T1 that matches a row in T2, there are in fact only 2 rows that match the filter conditions. Now the DBMS is only going to transfer N2 * (W1 + W2) / 5 = 3 MB, a saving of over 100 MB of data transfer by the DBMS. Now, if you manage to be clever and download just the N2 * W2 / 10 = 500 KB of data that corresponds to the values in T2, you still have to get the DBMS to do the ‘semi-join’ of T1 on the values you want to get the right rows from T1 to the app server. If you only need a subset of the columns, there can be another set of savings. And DBMSs tend to have rather clever sort packages; you’ll need a good sort package in your app server to present data in the correct order.
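And the arithmetic for this filtered-join scenario:

```python
# Transfer sizes for the filtered-join scenario.
N1, W1 = 1_000_000, 100   # T1: rows, row width in bytes
N2, W2 = 100_000, 50      # T2: rows, row width in bytes

pull_both   = N1 * W1 + N2 * W2        # download both tables whole
dbms_result = N2 * (W1 + W2) // 5      # DBMS sends only the filtered join
t2_subset   = N2 * W2 // 10            # just the 1/10 of T2 you need

print(pull_both)    # 105_000_000 bytes = 105 MB
print(dbms_result)  # 3_000_000 bytes = 3 MB
print(t2_subset)    # 500_000 bytes = 500 KB
```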

    It should normally be a hands-down win for joins in the DBMS. If it isn’t, it is because you are asking the server to do more work than it can handle. In that case, you need to look at whether replicating the database server makes sense, or whether adding more cores, or more network bandwidth, or more main memory will do the job.


© 2021 The Archive Base. All Rights Reserved