Editorial Team
Asked: May 16, 2026

Step 1: Loading data via bulk insert from .txt (delimited) file into Table 1


Step 1: Loading data via “bulk insert” from a .txt (delimited) file into Table 1 (no indexes, etc.)

bulk insert Table_1
from '\\path\to_some_file.txt'
with (tablock, formatfile = 'format_file_path.xml')

Via the format file I map the output column data types to avoid further conversions (from char to int, for example).
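
For reference, a minimal XML format file for this kind of load might look like the sketch below. The field terminators, column names, and SQL types here are assumptions for illustration, not the actual file:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <!-- one FIELD per delimited field in the .txt file -->
    <FIELD ID="1" xsi:type="CharTerminated" TERMINATOR="\t"/>
    <FIELD ID="2" xsi:type="CharTerminated" TERMINATOR="\t"/>
    <FIELD ID="3" xsi:type="CharTerminated" TERMINATOR="\r\n"/>
  </RECORD>
  <ROW>
    <!-- typing the columns here (e.g. SQLINT) avoids char-to-int converts later -->
    <COLUMN SOURCE="1" NAME="col1" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="col2" xsi:type="SQLINT"/>
    <COLUMN SOURCE="3" NAME="col3" xsi:type="SQLVARYCHAR" LENGTH="50"/>
  </ROW>
</BCPFORMAT>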

Step 2: Output the result (perhaps not all of the columns from Table 1) into another table, Table 2, but only the DISTINCT values from Table 1.

NB! Table_1 is about 20 million records (per load).

What we have now (simplified example):

select distinct convert(int, col1), convert(int, col2), col3, ... 
into Table_2
from Table_1

It takes about 3.5 minutes to process.
Could you advise on some best practices that may help reduce the processing time while putting only UNIQUE records into Table_2?

Thanks in advance!

UPD 1: Sorry for the misunderstanding – I meant that the SELECT DISTINCT query takes 3.5 minutes.
The BULK INSERT itself is already well optimized – it loads via 8 threads (8 separate .txt files bulk-inserted into one table WITH (TABLOCK)) and imports 20 million records in about 1 minute.
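
For illustration, that parallel pattern is essentially eight concurrent statements of the following shape, one per file (the _part file names are placeholders):

-- each of the 8 loader sessions runs one of these concurrently;
-- with TABLOCK, SQL Server takes a BU lock, which allows parallel
-- bulk loads into the same unindexed heap
bulk insert Table_1
from '\\path\to_some_file_part1.txt'
with (tablock, formatfile = 'format_file_path.xml');

bulk insert Table_1
from '\\path\to_some_file_part2.txt'
with (tablock, formatfile = 'format_file_path.xml');
-- ...and so on for the remaining six files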

UPD 2: I tested different approaches (I didn’t test SSIS – that approach won’t work in our application).
The best result came from bulk inserting the data already in Table_2’s format (column names and data types match), which eliminates the data-type converts, followed by just a “plain” DISTINCT:

select distinct * into Table_2 from Table_1

This gives 70 seconds of processing, so I consider it the best result I can get for now.
I also tried a couple of other techniques (an additional ORDER BY, a CTE with window grouping, etc.) – they were all worse than the “plain” DISTINCT.
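
For completeness, the CTE-with-window-grouping variant mentioned above is usually written along the lines of this sketch (using the simplified column names from the example earlier):

-- keep one arbitrary row per distinct (col1, col2, col3) key;
-- in this workload it performed worse than the plain DISTINCT
with numbered as (
    select col1, col2, col3,
           row_number() over (partition by col1, col2, col3
                              order by (select null)) as rn
    from Table_1
)
select col1, col2, col3
into Table_2
from numbered
where rn = 1;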

Thanks to everyone for participating!


1 Answer

Editorial Team
Added an answer on May 16, 2026 at 4:41 pm

First, you need to determine whether it is the SELECT DISTINCT or the INSERT INTO that is causing the issue.

You will have to run the SELECT DISTINCT once with, and once without, the INSERT INTO, and measure the duration of each to figure out which one you have to tune.
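
One straightforward way to take those two measurements is with SET STATISTICS TIME (the column names below follow the simplified example in the question):

-- time the SELECT DISTINCT on its own
set statistics time on;

select distinct col1, col2, col3
from Table_1;

-- then time the full statement that also materializes Table_2
select distinct col1, col2, col3
into Table_2
from Table_1;

set statistics time off;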

If it is the SELECT DISTINCT, you can try to fine-tune that query to be more efficient.

If it is the INSERT INTO, then consider the following:

With a SELECT ... INTO, a new table is created, and all of its pages are allocated as required.

Are you dropping the old table and creating a new one on each load? If so, you should change that to just DELETE from the old table – DELETE, not TRUNCATE – because a TRUNCATE will let go of all the pages acquired by the table, and they will have to be re-allocated.
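
Concretely, that reuse pattern might look like the following sketch (assuming Table_2 already exists with matching column types):

-- clear the target but keep its allocated pages
delete from Table_2;

-- refill it; the TABLOCK hint allows a minimally logged insert
-- into a heap under the simple or bulk-logged recovery model
insert into Table_2 with (tablock) (col1, col2, col3)
select distinct col1, col2, col3
from Table_1;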

You can try one or several of the following things to improve efficiency:

  • Ask your customer for non-duplicate data.
  • Index all of the duplicate-criteria columns; scanning an index should be much faster than scanning the table.
  • Partition your staging table to get better performance.
  • Create a view that selects the distinct values, and use BCP to fast-load the data (a sketch of this follows below).
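
As a rough sketch of that last option (the database, server, view, and file names here are placeholders):

-- a view over the distinct values
create view dbo.vw_Table_1_distinct
as
select distinct col1, col2, col3
from dbo.Table_1;

Then, from the command line, export the view with bcp and bulk load the resulting file into Table_2:

bcp MyDb.dbo.vw_Table_1_distinct out \\path\distinct_rows.txt -c -T -S MyServer

bulk insert Table_2
from '\\path\distinct_rows.txt'
with (tablock);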

