
The Archive Base


Asked by Editorial Team on May 11, 2026

The requirements are:

Fact 1: We have some data files produced by a legacy system.

Fact 2: We have some data files produced by a new system that should eventually replace the legacy one.

Fact 3:

  1. Both files are text/ASCII files, with records composed of multiple lines.
  2. Each line within a record consists of a field name and a field value.
  3. The format in which the lines are presented differs between 1 and 2, but the field name and field value can be extracted from each line with a regex.
  4. Field names can change between 1 and 2, but we have a mapping that relates them.
  5. Each record has a unique identifier that lets us relate a legacy record to a new record, since the ordering of records in the output file need not be the same across both systems.
  6. Each file to compare is a minimum of 10 MB, with an average case of 30–35 MB.

Fact 4: As we iterate through building the new system, we need to compare the files produced by both systems under the exact same conditions and reconcile the differences.

Fact 5: This comparison is being done manually using an expensive visual diff tool. To help with this, I wrote a tool that maps the two different field names to a common name and then sorts the field names in each record, in each file, so that they line up in order (new files can have extra fields, which are ignored in the visual diff).

Fact 6: Because the comparison is done manually by humans, and humans make mistakes, we are getting false positives AND false negatives, which is significantly impacting our timelines.

Obviously the question is: what should the algorithm ('ALG') and data structure ('DS') be?
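To make Fact 5 concrete, the field-name normalization might look like the following minimal sketch. The `NAME=VALUE` line format, the regex, and the `%field_map` entries are all hypothetical, since the actual line formats are not shown in the question:

```perl
use strict;
use warnings;

# Hypothetical mapping from legacy field names to the new system's names.
my %field_map = (
    CUST_NM => 'CustomerName',
    ACCT_NO => 'AccountNumber',
);

# Extract (fieldname, fieldvalue) from a legacy line of the assumed
# form "NAME=VALUE"; the real format would need its own regex.
sub parse_legacy_line {
    my ($line) = @_;
    return unless $line =~ /^(\w+)=(.*)$/;
    my ($name, $value) = ($1, $2);
    $name = $field_map{$name} // $name;   # normalize to the common name
    return ($name, $value);
}
```

With both files run through the same normalization, records only differ in field order and content, which is what the sorting step below addresses.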

The scenario I have to address:

People continue to inspect the diff visually. In this scenario, the performance of the existing script is dismal. Most of the processing seems to be in sorting the array of lines in lexicographic order: reading/fetching an array element (Tie::File::FETCH, Tie::File::Cache::lookup) and putting it in its correct place so that it is sorted (Tie::File::Cache::insert, Tie::File::Heap::insert).

use strict;
use warnings;

use Tie::File;
use Fcntl 'O_RDONLY';   # open the input file in read-only mode

die "Usage: $0 <unsorted input filename> <sorted output filename>\n" if @ARGV < 2;

my $recordsWrittenCount = 0;
my $fieldsSorted        = 0;

tie my @array, 'Tie::File', $ARGV[0], memory => 50_000_000, mode => O_RDONLY
    or die "Cannot open $ARGV[0]: $!";

open(my $outfile, '>', $ARGV[1]) or die "Cannot open $ARGV[1]: $!";

my @tempRecordStorage;

# Now read in the EL6 file
my $numberOfLines = @array;   # accessing @array in a loop might be expensive as it is tied??

for (my $dx = 0; $dx < $numberOfLines; ++$dx) {
    if ($array[$dx] eq 'RECORD') {
        ++$recordsWrittenCount;

        my $endOfRecord = $dx;

        until ($array[++$endOfRecord] eq '.') {
            push @tempRecordStorage, $array[$endOfRecord];
            ++$fieldsSorted;
        }

        print {$outfile} "RECORD\n";

        local $, = "\n";
        print {$outfile} sort @tempRecordStorage;
        @tempRecordStorage = ();

        # Perl does not print a trailing separator after the last array
        # element, so we need to add it ourselves
        print {$outfile} "\n.\n";

        $dx = $endOfRecord;
    }
}

close($outfile);

# Display results to the user
print "\n[*] Done: $fieldsSorted fields sorted from $recordsWrittenCount records written.\n";

So I thought about it, and I believe some sort of trie, maybe a suffix trie or PATRICIA trie, would help, so that the fields in each record are sorted on insertion itself. Then I would not have to sort the final array all in one go, and the cost would be amortized (a speculation on my part).

Another issue arises in that case: Tie::File uses an array to abstract the lines in a file, so reading lines into a tree and then serializing them back into an array would require additional memory AND processing.

The question is: would that cost more than the current cost of sorting the tied array?

1 Answer

Answered by Editorial Team on May 11, 2026 at 7:36 pm

    Tie::File is very slow. There are two reasons for this: First, tied variables are significantly slower than standard variables. The other reason is that in the case of Tie::File the data in your array is on disk rather than in memory. This greatly slows access. Tie::File’s cache can help performance in some circumstances but not when you just loop over the array one element at a time as you do here. (The cache only helps if you revisit the same index.) The time to use Tie::File is when you have an algorithm that requires having all the data in memory at once but you don’t have enough memory to do so. Since you’re only processing the file one line at a time using Tie::File is not only pointless, it’s harmful.
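A sketch of the same record loop without Tie::File, reading the input one line at a time with a plain filehandle (same RECORD/. framing as the script in the question):

```perl
use strict;
use warnings;

# Read records framed by "RECORD" ... ".", sorting the fields of each
# record as it completes -- no tied array, no disk-backed cache.
sub sort_records {
    my ($in_fh, $out_fh) = @_;
    my @fields;
    my $in_record = 0;
    while (my $line = <$in_fh>) {
        chomp $line;
        if ($line eq 'RECORD') {
            $in_record = 1;
            @fields    = ();
        }
        elsif ($in_record && $line eq '.') {
            print {$out_fh} "RECORD\n";
            print {$out_fh} "$_\n" for sort @fields;
            print {$out_fh} ".\n";
            $in_record = 0;
        }
        elsif ($in_record) {
            push @fields, $line;
        }
    }
}
```

Only one record's fields are held in memory at a time, so this stays cheap even on the 30–35 MB files mentioned in the question.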

    I don’t think a trie is the right choice here. I’d use a plain HoH (hash of hashes) instead. Your files are small enough that you should be able to get everything in memory at once. I recommend parsing each file and building a hash that looks like this:

    %data = (
      id1 => {
        field1 => 'value1',
        field2 => 'value2',
      },
      id2 => {
        field1 => 'value1',
        field2 => 'value2',
      },
    );
    

    If you use your mappings to normalize the field names while building the data structure it will make the comparison easier.
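A sketch of that parsing step, under the same assumptions as the question's script (a RECORD/. framing) plus two hypothetical details: a `NAME=VALUE` line format and an `ID` field carrying the record's unique identifier:

```perl
use strict;
use warnings;

# Parse a whole file into { id => { field => value, ... }, ... },
# normalizing legacy field names through the mapping as we go.
sub load_file {
    my ($fh, $field_map) = @_;
    my (%data, %record);
    while (my $line = <$fh>) {
        chomp $line;
        if ($line eq 'RECORD') {
            %record = ();
        }
        elsif ($line eq '.') {
            # Assume each record carries its unique id in an "ID" field.
            my $id = delete $record{ID};
            $data{$id} = {%record} if defined $id;
        }
        elsif ($line =~ /^(\w+)=(.*)$/) {
            my $name = $field_map->{$1} // $1;   # normalize the field name
            $record{$name} = $2;
        }
    }
    return \%data;
}
```

The legacy file is loaded with its mapping and the new file with an empty mapping, so both end up keyed by the same normalized field names.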

    To compare the data, do this:

    1. Perform a set comparison of the keys of the two hashes. This should generate three lists: The IDs present in just the legacy data, the IDs present in just the new data, and the IDs present in both.
    2. Report the lists of IDs that only appear in one data set. These are records that don’t have a corresponding record in the other data set.
    3. For the IDs in both data sets, compare the data for each ID field by field and report any differences.
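The three steps above can be sketched as follows, given two hashes in the shape shown earlier:

```perl
use strict;
use warnings;

# Compare two hash-of-hashes structures keyed by record id. Returns
# the ids unique to each side and a list of field-level differences.
sub compare {
    my ($legacy, $new) = @_;
    my @only_legacy = grep { !exists $new->{$_} }    keys %$legacy;
    my @only_new    = grep { !exists $legacy->{$_} } keys %$new;
    my @diffs;
    for my $id (grep { exists $new->{$_} } keys %$legacy) {
        for my $field (keys %{ $legacy->{$id} }) {
            my ($l, $n) = ($legacy->{$id}{$field}, $new->{$id}{$field});
            push @diffs, [ $id, $field, $l, $n ]
                if !defined $n || $l ne $n;
        }
    }
    return (\@only_legacy, \@only_new, \@diffs);
}
```

Extra fields that exist only in the new data are silently ignored here, matching the behavior of the question's existing tool; iterating over the new record's keys as well would surface them.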

© 2021 The Archive Base. All Rights Reserved