The requirements are:
Fact 1: We have some data files produced by a legacy system.
Fact 2: We have some data files produced by a new system that should eventually replace the legacy one.
Fact 3:
- Both files are text/ASCII files, with records composed of multiple lines.
- Each line within a record consists of a fieldname and a fieldvalue.
- The line format differs between 1 and 2, but fieldname and fieldvalue can be extracted from each line with a regex.
- Field names can change between 1 and 2, but we have a mapping that relates them.
- Each record has a unique identifier that lets us relate a legacy record to a new record, since the ordering of records in the output file need not be the same across both systems.
- Each file to compare is a minimum of 10 MB, with an average case of 30-35 MB.
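For illustration, the per-line extraction might look like this (the two line formats below are assumptions; substitute the real patterns for each system):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical line formats -- adjust the regexes to the real files.
# Legacy:  "FIELDNAME = value"
# New:     "fieldname: value"
sub parse_legacy_line {
    my ($line) = @_;
    return ($1, $2) if $line =~ /^(\w+)\s*=\s*(.*)$/;
    return;    # not a field line
}

sub parse_new_line {
    my ($line) = @_;
    return ($1, $2) if $line =~ /^(\w+)\s*:\s*(.*)$/;
    return;
}

my ($name, $value) = parse_legacy_line("firstField = 1");
print "$name -> $value\n";    # firstField -> 1
```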
Fact 4: As and when we iterate through building the new system, we need to compare the files produced by both systems under exactly the same conditions and reconcile the differences.
Fact 5: This comparison is currently done manually using an expensive visual diff tool. To help with this, I wrote a tool that maps the two different fieldnames to a common name and then sorts the field names in each record, in each file, so that they line up in order (new files can have extra fields, which are ignored in the visual diff).
Fact 6: Because the comparison is done manually by humans, and humans make mistakes, we are getting false positives AND negatives, which is significantly impacting our timelines.
Obviously the question is, what should ‘ALG’ and ‘DS’ be?
The scenario I have to address:
I want to build a Perl program that will
- read the relevant info from both files into a data structure 'DS';
- process and find the differences between records in the DS, using an algorithm 'ALG';
- display/report statistics to the end user: how many lines (values) differed between the records, where they differ, whether the values are completely different, and whether lines are missing (files from the new system can have extra fields, but they MUST contain all lines that are present in the files produced by the legacy system).
My suggestions:
DS: a nested hash (hash of hashes), tied to disk.
Looks like:
$namedHash{'<unique field value across both records>'} = {
    legacy_system => {
        goodField   => 'I am good!',
        firstField  => 1,
        secondField => 3,
    },
    new_system => {
        firstField  => 11,
        secondField => 33,
        goodField   => 'I am good!',
    },
};
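A sketch of that structure in code (the tie-to-disk part is one possible approach, shown in the comments; at 30-35 MB per file, plain in-memory hashes may well be enough):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The DS is ordinary nested hashes.  To tie it to disk instead,
# MLDBM over DB_File is one option (module choice is an assumption):
#
#   use MLDBM qw(DB_File Storable);
#   use Fcntl;
#   tie my %namedHash, 'MLDBM', 'compare.db', O_CREAT | O_RDWR, 0640
#       or die "Cannot tie: $!";
#
# Caveat: MLDBM only sees whole top-level assignments, so build each
# record's sub-hashes first, then store them in one assignment.

my %namedHash;
my $record = {
    legacy_system => { firstField => 1,  secondField => 3,  goodField => 'I am good!' },
    new_system    => { firstField => 11, secondField => 33, goodField => 'I am good!' },
};
$namedHash{'REC001'} = $record;    # one whole-value assignment
```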
ALG: a custom key-by-key comparison between the anonymous hashes pointed to by the legacy_system and new_system keys. Any differences are recorded by inserting a new key, 'differences', whose value is an array of the field names that differ between the legacy and new systems.
Hence, for this example, the output of my ALG will be:
$namedHash{'<unique field value across both records>'} = {
    legacy_system => {
        goodField   => 'I am good!',
        firstField  => 1,
        secondField => 3,
    },
    new_system => {
        firstField  => 11,
        secondField => 33,
        goodField   => 'I am good!',
    },
    differences => [ 'firstField', 'secondField' ],
};
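A minimal sketch of that key-by-key pass (only fields present in the legacy record are compared, so extra new-system fields are ignored; a 'missing' key is an extra assumption here, to flag legacy fields absent from the new record):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Compare each record's legacy fields against the new-system fields.
sub compare_records {
    my ($ds) = @_;
    for my $id (keys %$ds) {
        my $legacy = $ds->{$id}{legacy_system} || {};
        my $new    = $ds->{$id}{new_system}    || {};
        my (@differ, @missing);
        for my $field (sort keys %$legacy) {
            if (!exists $new->{$field}) {
                push @missing, $field;    # legacy field absent in new file
            }
            elsif ($new->{$field} ne $legacy->{$field}) {
                push @differ, $field;     # present in both, values differ
            }
        }
        $ds->{$id}{differences} = \@differ  if @differ;
        $ds->{$id}{missing}     = \@missing if @missing;
    }
}

my %namedHash = (
    'REC001' => {
        legacy_system => { firstField => 1,  secondField => 3,  goodField => 'I am good!' },
        new_system    => { firstField => 11, secondField => 33, goodField => 'I am good!' },
    },
);
compare_records(\%namedHash);
print "@{ $namedHash{'REC001'}{differences} }\n";    # firstField secondField
```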
What would you have done/suggest in this given scenario?
Why not import all the data into a SQLite database? You need only one table, with a single primary key corresponding to the unique identifier common to both systems. The columns should be the union of the legacy and new fields.
Import one data set first, say the set generated by the new system. Then, for every item in the legacy set, try an UPDATE on the corresponding row in the table: if the UPDATE affects no rows, you know the new data set is missing an entry that existed in the old system.
If any of the columns corresponding to the legacy data are NULL, then you know which entries in the new system did not exist in the legacy system.
You can then SELECT rows where any column from the new system does not match the corresponding column from the old system.
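A sketch of that flow with DBI and DBD::SQLite (the table and column names are made up, and only two fields are shown; note that in DBI an UPDATE matching no rows returns a zero row count rather than failing):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

# One table; columns are the union of legacy and new fields.
$dbh->do(q{
    CREATE TABLE records (
        id         TEXT PRIMARY KEY,
        new_first  TEXT, new_second TEXT,
        leg_first  TEXT, leg_second TEXT
    )
});

# 1. Import the new-system data set.
$dbh->do(q{INSERT INTO records (id, new_first, new_second)
           VALUES (?, ?, ?)}, undef, 'REC001', '11', '33');

# 2. For each legacy record, UPDATE; zero rows affected means the
#    new system is missing a record the legacy system produced.
my $rows = $dbh->do(q{UPDATE records SET leg_first = ?, leg_second = ?
                      WHERE id = ?}, undef, '1', '3', 'REC001');
warn "REC001 missing from new system\n" if $rows == 0;

# 3. Rows where any new-system column disagrees with its legacy twin
#    (IS NOT is SQLite's NULL-safe inequality).
my $diffs = $dbh->selectall_arrayref(q{
    SELECT id FROM records
    WHERE new_first IS NOT leg_first OR new_second IS NOT leg_second
});
print "differs: $_->[0]\n" for @$diffs;
```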
IMHO, this is more flexible than a hash-table-based approach.