Assuming Visual C/C++ 6, I have a complex data structure of 22399 elements that looks like this:
{ { '(SAME', 'AS', 'U+4E18)', 'HILLOCK', 'OR', 'MOUND'}, { 'TO', 'LICK;', {1, 1, 0}, 'TASTE,', 'A', 'MAT,', 'BAMBOO', 'BARK'}, { '(J)', 'NON-STANDARD', 'FORM', 'OF', 'U+559C', ',', {1, 1, 0}, 'LIKE,', 'LOVE,', 'ENJOY;', {1, 1, 4}, 'JOYFUL', 'THING'}, { '(AN', 'ANCIENT', {1, 2, 2}, {1, 2, 3}, 'U+4E94)', 'FIVE'}, ... }
What’s the best way to declare this? I’ve tried things like
char * abbrevs3[22399][] = { ... };
and
char * abbrevs3[22399][][] = { ... };
but the compiler whinges something chronic.
EDIT: The data is a database of descriptions of certain Unihan characters. I’ve been exploring various ways of compacting the data. As it stands, there are 22399 entries, each of which may contain a varying number of strings, or triplets of { abbrev marker, line where last seen, element of that line where last seen }.
From the way Greg’s talking, I may need to have each line contain the same number of elements, even if some of them are empty strings. Is that the case?
EDIT #2: And it occurs to me that some of the numeric values in the triplets are way outside the limits of char.
I just read your new posts and re-read the original post, and I think I just fully understood the goal here. Sorry it took so long, I’m kind of slow.
To paraphrase the question, line 4 of the original example is:

{ '(AN', 'ANCIENT', {1, 2, 2}, {1, 2, 3}, 'U+4E94)', 'FIVE'}

You’d want to translate the triples into references to strings used earlier, in an attempt to compress the data. Counting lines and elements from zero, {1, 2, 2} and {1, 2, 3} point back at 'FORM' and 'OF' in the third entry, so that line expands to:

{ '(AN', 'ANCIENT', 'FORM', 'OF', 'U+4E94)', 'FIVE'}
If the goal is compression I don’t think you’ll see much gain here. The self-referencing triples are each 3 bytes, but the strings that are being substituted out are only 8 bytes total, counting null terminators, and you only save 2 bytes on this line. And that’s for using chars. Since your structure is so big that you’re going to need to use ints for references, your triple is actually 12 bytes, which is even worse. In this case you’ll only ever save space by substituting for words that are 12 ascii characters or more.
If I’m totally off base here then feel free to ignore me, but I think the approach of tokenizing on spaces and then removing duplicate words is just a poor man’s Huffman compression. Huffman coding over an alphabet of the longest common substrings, or some other standard text-compression method, would probably work well for this problem.
If for some reason that isn’t an option, though, I would build a list of all the unique words in your data and use it as a lookup table, then store every string as a list of indexes into that table. You’d have to use two tables, but in the end it might be simpler, and it would save the space currently taken by the leading 1’s you’re using as the ‘abbrev marker’. Basically, your abbreviation markers would become a single index instead of a triplet.
So,
You’d still lose a lot of space if your strings aren’t of roughly uniform length though.