I have a really big log file (9GB, I know I need to fix that) on my box. I need to split it into chunks so I can upload it to Amazon S3 for backup. S3 has a max file size of 5GB, so I would like to split this into several chunks and then upload each one.
Here is the catch: I only have 5GB free on my server, so I can’t just do a simple unix split. Here is what I want to do:
- grab the first 4GB of the log file and spit it out into a separate file (call it segment1)
- Upload segment1 to S3.
- rm segment1 to free up space.
- grab the middle 4GB from the log file and upload it to S3. Clean up as before.
- Grab the remaining 1GB and upload to S3.
I can’t find the right unix command to split with an offset. `split` only does equal-sized chunks, and `csplit` doesn’t seem to have what I need either. Any recommendations?
One (convoluted) solution is to compress it first. A textual log file should easily go from 9G to well below 5G. Then you delete the original, giving you 9G of free space.
Then you pipe that compressed file directly through `split` so as not to use up more disk space. What you’ll end up with is a compressed file and the three files for upload. Upload them, then delete them, then uncompress the original log.
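A rough sketch of that approach, assuming GNU `split`, `gzip` for the compression, and an illustrative file name (`biglog`); the 2G piece size is just an example that stays under the S3 limit:

```
gzip biglog                        # leaves biglog.gz (well under 5G) and removes the original
split -b 2G biglog.gz biglog.gz.   # pieces named biglog.gz.aa, biglog.gz.ab, ...
# upload the biglog.gz.?? pieces to S3, then remove them
rm biglog.gz.??
gunzip biglog.gz                   # restore the original log locally
```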
=====
A better solution is to just count the lines (say 3 million) and use an awk script to extract and send the individual parts.
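A minimal sketch of that idea, where the file and piece names (`biglog`, `bit1`..`bit3`) are just illustrative and the line boundaries assume roughly 3 million lines; each piece is about a third of the file, so it fits in the free 5GB:

```
awk 'NR <= 1000000' biglog > bit1                   # first million lines
# upload bit1 to S3, then: rm bit1
awk 'NR > 1000000 && NR <= 2000000' biglog > bit2   # second million
# upload bit2 to S3, then: rm bit2
awk 'NR > 2000000' biglog > bit3                    # the rest
# upload bit3 to S3, then: rm bit3
```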
Then, at the other end, you can either process `bit1` through `bit3` individually, or recombine them, as sketched below.
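Recombining is just concatenation; a sketch, again using the illustrative `bit1`..`bit3` and `biglog` names:

```
cat bit1 bit2 bit3 > biglog   # reassemble the original log from the pieces
```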
And, of course, this splitting can be done with any of the standard text processing tools in Unix: `perl`, `python`, `awk`, or a `head`/`tail` combination. It depends on what you’re comfortable with.
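For instance, a `head`/`tail` combination could pull out the middle chunk (again with illustrative names and line counts):

```
head -n 2000000 biglog | tail -n 1000000 > bit2   # lines 1,000,001 through 2,000,000
```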