I’m currently starting to use git as my version control system, but I do a fair bit of web/game development, which of course requires images (binary data) to be stored. So if my understanding is correct: if I commit an image and it changes 100 times, then when I fetch a fresh copy of that repo I’d basically be downloading all 100 revisions of that binary file?
Isn’t this an issue with large repos where images change regularly? Wouldn’t the initial fetch of the repo end up becoming quite large? Has anybody experienced any issues with this in the real world? I’ve seen a few alternatives, for instance using submodules and keeping images in a separate repo, but this only keeps the codebase smaller; the image repo would still be huge. Basically I’m just wondering if there’s a nice solution to this.
I wouldn’t call that a “checkout”, but yes: the first time you fetch the repository, provided that the binary data is huge and incompressible, it’s going to be exactly that – huge. And since the law of conservation is still in effect, breaking it into submodules won’t save you space or time on the initial pull of the repository either.
One possible solution is still using a separate repository together with the `--depth` option when cloning it. Shallow repositories have some limitations, but I don’t remember what exactly, since I never used them. Check the docs; the keyword is “shallow”.

Edit: From `git-clone(1)`:
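A minimal sketch of the shallow-clone approach. It builds a throwaway local repository with two commits and then clones only the latest one; the paths under `/tmp` and the repo names are purely illustrative, and this assumes `git` is on your PATH (over a plain local path git ignores `--depth`, so a `file://` URL is used to force the regular transport):

```shell
# Set up a throwaway repo with two (empty) commits to stand in
# for a history of binary-asset revisions.
rm -rf /tmp/assets-repo /tmp/assets-shallow
git init -q /tmp/assets-repo
git -C /tmp/assets-repo -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "first revision"
git -C /tmp/assets-repo -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "second revision"

# --depth 1 fetches only the most recent commit, not the full history.
git clone -q --depth 1 "file:///tmp/assets-repo" /tmp/assets-shallow

# The shallow clone sees a single commit even though the source has two.
git -C /tmp/assets-shallow rev-list --count HEAD   # prints 1
```

On a real asset repository this keeps the initial download proportional to the current assets rather than every historical revision of them.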