We're often working on a project where we've been handed a large data set (say, a handful of files that are 1GB each), and are writing code to analyze it.
All of the analysis code is in Git, so everybody can check changes in and out of our central repository. But what to do with the data sets that the code is working with?
I want the data in the repository:
- When users first clone the repository, the data should come with it.
- The data isn't 100% read-only; now and then a data point is corrected or a minor formatting change happens, and users should be notified of such changes at their next checkout.
However, I don't want the data in the git repository:
- Cloning a spare copy (so I have two versions in my home directory) pulls a few GB of data I already have. I'd rather either keep it in a fixed location [set a rule that data must be in ~/data] or add links as needed (see the sketch after this list).
- With data in the repository, copying to a thumb drive may be impossible, which is annoying when I'm just working on a hundred lines of code.
- If an erroneous data point is fixed, I'm never going to look at the erroneous version again. Changes to the data set can be tracked in a plain text file or by the person who provided the data (or just not at all).
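For reference, the fixed-location/symlink arrangement mentioned above would look roughly like this (the paths are just examples):

```
# keep the data in one canonical place
mkdir -p ~/data/project-x

# link it into the working copy instead of copying gigabytes around
ln -s ~/data/project-x ~/src/project-x/data

# and keep the link itself out of git
echo "data" >> ~/src/project-x/.gitignore
```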
It seems that I need a setup with a main repository for code and an auxiliary repository for data. Any suggestions or tricks for gracefully implementing this, either within git or in POSIX at large? Everything I've thought of is in one way or another a kludge.
Use submodules to isolate your giant files from your source code. More on that here:
http://git-scm.com/book/en/v2/Git-Tools-Submodules
The examples talk about libraries, but this works for large bloated things like data samples for testing, images, movies, etc.
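A rough sketch of what that could look like, with placeholder repository URLs and paths:

```
# in the code repository, pull the data repository in as a submodule
git submodule add https://example.com/team/project-data.git data
git commit -m "Add data repository as a submodule"

# a fresh clone gets the code immediately and the data only on request
git clone https://example.com/team/project-code.git
cd project-code
git submodule update --init data
```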
You should be able to fly while developing, only pausing here and there when you need to look at new versions of the giant data.
Sometimes it's not even worthwhile tracking changes to such things.
To address your issues with getting more clones of the data: If your git implementation supports hard links on your OS, this should be a breeze.
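For instance, with stock git, a clone from a local path on the same filesystem hard-links the object store instead of copying it (paths are examples):

```
# a second working copy on the same filesystem shares objects via hard links
git clone /home/me/project /home/me/project-spare

# pass --no-hardlinks if you ever want fully independent object copies
git clone --no-hardlinks /home/me/project /home/me/project-backup
```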
The nature of your giant dataset also matters. If you change some of it, are you changing giant blobs or a few rows in a set of millions? This determines how effective a VCS will be as a notification mechanism for changes to it.
Hope this helps.
This sounds like the perfect occasion to try git-annex:
git-annex allows managing files with git, without checking the file contents into git. While that may seem paradoxical, it is useful when dealing with files larger than git can currently easily handle, whether due to limitations in memory, checksumming time, or disk space.
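A minimal sketch of the workflow, assuming a dedicated data repository (the repo and file names are placeholders):

```
git init dataset && cd dataset
git annex init "workstation"

cp ~/downloads/measurements.csv.gz .
git annex add measurements.csv.gz   # checks in a symlink; the content lives under .git/annex
git commit -m "Add measurements"

# in another clone, fetch the actual content only when you need it
git annex get measurements.csv.gz
```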
Bup claims to do a good job of incrementally backing up large files.
I think bup assumes a separate repository to do its work, so you'd end up using submodules anyway. However, if you want good bandwidth reduction, this is the thing.
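Roughly, and assuming the data lives in ~/data (the directory and snapshot name are just examples):

```
bup init                     # creates the backup repository (~/.bup by default)
bup index ~/data             # scan the data directory for changes
bup save -n dataset ~/data   # first snapshot

# later runs only store the chunks that actually changed
bup index ~/data
bup save -n dataset ~/data
```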
As an alternative, the data could reside in a folder that git doesn't track but that is synced by a p2p service. We use this solution for a dataset of several tens of GB and it works quite nicely.
- The dataset is shared directly between the peers.
- Depending on the p2p software, older versions can be kept and restored.
- The dataset is automatically kept up to date whenever it changes.
Syncthing is the software we use.
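The only git-side setup is keeping that folder out of the repository (the folder name is an example):

```
# the synced folder lives inside the working tree but stays untracked
echo "data/" >> .gitignore
git add .gitignore
git commit -m "Ignore the p2p-synced data directory"
```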
I recommend Git Large File Storage (Git LFS), which integrates seamlessly into the git ecosystem. It checks lightweight text pointers into your repository while keeping the large file contents out of it.
After installing (https://packagecloud.io/github/git-lfs/install), you can set it up in your local repo with `git lfs install`. Then using it is easy: tell it what types of files you want to track (`git lfs track "*.gz"`), make sure you are tracking .gitattributes, and it should just work.
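Putting it together, a typical first session might look like this (the *.gz pattern and file name are just examples):

```
git lfs install                  # once per repository (or per machine)
git lfs track "*.gz"             # writes the pattern into .gitattributes
git add .gitattributes
git add data/measurements.gz     # committed as a small LFS pointer
git commit -m "Track compressed data with Git LFS"
git push                         # uploads the real content to the LFS endpoint
```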