There seem to be three options:
1) just clone from the remote repo as needed (each new clone can take 20 minutes and 500MB)
2) clone 2 local repos from the remote repo, each 500MB (1GB total), so there are always 2 local repos to work with
3) clone 1 local repo from the remote repo, call it 'master', never touch this master directly, and clone other local repos from this master as needed (see the sketch right after this list)
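For concreteness, the setup in (3) would look roughly like this; the remote URL and clone names are just placeholders:

$ hg clone http://hg.example.com/project master   # slow: the one full 500MB clone from the remote
$ hg clone master clone01                         # fast: local clone, the .hg directory is hard-linked
$ hg clone master clone02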
I started off using (1), but whenever a quick bug fix came up I needed to do a fresh clone, which meant another 20 minutes, so method (2) looked better because there are always 2 independent local repos ready to use.
But sometimes a repo becomes "weird" because of merges that do damage. Even after things are fixed on the remote repo, any bad merge still sitting in a local repo shows up in hg outgoing and will cause damage again the next time we push, so we just delete that local repo and clone from the remote again to start "fresh", which takes another 20 minutes. (Actually, we can switch to local repo 2 first, rename local repo 1 to repo_old, and then kick off a new clone before sleep or before going home.)
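As a side note, one cheap way to spot a damaging merge before it gets pushed again is to look at what hg would send; a minimal sketch (the revset syntax needs a reasonably recent Mercurial, and hg strip comes from the bundled mq extension, which has to be enabled):

$ hg outgoing -v                       # list every changeset a push would send
$ hg log -r "outgoing() and merge()"   # show only the outgoing merge changesets
$ hg strip <bad-rev>                   # optionally drop the bad merge locally instead of re-cloning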
Is (3) the best option? On a Mac, the master clone takes 500MB and 20 minutes, but the other local clones are super fast and take much less than 500MB because Mercurial uses hard links for local clones on a Mac (how do we find out how much disk space a clone uses without counting the hard-linked content?). And if we use (3), how do we do commits and pushes? Suppose we clone from the remote repo to a local "master", and then make local clones "clone01", "clone02", "clone03", etc. Do we work inside clone01, and then when an urgent fix is needed, go to master, do an hg pull and hg update, go to clone02 and also do hg pull and hg update, fix it in clone02, test it, hg commit, hg push to the master, and then go to master and do an hg push from there? And then when clone01's project is done, again go to master, pull, update, go to clone01, pull, update, merge, test, commit, push, go to master, push to the remote repo? That's a lot of steps!
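Written out as commands, the urgent-fix round trip described above is roughly this (clone names as in the question; each clone's default path points back to the repo it was cloned from, so a bare hg pull / hg push already goes to the right place):

$ (cd master && hg pull)       # refresh master from the remote repo
$ cd clone02
$ hg pull && hg update         # clone02 pulls from its default path, ../master
  ...fix the bug, test it...
$ hg commit -m "urgent fix"
$ hg push                      # goes to the local master
$ (cd ../master && hg push)    # master forwards the fix to the remote repo

It is still a number of steps, but each one is cheap once the only slow clone is the local master.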
Maybe a fourth option would work better in your case: Mercurial Queues, with the patch queue itself kept in a local Mercurial repository.
Using MQ you can (see the sketch after this list):
- Clone the master repository locally.
- Work on your code and keep your changes isolated in patches.
- When new updates from upstream are available, pop your patches, pull in the updates, and then re-apply (push) your patches on top of the new tip.
- Once you're happy with your work, finish the patches into regular changesets in your local repository and push them upstream.
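A minimal sketch of that cycle, assuming the bundled mq extension is enabled in your .hgrc (the patch name is arbitrary):

$ hg qinit -c               # optional: keep the patch queue itself in a repository
$ hg qnew my-feature.patch  # start a patch that will hold your changes
  ...edit code...
$ hg qrefresh               # record the current changes into the patch
$ hg qpop -a                # set all patches aside
$ hg pull -u                # bring in the upstream updates
$ hg qpush -a               # re-apply your patches on top of the new tip
$ hg qfinish -a             # satisfied: turn the applied patches into real changesets
$ hg push                   # push them upstream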
You don't have to keep the patches in a local repository, but it's a nice bonus option that is worth considering.
Chapter 12 of Mercurial: The Definitive Guide explains the process in fairly good detail.
I don't know that your understanding of the space considerations is correct. When cloning a local repository, Mercurial uses hard links for the .hg directory (the actual repository), which takes up no additional space. The working directory does take up space (though hopefully not the full 500MB!), but the .hg directory only appears to, depending on the tools you use to check.
If you do an hg clone -U you create a clone without a working directory; it should take up almost no additional space and be created almost instantly.
I always keep an hg clone -U of the central repo in an unmodified state and then create clones off of that as needed. I push directly from those clones back to the remote repository.
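A rough sketch of that setup, including one way to answer the disk-space question: du counts hard-linked files only once per invocation, so the figure reported for the second path is the clone's real extra usage. The URL and directory names are placeholders:

$ hg clone -U http://hg.example.com/project central      # one slow pull, but no working directory
$ hg clone central work01                                # fast: the .hg directory is hard-linked to central
  ...hack, test, commit in work01...
$ (cd work01 && hg push http://hg.example.com/project)   # push straight back to the remote repo
$ du -sh central work01                                  # work01's number excludes the hard-linked content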
Mercurial Queues look really powerful, but I've never given myself the time to read all that documentation just to be able to put my current work aside and fix a small bug.
I use the attic extension.
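Enabling it is the usual third-party extension setup in your ~/.hgrc; the path below is only an example, point it at wherever you put your copy of the extension:

[extensions]
# path and file name are examples
attic = ~/src/hg-attic/attic.py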
It'll be like this:
...working happily, but then there is a quick bug fix...
$hg shelve work
...quickly fix the bug...
$hg ci
$hg unshelve
...continue with work
Sometimes I get an idea but no time to really play with it. To keep myself from forgetting it, I do this:
...working happily, idea drops in...
$hg shelve work
...start a unit test for the idea, or some other unfinished piece of code, enough to sketch the idea...
$hg shelve idea
$hg unshelve work
...continue with work
$hg ls
idea
*C work