
Git equivalent of subversion's $URL$ keyword expansion


I am considering migrating from subversion to git. One of the things we use subversion for is letting our sysadmins manage things like configuration files. To that end, we put $URL$ into each file, which expands to the file's location in the subversion tree. This lets the admins look at a file on some arbitrary host and figure out where in the tree it came from.
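For context, the subversion side we are replacing works roughly like this (a sketch; the repository URL and file name are just placeholders):

    svn propset svn:keywords "URL" etc/myconfig.conf
    svn commit -m "enable URL keyword expansion"
    # after checkout, a line containing $URL$ in etc/myconfig.conf reads e.g.
    # $URL: https://svnhost/repos/ops/trunk/etc/myconfig.conf $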

The closest analog I could find is gitattributes. There is the filter= directive, but it seems that git doesn't communicate to the filter what filename it's filtering, which would be necessary to turn $URL$ into a path.

There is also the ident directive, which would turn $Id$ into the blob hash. This might be usable if one could map that back into a pathname, but my git-fu isn't strong enough.
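The closest I can get is a reverse lookup over the whole tree, which feels too heavyweight to run per file (a sketch; <blob-sha> is a placeholder):

    # list every blob reachable from HEAD together with its path, then look
    # the hash up; each matching line reads: <mode> blob <sha>	<path>
    git ls-tree -r HEAD | grep <blob-sha>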

Any suggestions?

The workflow is as follows:

  1. Admin commits changes to the VCS repo
  2. Admin updates a central location that has checked out the repo
  3. Admin pulls the changes to the host using cfengine


As mentioned in "Does git have anything like svn propset svn:keywords or pre-/post-commit hooks?", Git does not support keyword expansion.

"Dealing with SVN keyword expansion with git-sv" provides a solution based on git config filter (which is not exactly what you want) and/or gitattributes.


The closest example of file information expansion I have found is still based on the smudge/clean approach, with this git Hash filter, but the clean part removes it from the file, and no path can be found.

This thread actually spells it out (as well as mentioning some git-fu commands which might contain what you are looking for; I have not tested them):

Anyway, smudge/clean does not give the immediate solution to the problem because of smaller technical shortcomings:

  • The smudge filter is not passed the name of the file being checked out, so it is not possible to find the exact commit identifier.
    However, this is alleviated by the fact that 'smudge' is only run for the changed files, so the last commit is the one needed.

  • The smudge filter is not passed a commit identifier. This is a bit more serious, as this information cannot be obtained from anywhere else.
    I tried to use the 'HEAD' value, but apparently it is not yet updated at the moment 'smudge' is run, so the files end up with the date of the "previous" commit rather than the commit being checked out.
    "Previous" means the commit that was checked out before. The problem gets worse if a different branch is checked out, as the files get the timestamp of the previous branch.

AFAIR, the lack of information in the smudge filter was intentional, to discourage this particular use of the smudge/clean mechanism. However, I think this can be reconsidered given Peter's use case: a "checkout-only" workspace for immediate publishing to a webserver.
Alternatively, anyone interested in this use case could implement additional smudge arguments as a site-local patch.

And then, there are small annoyances which seem to be inevitable: if you change the 'clean' filter and check out an earlier revision, it will be reported as having modifications (due to the changed 'clean' definition).


Since the %f option is nowadays available, scripts like git-rcs-keywords can do the task.

It is already mentioned in this answer.

gitattributes(5) manpage:

Sequence "%f" on the filter command line is replaced with
the name of the file the filter is working on. A filter 
might use this in keyword substitution. For example:

[filter "p4"]
    clean = git-p4-filter --clean %f
    smudge = git-p4-filter --smudge %f
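Building on %f, a minimal smudge/clean pair that rewrites $URL$ into the file's in-tree path could look like this (a sketch under stated assumptions: the filter name "url" and the url-smudge/url-clean helper scripts are hypothetical, not part of git):

    # .gitattributes
    *.conf filter=url

    # .git/config
    [filter "url"]
        smudge = url-smudge %f
        clean  = url-clean

    # url-smudge (hypothetical helper on PATH): reads the blob on stdin,
    # receives the path as $1 (filled in by git via %f), and expands
    # $URL$ to $URL: <path> $
    sed -e "s|\\\$URL\\\$|\\\$URL: $1 \\\$|"

    # url-clean (hypothetical helper on PATH): collapses the expanded
    # form back to the bare $URL$ placeholder on checkin
    sed -e 's|\$URL:[^$]*\$|\$URL\$|'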


Coming at the problem from a completely different angle, how do the files in question end up on the end hosts? I guess today it is either checked out there directly, or copied somehow from an already checked out repository on another host?

If so, could you modify your process so that files are checked out to a git repository, and a script does the $URL$ or other keyword expansion after checkout? That way you can do whatever substitutions you like, and you are only limited by what a script in a checked-out repository can figure out.
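A sketch of such a post-checkout step, assuming the keyword lives in *.conf files and that the remote URL plus the in-tree path is what should replace it:

    #!/bin/sh
    # run in the root of the checked-out repository after pulling;
    # stamps every tracked *.conf file with its origin URL and path
    # (note: this dirties the working tree, so it suits a checkout-only copy)
    origin=$(git config --get remote.origin.url)
    git ls-files '*.conf' | while read -r path; do
        sed -i "s|\\\$URL\\\$|\\\$URL: $origin/$path \\\$|" "$path"
    done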


We use a "canonical path" solution in our deployments (they are all internal, FWIW).

All software goes in e.g. /d/sw/xyz/a.c or D:\SW\xyz\a.c

All URLs to the deployed files reflect that e.g. http://host/d/sw/xyz/a.c.

URLs to the repository files start at "sw" e.g. git://githost/gitrepo/xyz/a.c

We encode these canonical paths in the configuration (should it ever need to change), and we have scripts/APIs that generate/reference the URLs on the fly for dynamic linking among components.
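For illustration, the path-to-URL mapping then reduces to a fixed string rewrite; a sketch using the example hosts above (githost/gitrepo are the placeholder names from this answer):

    #!/bin/sh
    # map a deployed canonical path to its repository URL, e.g.
    #   /d/sw/xyz/a.c  ->  git://githost/gitrepo/xyz/a.c
    echo "$1" | sed -e 's|^/d/sw/|git://githost/gitrepo/|'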

