Using named branches as you've described is a fine choice (though not the only one), but I'd still suggest using a couple of separate clones in well-known locations to facilitate the process. Pretending that http://host/hg/ is your hgweb (formerly hgwebdir) install (though ssh:// works just fine too), you'd have something like:
http://host/hg/vendor
http://host/hg/custom
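If it helps, serving both repositories from a single hgweb install is just a couple of entries in hgweb.config; here's a minimal sketch, where the filesystem paths are assumptions you'd adjust for your server:

[paths]
vendor = /srv/hg/vendor
custom = /srv/hg/custom

The repositories themselves would be created on the server beforehand, e.g. with hg init /srv/hg/vendor and then hg clone /srv/hg/vendor /srv/hg/custom so the two share history from the start.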
Two separate repositories, where data flows from vendor to custom but never in the other direction. The named branch default would be the only one in vendor, while custom would have both default and stable branches.
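To get that branch layout in place, one way (a minimal sketch run from a clone of custom; the commit message is just illustrative) is:

hg update default
hg branch stable                              # the next commit opens the 'stable' named branch
hg commit -m 'open the stable branch'
hg push --new-branch http://host/hg/custom    # --new-branch allows pushing the newly created branch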
When you get a new code drop from the vendor, you unpack it into the working directory of the vendor repo and run:
hg addremove
hg commit -m 'new drop from vendor, version number x.x.x'
Your history in that vendor repo will be linear, and it will never have anything you wrote.
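If you ever want to sanity-check that, something like the following (run inside the vendor repo, using hg's default output) should show a single head and a straight line of vendor drops:

hg heads          # should list exactly one head
hg log --limit 3  # the most recent vendor drops, all on the default branch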
Now, in your local clone of the custom repo, you'd do:
hg update default # update to the latest head in your default branch
hg pull http://host/hg/vendor # bring in the new changes from vendor as a new head
hg merge tip # merge _your_ most recent default cset with their new drop
And then you do the work of merging your local changes on default with their new code drop. When you're happy with the merge (tests pass, etc.), you push from your local clone back to http://host/hg/custom.
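In commands, that last step is just a push back to the shared custom repo (the hg outgoing check is optional):

hg outgoing http://host/hg/custom   # optional: review what would be pushed
hg push http://host/hg/custom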
That process can be repeated as necessary, provides good separation between your history and theirs, and lets everyone on your team who isn't responsible for accepting new code drops from the vendor concern themselves only with a normal default/stable setup in a single repo, http://host/hg/custom.