Mercurial manifest sharding
Problem statement: imagine a repository with 1 million to 1 billion files.
The RAM overhead of a single manifest becomes a problem somewhere in this range.
Checkout: we don't want to materialize the whole working copy on the local machine, and we don't want the whole manifest there either.
Limitations of large manifests/repos:
- manifest too large for RAM
- checkout too large for local disk
- clone size too large for local disk
- manifest resolution too much CPU
- 100k+ files on HFS+ has bad perf
The current plan for making the manifest hash something that clients with only a partial checkout can compute is a per-directory hash that bubbles up: each directory gets its own submanifest, and its parent stores an entry for that directory node with a 'd' in the flags field. We considered starting a new shard wherever hash(filename) mod N == 0 (for some N), but decided that would probably lead to a lot of churn, and it also bakes the sharding scheme into the manifest hash (which might be suboptimal).
Pulling sharded manifests will require client support - that's the second step.
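As a rough sketch of that per-directory hashing (assuming a simplified nodeid, since a real Mercurial nodeid also hashes in the parent nodeids, and a hypothetical nested-dict representation of the tree), each subdirectory becomes an entry in its parent whose flags field contains 'd':

    import hashlib

    def _node(text):
        # Simplified stand-in for a Mercurial nodeid: the real hash also
        # covers the parent nodeids of the submanifest revision.
        return hashlib.sha1(text).hexdigest().encode('ascii')

    def shard_dir(tree):
        """Hash one directory's submanifest and bubble the result upward.

        `tree` is a hypothetical nested dict with bytes keys: a file maps
        to a (hexnode, flags) pair, a subdirectory maps to another dict.
        Each subdirectory becomes a manifest line whose flags field is 'd',
        so a client holding only part of the checkout can still recompute
        and verify the shards it has.
        """
        lines = []
        for name in sorted(tree):
            value = tree[name]
            if isinstance(value, dict):
                subnode, _subtext = shard_dir(value)
                lines.append(b'%s\0%s%s\n' % (name, subnode, b'd'))
            else:
                hexnode, flags = value
                lines.append(b'%s\0%s%s\n' % (name, hexnode, flags))
        text = b''.join(lines)
        return _node(text), text

    # e.g. shard_dir({b'README': (b'a' * 40, b''),
    #                 b'mobile': {b'app.c': (b'b' * 40, b'')}})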
Discussion (from titanpad)
Directory recursive hashes: we could compute the hash for submanifests by iterating recursively over the directories in the submanifest content. This would produce a hash that is unique to the commit's directory structure and agnostic to how the manifest is sharded, similar to git tree hash calculations (see the sketch after the pros/cons list below). Pros:
- allows changing the manifest format in the future without changing the hashes
- allows delivering custom-sharded manifests to users on demand
Cons:
- more expensive hash algorithm
- will require a new manifest version flag (since it won't be backwards compatible)
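A sketch of that recursive, sharding-agnostic computation, reusing the shard_dir() helper from the sketch above; the flat {path: (hexnode, flags)} input and both function names are illustrative, not existing Mercurial APIs:

    def group_by_directory(flat):
        """Regroup a flat {path: (hexnode, flags)} manifest into nested dicts."""
        root = {}
        for path, entry in flat.items():
            parts = path.split(b'/')
            d = root
            for part in parts[:-1]:
                d = d.setdefault(part, {})
            d[parts[-1]] = entry
        return root

    def recursive_hash(flat):
        # The hash is computed from the logical directory structure, so two
        # servers that shard the same manifest differently still agree on
        # the commit's manifest hash (much like a git tree hash).
        return shard_dir(group_by_directory(flat))[0]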
Related: sparse checkouts
Currently hg sparse --include mobile/
It doesn't matter what else the repo contains; you only get the mobile directory.
hg sparse --enable-profile mobile
Profiles live in the repo, in .hgsparse files.
Proposal: have team-specific .hgsparse files in directories, which allows changes without contention (hg sparse --enable-profile mobile[/.hgsparse]).
future magic: hg clone --sparse mobile (to avoid initial full clone)
Merges get a little complicated with the current regexp matching. Proposal: use directories for includes, but allow regex/glob for excludes (so that certain types of files, like Photoshop files, are never written).
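For example, a team-owned profile such as mobile/.hgsparse might look roughly like the following (the [include]/[exclude] sections follow the sparse extension's profile format; the exact pattern syntax here is illustrative): directories for includes, globs for excludes.

    [include]
    mobile/

    [exclude]
    glob:mobile/**/*.psd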
Narrow changelog (this was part of the discussion at the 3.2Sprint, but should probably be its own page)
ellipsis nodes listing heads and roots
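Purely as an illustration of that idea (none of these names exist in Mercurial): an ellipsis node stands in for a stretch of history the client did not download, and records the real changesets bounding that stretch so the client-side DAG stays connected.

    from collections import namedtuple

    # Hypothetical representation of an ellipsis node: it replaces an
    # elided stretch of history but lists the changesets that bound it.
    EllipsisNode = namedtuple('EllipsisNode', [
        'node',   # nodeid of the ellipsis changeset itself
        'heads',  # nodeids of the heads of the elided history
        'roots',  # nodeids of the roots of the elided history
    ])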
Running log and bisect inside an ellipsis is difficult. Should we change the default so hg log only considers downloaded changesets?
Can the DAG have dangling links without exploding when we hit them?
Should we have remote fallback for hg log? durin42 thinks so.