Note:
This page is primarily intended for developers of Mercurial.
Performance Improvement
Status: Multi Part Project
Main proponents: Pierre-YvesDavid, GregorySzorc
This is a speculative project and does not represent any firm decisions on future behavior.
The goal of this page is to gather data about known performance bottlenecks and ideas about how to solve them.
1. Goal
Make Mercurial faster.
This might be about repositories of any size, even if most of the work is usually done for large repositories. Larger repositories can be affected by pathological cases and usually get more funding toward improving performance.
Performance on various reference repositories is tracked here: http://perf.octobus.net/
2. Performance Areas
2.1. Local Access
2.1.1. Status
Getting the status of files is an important operation. We can distinguish several pieces where performance matters:
Storing the expected state of files (dirstate). The format currently used is a flat file that has to be fully rewritten on each update. Facebook wrote a tree-based format allowing lighter updates; that storage is also meant to be mmap-friendly (see MMapPlan).
In addition, a Rust rewrite of the status code can yield impressive performance. Some of that gain comes from the fact that Rust is a compiled language, some of it from the ability to do this computation in parallel (a minimal sketch of the core comparison follows the list below):
- simple status implemented in Rust (as used by hg commit, hg diff, etc.)
- generic implementation of status in Rust (hg status, showing unknown and/or ignored files)
- caching around ignores
- better dirstate storage (a Facebook extension exists)
- further optimization of the Rust code
- a "fast path" for status that is implemented in Rust end-to-end
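To make the parallelism opportunity concrete, here is a minimal sketch of the sequential logic: compare each file's recorded size and mtime against the filesystem, then walk the tree for unknown files. All names and structures are illustrative, not Mercurial's actual implementation; the per-file stat() calls and the directory walk are the parts a Rust implementation can parallelize.

{{{#!python
import os

def simple_status(expected, root):
    """expected: {path: (size, mtime)} as a dirstate would record it."""
    modified, missing, seen = [], [], set()
    for path, (size, mtime) in expected.items():
        seen.add(path)
        try:
            st = os.lstat(os.path.join(root, path))
        except FileNotFoundError:
            missing.append(path)
            continue
        # A size or mtime mismatch means the file *may* be modified; a
        # real implementation would then compare file contents.
        if st.st_size != size or int(st.st_mtime) != mtime:
            modified.append(path)
    unknown = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            if rel not in seen:
                unknown.append(rel)
    return modified, missing, sorted(unknown)
}}}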
2.1.2. Nodemap
Many operations need to validate that some nodes are in the repository (loading bookmarks, discovery, revsets, etc.). Building this mapping for each invocation of Mercurial gets slow for repositories with millions of commits.
Get a persistent {node → rev} mapping
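One possible shape for such a persistent mapping, sketched under the assumption of fixed-width records (20-byte node, 4-byte revision) sorted by node so the file can be mmap'ed and binary-searched without parsing the whole revlog index. The actual on-disk format chosen for Mercurial differs; this only illustrates why lookups become cheap with no startup cost:

{{{#!python
import struct

RECORD = struct.Struct('>20sI')  # 20-byte node + big-endian u32 rev

def write_nodemap(path, node_to_rev):
    with open(path, 'wb') as f:
        for node in sorted(node_to_rev):
            f.write(RECORD.pack(node, node_to_rev[node]))

def lookup(data, node):
    """Binary-search the packed records; `data` may be bytes or an mmap."""
    lo, hi = 0, len(data) // RECORD.size
    while lo < hi:
        mid = (lo + hi) // 2
        rec_node, rev = RECORD.unpack_from(data, mid * RECORD.size)
        if rec_node == node:
            return rev
        if rec_node < node:
            lo = mid + 1
        else:
            hi = mid
    return None
}}}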
2.1.3. Branchmap
Computing the branch map from scratch can be very expensive, so we use a cache instead. However, this cache can end up being quite expensive too:
- it can be outdated or stale
- making sure it and its content are valid can be pretty expensive (see the Nodemap section right above this one)
- it grows as large as the number of heads in the repository, and it grows even for closed branches
- it affects all repositories, even the ones that do not use named branches (or use very few of them)
- each update requires the full file to be rewritten
So there are multiple things we could improve:
- removing the need for expensive validation by moving this into an index space (see ComputedIndexPlan)
- making updates of the in-memory data cheaper by using topological head information
- making updates of the on-disk file incremental (similar to the treedirstate, or to the persistent nodemap)
- making mmap an option for accessing the data (see MMapPlan)
- having a more compact representation (leveraging the topological head information)
Better repoview and phases performance has already reduced the pain.
For example, we could stop explicitly listing the branch heads that are also topological heads and instead derive them directly from the topological heads. Storing the number of heads (of each type) in the on-disk cache could still be useful (in short, there is good progress to be made with a small amount of work).
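A minimal sketch of that compaction idea, assuming a {rev: (p1, p2)} parent map and a {branch: set_of_heads} table (both hypothetical shapes, not the actual branchmap format): only branch heads that are not topological heads need to be written out, since the rest can be recovered from the topological heads.

{{{#!python
def topological_heads(parents):
    """parents: {rev: (p1, p2)}; a head is a rev that is nobody's parent."""
    heads = set(parents)
    for p1, p2 in parents.values():
        heads.discard(p1)
        heads.discard(p2)
    return heads

def compact_branch_heads(branch_heads, parents):
    """Keep only the branch heads that cannot be derived topologically."""
    topo = topological_heads(parents)
    return {branch: heads - topo for branch, heads in branch_heads.items()}
}}}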
2.1.4. Manifest Access
Many operations require manifest access, so it is important for this computation to be fast. Some work has been done in this area (a sketch of the caching idea follows the list below):
- sparse-revlog fixes pathological delta chains
- a handful of recently used manifests are now cached
- tree manifests exist, but are still experimental
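The manifest caching mentioned above amounts to a small LRU cache; here is an illustrative version (not Mercurial's actual cache class), where `build` stands for the expensive reconstruction of a manifest from its delta chain:

{{{#!python
from collections import OrderedDict

class ManifestLRU:
    """Keep a handful of recently used manifests in memory."""

    def __init__(self, maxsize=4):
        self.maxsize = maxsize
        self._cache = OrderedDict()

    def get(self, node, build):
        """Return the manifest for `node`, building it only on a miss."""
        if node in self._cache:
            self._cache.move_to_end(node)  # mark as most recently used
            return self._cache[node]
        value = build(node)
        self._cache[node] = value
        if len(self._cache) > self.maxsize:
            self._cache.popitem(last=False)  # evict least recently used
        return value
}}}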
2.1.5. Copy Tracing
In some situations, using the copy information in a repository can be very slow. A new algorithm based on precomputed information is in the works (a toy version of the changeset-centric idea follows the list below).
- correct changeset-centric copy tracing algorithm (worked, now broken for merges)
- efficient algorithm (in progress)
- flexible storage (we now have sidedata)
- efficient storage (waiting on the efficient algorithm to assess the needs)
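A toy version of the changeset-centric idea, ignoring merges (which, as noted above, are exactly the hard part): walk the changesets from base to target and compose their recorded {new_name: old_name} renames, instead of consulting every filelog.

{{{#!python
def trace_copies(changesets):
    """changesets: iterable of {new_name: old_name} rename dicts,
    oldest changeset first. Returns {name_in_target: name_in_base}."""
    copies = {}
    for renames in changesets:
        for new, old in renames.items():
            # Chain through earlier renames: if `old` itself came from
            # some file in the base, keep pointing at that original.
            copies[new] = copies.pop(old, old)
    return copies

# Example: 'a' renamed to 'b', then 'b' renamed to 'c'
assert trace_copies([{'b': 'a'}, {'c': 'b'}]) == {'c': 'a'}
}}}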
2.2. Rust
Various parts of Mercurial are getting a Rust implementation. This improves performance and safety. At some point, we should be able to perform some commands without ever running Python code. See OxidationPlan for details.
2.3. Exchange
2.3.1. Discovery
Discovery is an important step of any push or pull operation. It is especially a problem for repositories with many heads (thousands). The known pathological cases have been solved; however, we could reduce the base time further with more work (a toy model of the sampling approach follows the list below):
- algorithms reducing the number of round-trips
- Rust implementation of the graph-related computation
- persistent nodemap to reduce server-side query cost
- using an abstraction to avoid exchanging the nodes of all heads
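A toy model of why sampling keeps the round-trip count low (this mirrors the general shape of Mercurial's setdiscovery, not its actual sampling strategy): knowledge propagates through the DAG, since a node known to the server implies all its ancestors are known, and an unknown node implies all its descendants are unknown.

{{{#!python
import random

def reachable(node, edges):
    """All nodes reachable from `node` through the `edges` map."""
    stack, seen = [node], set()
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(edges.get(n, ()))
    return seen

def discover_common(nodes, parents, children, server_knows, sample_size=2):
    """server_knows(batch) -> subset the server has (one round-trip)."""
    undecided, common, missing, rounds = set(nodes), set(), set(), 0
    while undecided:
        batch = random.sample(sorted(undecided),
                              min(sample_size, len(undecided)))
        known = set(server_knows(batch))
        rounds += 1
        for n in batch:
            if n in known:
                common |= reachable(n, parents)    # ancestors known too
            else:
                missing |= reachable(n, children)  # descendants unknown
        undecided -= common | missing
    return common, missing, rounds
}}}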
2.3.2. Server-side Changegroup Performance
Servers tend to spend a lot of CPU and bandwidth computing and transferring changegroup data.
2.3.2.1. Caching bundles
The most effective way to alleviate this resource usage is by serving static, pre-generated changegroup data instead of dynamically generating it at request time. A server-side cache of changegroup data would fall into this bucket. The "clone bundles" feature, which serves initial clones from URLs, is one implementation of this, but it only addresses the initial clone case: subsequent pulls still result in significant load on the server. There is support for a "remote changegroup" bundle2 part that allows servers to advertise the URL of a pre-generated changegroup, and there is a prototype for an extension using this. A sketch of publishing such bundles follows the list below.
- manually generated clone bundles
- manually generated pull bundles
- automatically generated clone/pull bundles (prototype exists)
- fix bundlerepo layering for incoming with pull bundles
- functional pre-generated bundles for stream clones
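For reference, a sketch of how a server operator might publish such pre-generated bundles. The `.hg/clonebundles.manifest` format (one URL per line, followed by KEY=VALUE attributes such as BUNDLESPEC) is real; the URLs and file names below are placeholders. The bundles themselves can be produced with, e.g., `hg bundle --all --type zstd-v2 repo.hg`.

{{{#!python
# Placeholder URLs: replace with wherever the static bundles are hosted.
BUNDLES = [
    ('https://cdn.example.com/repo-full.zstd.hg', 'zstd-v2'),
    ('https://cdn.example.com/repo-full.gzip.hg', 'gzip-v2'),
]

def write_clonebundles_manifest(path='.hg/clonebundles.manifest'):
    """Advertise static bundles so clients fetch them instead of
    making the server compute a changegroup at clone time."""
    with open(path, 'w') as f:
        for url, spec in BUNDLES:
            f.write('%s BUNDLESPEC=%s\n' % (url, spec))
}}}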
2.3.3. Delta computation
- sparse-revlog improves delta chains and their reusability
- reduced the number of files to investigate for merges
- more deltas could be reused, avoiding computation on both the server and client side (see the sketch below)
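A sketch of the reuse condition, with an illustrative data model (not revlog's): a stored delta can be forwarded verbatim only when its base is also part of the outgoing set; otherwise something new must be computed.

{{{#!python
def emit_deltas(outgoing, deltabase, stored_delta, fulltext):
    """outgoing: revs to send; deltabase: {rev: rev the stored delta
    is against}. Yields (rev, base, payload) entries."""
    sendable = set(outgoing)
    for rev in outgoing:
        base = deltabase.get(rev)
        if base in sendable:
            # Cheap path: the client will have `base`, so the bytes
            # already stored on disk can be forwarded as-is.
            yield rev, base, stored_delta(rev)
        else:
            # Expensive path: the stored base is not being sent, so
            # recompute (here, naively, by sending the full text).
            yield rev, None, fulltext(rev)
}}}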
2.3.4. Compression
There is plenty of potential to optimize the server for changegroup generation. As of Mercurial 4.0, changegroups (with the exception of the changelog) are effectively collections of single delta chains per revlog. For generaldelta repos, many deltas on disk are reused. However, the server still needs to decompress the revlog entries on disk to obtain the raw deltas, then recompress them as part of the changegroup compression context. Furthermore, if there are multiple delta chains in the revlog, the server will need to compute a new delta for those entries. This contributes to overhead, especially the decompression and recompression. Switching away from zlib for both revlog storage and wire protocol compression will help tremendously, as zstd can be 2x more efficient in both decompression and compression (see the comparison after the list below).
- zstd storage is now officially supported
- zstd is not the default
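A quick way to see the difference on a repetitive, changegroup-like payload. This assumes the third-party `zstandard` package (the bindings Mercurial vendors) is installed; the payload is synthetic.

{{{#!python
import zlib
import zstandard  # pip install zstandard

data = b'a fairly repetitive changegroup-like payload\n' * 5000

zlib_out = zlib.compress(data, 6)
zstd_out = zstandard.ZstdCompressor(level=3).compress(data)

print('original: %d bytes' % len(data))
print('zlib -6 : %d bytes' % len(zlib_out))
print('zstd -3 : %d bytes' % len(zstd_out))

# Round-trip both to show decompression works as expected.
assert zlib.decompress(zlib_out) == data
assert zstandard.ZstdDecompressor().decompress(zstd_out) == data
}}}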
2.3.5. Skipping the decompression/compression stage
While server efficiency could be increased by increasing the efficiency of compression, it would be better to avoid compression altogether. There exists a "stream clone" feature that essentially does a file copy of revlogs from server to client. However, this only applies to initial clones. It should be possible to extend this feature to subsequent pulls: instead of transferring a changegroup with on-the-fly computed delta chains, the server would transfer the raw data in its revlogs, including compression. This feature would not be suitable for all environments, as the transfer size would likely increase and clients would need to support, and effectively inherit, the settings of the server. However, it would substantially reduce server-side CPU requirements.
- support for transparently sending compressed deltas directly from disk (a conceptual sketch follows)
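A conceptual sketch of why this is cheap on the server: the store is walked and raw, already-compressed bytes are sent, with no delta or compression work at request time. This is not the actual wire protocol; extending the idea to pulls would mean sending only the bytes appended to each file since the client's last known state.

{{{#!python
import os

def stream_store(store_path):
    """Yield (relative_name, payload) for every file under .hg/store."""
    for dirpath, _dirs, files in os.walk(store_path):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, store_path)
            with open(full, 'rb') as f:
                yield rel, f.read()  # raw revlog bytes, no recompression
}}}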
2.4. MonoRepo Scaling
Extensions like narrow and remotefilelog are useful to see a subset of a huge monorepository as a smaller repository. Moving them into core would help their adoption and usage.