Note: This page is primarily intended for developers of Mercurial.

Performance tracking infrastructure

Status: In production

Main proponents: Pierre-YvesDavid PhilippePepiot RaphaëlGomes

Provide a continuous integration infrastructure to measure and prevent performance regressions in Mercurial.

  • Code: https://foss.heptapod.net/mercurial/scmperf/
  • Data: http://perf.octobus.net/
  • Atom feed of regressions: http://perf.octobus.net/regressions.xml
  • Discussion on devel list: http://marc.info/?t=145863695000002

1. Goal

Mercurial's code changes fast, and we must detect and prevent performance regressions as soon as possible.

  • Automatic execution of performance tests on a given Mercurial revision
  • Store the performance results in a database
  • Expose the performance results in a web application (with graphs, reports, dashboards etc.)
  • Provide some regression detection notifications

2. Metrics

We already have code that produces performance metrics:

  • Commands from the perf extension in contrib/perf.py
  • Revset performance tests in contrib/revsetbenchmarks.py
  • Unit test execution time

Another idea is to produce metrics from annotated portions of unit test execution time.

These metrics will be used (after some refactoring of some of the tools that produce them) as performance metrics, but we may need more written specifically for the purpose of performance regression detection.

3. Tool selection

After evaluating several tools (https://hg.logilab.org/review/hgperf/raw-file/5aee29f2aee0/docs/tools.html) we chose to use Airspeed velocity (asv), which already handles most of our needs.

This tool aims at benchmarking Python packages over their lifetime. It is mainly a command line tool, asv, that runs a series of benchmarks (described in a JSON configuration file) and produces a static HTML/JS report.
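
For illustration, here is a minimal sketch of what such a benchmark module could look like (the module name, repository path and revset expressions are hypothetical, not part of the actual scmperf suite): asv discovers methods whose names start with time_ and runs them once per value listed in params.

{{{
# benchmarks/revsets.py -- hypothetical example, not the real scmperf suite.
import os
import subprocess


class RevsetSuite:
    """Time 'hg log -r <revset>' against a reference repository."""

    # asv runs the benchmark once per parameter value and plots one
    # curve per value in the HTML report.
    params = ["all()", "author(mpm)", "last(all(), 100)"]
    param_names = ["revset"]

    def setup(self, revset):
        # Path to a reference repository (hg, mozilla-central, ...);
        # assumed to be provided by the environment running the suite.
        self.repo = os.environ.get("REFERENCE_REPO", ".")

    def time_revset(self, revset):
        # Methods prefixed with time_ are timed by asv.
        subprocess.check_call(
            ["hg", "log", "-R", self.repo, "-r", revset, "-T", "{node}\n"],
            stdout=subprocess.DEVNULL,
        )
}}}

Running asv run followed by asv publish then produces the static HTML/JS report mentioned above.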

When running a benchmark suite, ASV takes care of cloning or pulling the source repository into a virtualenv and running the configured tasks in that virtualenv.

Results of each benchmark execution are stored in a "database" (consisting of JSON files). This database is used to produce evolution plots of the time required to run a test (or any other metric; out of the box, asv supports four types of benchmark: timing, memory, peak memory and tracking), and to run the regression detection algorithms.
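
As a rough illustration of these four kinds (the function bodies below are placeholders, not real Mercurial measurements), asv picks the benchmark type from the function name prefix:

{{{
# Illustrative placeholders only -- asv selects the benchmark type from
# the function name prefix.

def time_build_list():
    # time_*: asv measures the wall-clock time of the call.
    [i * i for i in range(10000)]

def mem_changeset_cache():
    # mem_*: asv records the size of the object returned.
    return {i: str(i) for i in range(1000)}

def peakmem_load_data():
    # peakmem_*: asv records the peak memory used while the function runs.
    return [bytes(1024) for _ in range(1000)]

def track_number_of_revisions():
    # track_*: the returned number itself is stored as the metric
    # (this is how arbitrary metrics can be fed into the database).
    return 51234
}}}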

One key feature of this tool is that it is very easy for every developer to use it in their own development environment. For example, it provides an asv compare command to compare the results of any two revisions.

4. Q & A

  • Q: What revisions of the Mercurial source code should we run the performance regression tool on? (public changesets on the main branch only? Which branches? ...)

    A: Let's focus on public changesets for now and on the two branches (default and stable).

  • Q: How do we manage the non-linear structure of a Mercurial history?

    A: The Mercurial repository is mostly linear as long as only one branch is concerned; however, we don't (and have no reason to) enforce it. For now the plan is to follow the first parent of merge changesets to enforce the linearity of each branch.

5. Plan

  • Fix mercurial branch handling in ASV. OK https://github.com/spacetelescope/asv/pull/394

  • Use revision instead of commit date as X axis in ASV. OK https://github.com/spacetelescope/asv/pull/429

  • Provide Ansible configuration to deploy the tool in the existing buildbot infrastructure and expose the results on a public website when new public changesets are pushed on the main branches. IN PROGRESS

  • Parametrize benchmarks against multiple reference repositories (hg, mozilla-central, ...). OK

  • Parametrize revset benchmarks with variants (first, last, min, max, ...). OK

  • Implement a notification system in ASV. OK https://github.com/spacetelescope/asv/issues/397

  • Add unit test execution time as a benchmark (/!\ we must handle the case where the test itself has changed /!\)

  • Write an annotation system for unit tests and get execution time metrics for the annotated portions

  • Write a system of scenario-based benchmarks. They should be written as Mercurial tests (with annotations) and might be kept in a dedicated repository

  • Track both improvements and regressions? A change, especially on revsets, can have a positive or negative impact on multiple benchmarks; having a global view of this information could be a good feature. OK https://github.com/spacetelescope/asv/pull/437

  • Integrate the benchmark suite in the main repository, allowing developers to run asv compare locally

  • Write benchmarks à la .t tests (against a small created repository and by calling contrib/perf commands); a rough sketch follows this list.
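
To make the scenario-style ideas above more concrete, here is a rough sketch (hypothetical file name and scenario, not an agreed design) of a benchmark that builds a small throwaway repository once and then times an hg command against it:

{{{
# benchmarks/scenarios.py -- hypothetical sketch of a scenario-style
# benchmark; the file name and scenario are illustrative only.
import os
import subprocess
import tempfile


def hg(*args):
    # Run hg with a quiet, reproducible configuration.
    env = dict(os.environ, HGPLAIN="1", HGUSER="scmperf benchmark")
    subprocess.check_call(["hg"] + list(args), env=env,
                          stdout=subprocess.DEVNULL)


class SmallRepoScenario:
    """Create a small throwaway repository, then time commands on it."""

    def setup_cache(self):
        # setup_cache is run once by asv; its return value is passed as
        # first argument to each benchmark.  The temporary directory is
        # not cleaned up in this sketch.
        repo = tempfile.mkdtemp(prefix="scmperf-demo-")
        hg("init", repo)
        for i in range(50):
            with open(os.path.join(repo, "file%d.txt" % i), "w") as f:
                f.write("content %d\n" % i)
            hg("commit", "-R", repo, "-A", "-m", "commit %d" % i)
        return repo

    def time_status(self, repo):
        # The scenario step being measured; a contrib/perf command such
        # as "hg perfstatus" could be called here instead by loading the
        # extension with --config extensions.perf=<path to contrib/perf.py>.
        hg("status", "-R", repo)
}}}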

6. Code

The current work-in-progress code can be cloned with hg clone -b default https://heptapod.octobus.net/octobus-public/scmperf


CategoryDeveloper CategoryNewFeatures
