Note:

This page is primarily intended for developers of Mercurial.

Performance tracking infrastructure

Status: Project

Main proponents: Pierre-YvesDavid PhilippePepiot

Provide a continuous integration infrastructure to measure and prevent performance regressions in Mercurial.

1. Goal

The Mercurial code base changes fast, and we must detect and prevent performance regressions as soon as possible.

  • Automatic execution of performance tests on a given Mercurial revision
  • Store the performance results in a database
  • Expose the performance results in a web application (with graphs, reports, dashboards etc.)
  • Provide some regression detection notifications

2. Metrics

We already have code that produces performance metrics:

  • Commands from the perf extension in contrib/perf.py
  • Revset performance tests in contrib/revsetbenchmarks.py
  • Unit test execution time

Another idea is to produce metrics from the execution time of annotated portions of unit tests.

These metrics will be used (after some refactoring of the tools that produce them) as performance metrics, but we may also need some written specifically for the purpose of performance regression detection.
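
For illustration, collecting one such metric from a script could look roughly like the sketch below; the paths are placeholders, while hg, contrib/perf.py and its perfrevset command are real:

    import subprocess

    HG = "hg"                            # assumed to be on $PATH
    REPO = "/path/to/a/reference/repo"   # assumed pre-existing local clone
    PERF = "/path/to/mercurial/contrib/perf.py"

    def perf_metric(command, *args):
        # Run one contrib/perf.py command and return its raw timing output.
        result = subprocess.run(
            [HG, "-R", REPO, "--config", "extensions.perf=" + PERF, command]
            + list(args),
            check=True, capture_output=True, text=True)
        return result.stdout.strip()

    # e.g. print(perf_metric("perfrevset", "all()"))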

3. Tool selection

After evaluating several tools, we chose Airspeed Velocity (ASV), which already handles most of our needs.

This tool aims at benchmarking Python packages over their lifetime. It is mainly a command line tool, asv, that runs a series of benchmarks (described in a JSON configuration file) and produces a static HTML/JS report.

When running a benchmark suite, ASV takes care of cloning/pulling the source repository into a virtualenv and running the configured tasks in that virtualenv.

Results of each benchmark execution are stored in a "database" (consisting of JSON files). This database is used to produce evolution plots of the time required to run a test (or any other metric; out of the box, asv supports 4 types of benchmark: timing, memory, peak memory and tracking), and to run the regression detection algorithms.
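
As a rough illustration of what an ASV timing benchmark could look like, here is a minimal sketch; the class, revsets, repository path and timed command are our own placeholders, not the actual hgperf suite:

    # benchmarks/revset.py -- minimal sketch of an ASV timing benchmark
    import subprocess

    class RevsetSuite:
        # ASV picks up every method named time_* (or mem_*, peakmem_*,
        # track_*) and runs it once per value listed in params.
        params = ["all()", "tip", "author(mpm)"]
        param_names = ["revset"]

        repo = "/path/to/reference/repo"  # assumed pre-cloned benchmark repo

        def time_log_revset(self, revset):
            # Time a full hg invocation (includes startup overhead).
            subprocess.run(
                ["hg", "-R", self.repo, "log", "-r", revset, "-T", "."],
                check=True, stdout=subprocess.DEVNULL)

Running asv run over a range of revisions produces one result file per revision, from which the plots and the regression detection are computed.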

One key feature of this tool is that it is very easy for every developer to use it in their own development environment. For example, it provides an asv compare command that allows comparing the results of any two revisions.

A demo built with a patched ASV can be seen here: https://hg.logilab.org/review/hgperf/raw-file/454c2bd71fa4/index.html#regressions?sort=3&dir=desc

4. Q & A

  • Q: What revisions of the Mercurial source code should we run the performance regression tool on? (public changesets on the main branch only? Which branches? ...).

    A: Let's focus on public changesets for now, and on the two branches (default and stable).

  • Q: How do we manage the non-linear structure of the Mercurial history?

    A: The Mercurial repository is mostly linear as long as only one branch is concerned; however, we don't (and have no reason to) enforce it. For now the plan is to follow the first parent of merge changesets to enforce the linearity of each branch, as sketched below.
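
A minimal sketch of that first-parent walk, assuming a pre-existing local clone; the helper names are ours, while the hg invocation and the {node}/{p1node} template keywords are standard:

    import subprocess

    def _template(repo, rev, template):
        return subprocess.run(
            ["hg", "-R", repo, "log", "-r", rev, "-T", template],
            check=True, capture_output=True, text=True).stdout

    def first_parent_line(repo, head="default"):
        # Changesets reachable from `head` through first parents only,
        # oldest first: the linear series the benchmarks would run on.
        nodes = []
        node = _template(repo, head, "{node}")
        while node and node != "0" * 40:   # stop at the null revision
            nodes.append(node)
            node = _template(repo, node, "{p1node}")
        return list(reversed(nodes))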

5. Plan

  • Fix mercurial branch handling in ASV. OK https://github.com/spacetelescope/asv/pull/394

  • Use revision instead of commit date as X axis in ASV. OK https://github.com/spacetelescope/asv/pull/429

  • Provide ansible configuration to deploy the tool in the existing buildbot infrastructure and expose the results on a public website when new public changesets are pushed to the main branches. IN PROGRESS

  • Parametrize benchmarks against multiple reference repositories (hg, mozilla-central, ...). OK

  • Parametrize revset benchmarks with variants (first, last, min, max, ...). OK

  • Implement a notification system in ASV. OK https://github.com/spacetelescope/asv/issues/397

  • Add unit test execution time as a benchmark ( /!\ we must handle the case where the test itself has changed /!\ )

  • Write an annotation system in unit tests and collect execution-time metrics for the annotated portions (see the sketch after this list)

  • Write a system of scenario-based benchmarks. They should be written as Mercurial tests (with annotations) and might be kept in a dedicated repository

  • Track both improvements and regressions? A change, especially on revsets, can have a positive or negative impact on multiple benchmarks; having a global view of this information could be a good feature. OK https://github.com/spacetelescope/asv/pull/437

  • Integrate the benchmark suite in the main repository, allowing developers to run asv compare locally

  • Write benchmarks à la .t tests (against a small, freshly created repository and by calling contrib/perf commands).
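
For the annotation item above, a purely hypothetical sketch of what timing an annotated portion of a test could look like; nothing like this exists in the Mercurial test suite today:

    import time
    from contextlib import contextmanager

    @contextmanager
    def timed_section(name, results):
        # Record the wall-clock time spent inside an annotated block.
        start = time.perf_counter()
        try:
            yield
        finally:
            results[name] = time.perf_counter() - start

    metrics = {}
    with timed_section("clone-large-repo", metrics):
        pass  # ...the portion of the test whose duration we want to track...
    # metrics could then be exported, e.g. as an ASV track_* benchmark value.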

6. Code

The current work-in-progress code can be cloned with hg clone -b default https://hg.logilab.org/review/hgperf


CategoryDeveloper CategoryNewFeatures
