[auto] set use_metas_eagerly_in_conv_on_closed_terms = true by mrhaandi · Pull Request #16293 · rocq-prover/rocq · GitHub

[auto] set use_metas_eagerly_in_conv_on_closed_terms = true #16293


Merged
merged 1 commit on Sep 23, 2022

Conversation

mrhaandi (Contributor) commented Jul 6, 2022

Setting this flag in auto (and eauto) brings resolution slightly closer to simple (e)apply.
This behavior is also closer to that of typeclasses eauto.

Fixes #16323
Fixes #16062

Currently, debug auto lies to you, stating that simple apply failed.

Inductive test : unit -> unit -> Prop :=
  | test_intro b : test b (id b).

#[local] Hint Resolve test_intro | 0 (test tt tt) : db.

Goal test tt tt.
Proof.
  Succeed (solve [simple apply test_intro]).
  Fail (solve [debug auto with db]).
(* debug auto: 
* assumption. (*fail*)
* intro. (*fail*)
* simple apply test_intro (in db). (*fail*) *)

Also, auto may misleadingly report that exact failed #16323.

The present PR makes auto more powerful, which can sometimes break goal management in scripts of the form tac; auto.
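
The failure mode can be sketched with a toy predicate and hint (the names good and good_intro below are hypothetical, not taken from any affected development):

```coq
Inductive good : nat -> Prop := good_intro n : good n.

(* Fragile script: it relies on auto proving "n = n" but
   leaving "good n" open for the next tactic. *)
Goal forall n : nat, n = n /\ good n.
Proof.
  intro n; split; auto.
  apply good_intro.
Qed.

(* Once auto becomes stronger, e.g. via an extra hint, ... *)
#[local] Hint Resolve good_intro : core.

Goal forall n : nat, n = n /\ good n.
Proof.
  intro n; split; auto.
  (* ... a trailing "apply good_intro." would now fail:
     there is no goal left for it to target. *)
Qed.
```

Explicit goal selectors, e.g. intro n; split; [ auto | apply good_intro ]., avoid the problem: the script keeps working no matter how strong auto becomes.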

  • Added / updated test-suite.
  • Added changelog.

ppedrot (Member) commented Jul 6, 2022

I don't think this solves the linked bug, because there are still discrepancies between (e)auto and simple (e)apply.

mrhaandi (Contributor, Author) commented Jul 6, 2022

I don't think this solves the linked bug, because there are still discrepancies between (e)auto and simple (e)apply.

At least, regarding the linked bug, the following works as expected:

Inductive test : nat -> nat -> Prop :=
  | test_intro n : test n (n + 1).

#[local] Hint Immediate test_intro : testdb_immediate.
#[local] Hint Resolve test_intro : testdb_resolve1.
#[local] Hint Resolve test_intro | 1 (test _ _) : testdb_resolve2.
#[local] Hint Extern 1 (test _ _) => simple apply test_intro; solve [ trivial ] : testdb_extern.

Goal test 0 1.
Proof.
  Succeed (simple apply test_intro; solve [ trivial ]).
  Succeed (solve [eauto with testdb_extern]).
  Succeed (solve [eauto with testdb_resolve2]).
  Succeed (solve [eauto with testdb_resolve1]).
  Succeed (solve [eauto with testdb_immediate]).
  Succeed (solve [eauto using test_intro]).
Abort.

Also, the debug output of auto is as expected:

(* debug auto:
* assumption. (*fail*)
* intro. (*fail*)
* simple apply test_intro (in testdb_resolve1). (*success*) *)

Of course, there are still remaining (possibly justified) discrepancies between (e)auto and simple (e)apply.

coqbot-app (bot) commented Jul 6, 2022

🏁 Bench results:

┌─────────────────────────────┬─────────────────────────┬───────────────────────────────────────┬───────────────────────────────────────┬─────────────────────────┬───────────────────┐
│                             │      user time [s]      │              CPU cycles               │           CPU instructions            │  max resident mem [KB]  │    mem faults     │
│                             │                         │                                       │                                       │                         │                   │
│        package_name         │   NEW      OLD    PDIFF │      NEW             OLD        PDIFF │      NEW             OLD        PDIFF │   NEW      OLD    PDIFF │ NEW  OLD   PDIFF  │
├─────────────────────────────┼─────────────────────────┼───────────────────────────────────────┼───────────────────────────────────────┼─────────────────────────┼───────────────────┤
│                    coq-core │  145.99   146.70  -0.48 │   459951158535    461452990381  -0.33 │   483654148463    483694099598  -0.01 │  287096   287308  -0.07 │   1    0      nan │
│              coq-coquelicot │   41.35    41.55  -0.48 │   172159046851    172276856838  -0.07 │   215469970708    215552629191  -0.04 │  748648   748532   0.02 │   2    0      nan │
│                  coq-geocoq │  861.36   864.53  -0.37 │  3630315508023   3638230768607  -0.22 │  5230840020293   5230232674570   0.01 │  975996   980848  -0.49 │   0    0      nan │
│                    coq-hott │  196.58   197.13  -0.28 │   822797845863    822511968613   0.03 │  1205375388875   1204771612492   0.05 │  583476   583600  -0.02 │   0    0      nan │
│                coq-rewriter │  400.16   401.12  -0.24 │  1686440552276   1688840497882  -0.14 │  2525188046772   2524877626311   0.01 │  985672   985660   0.00 │   0    0      nan │
│      coq-mathcomp-character │   81.55    81.73  -0.22 │   343113042469    343586760195  -0.14 │   480953210909    480689200184   0.05 │  719964   719648   0.04 │   0    0      nan │
│       coq-engine-bench-lite │  215.03   215.29  -0.12 │   900639721465    902870366190  -0.25 │  1402320974469   1403178958495  -0.06 │ 1097492  1097444   0.00 │   0    0      nan │
│                coq-bedrock2 │  485.05   485.35  -0.06 │  2058172542385   2057675541046   0.02 │  3307831214995   3307299297517   0.02 │ 1889156  1889156   0.00 │   0    0      nan │
│      coq-mathcomp-odd-order │  604.92   605.14  -0.04 │  2539687016052   2542366317574  -0.11 │  4053376787915   4052784068981   0.01 │  864940   864608   0.04 │   0    0      nan │
│               coq-fourcolor │ 1743.22  1743.51  -0.02 │  7478698773455   7477640813541   0.01 │ 12208278891933  12209844537658  -0.01 │  724168   724472  -0.04 │   0    0      nan │
│      coq-mathcomp-ssreflect │   30.80    30.79   0.03 │   129392103066    129324215166   0.05 │   154817975722    154852377691  -0.02 │  531440   531852  -0.08 │   0    0      nan │
│ coq-rewriter-perf-SuperFast │ 1100.11  1099.58   0.05 │  4651308810593   4667285188946  -0.34 │  7179262026170   7181758379099  -0.03 │ 1214816  1119372   8.53 │  38    6   533.33 │
│                 coq-unimath │ 6227.97  6224.95   0.05 │ 26096106749023  26112501768597  -0.06 │ 45996132632767  46000508905607  -0.01 │ 4133452  4135400  -0.05 │   0    4  -100.00 │
│                   coq-verdi │   57.37    57.31   0.10 │   240736713092    240283490612   0.19 │   333496426527    333592458447  -0.03 │  777432   775528   0.25 │  20    0      nan │
│            coq-math-classes │  113.60   113.45   0.13 │   474562005032    475124931369  -0.12 │   634480410542    634460375445   0.00 │  505720   505956  -0.05 │   0    0      nan │
│       coq-mathcomp-solvable │   98.54    98.31   0.23 │   414676307568    413450563013   0.30 │   586500560790    586127692319   0.06 │  651512   648140   0.52 │   0    0      nan │
│  coq-performance-tests-lite │  986.18   983.58   0.26 │  4163373719048   4149563142942   0.33 │  6388333879839   6389676894130  -0.02 │ 2326576  2326612  -0.00 │   0    0      nan │
│                 coq-coqutil │   41.12    41.00   0.29 │   172141968664    171442186698   0.41 │   228257303923    227942392360   0.14 │  586336   588136  -0.31 │   0    0      nan │
│          coq-mathcomp-field │  118.92   118.51   0.35 │   499350773051    498363669149   0.20 │   744582667618    744699074790  -0.02 │  639048   638732   0.05 │   0    0      nan │
│       coq-mathcomp-fingroup │   26.71    26.57   0.53 │   112416122068    111786526661   0.56 │   151951293512    151842269007   0.07 │  487996   487056   0.19 │   0    0      nan │
│                 coq-bignums │   33.69    33.51   0.54 │   141636530341    141065416678   0.40 │   183555052468    183622464767  -0.04 │  469332   469240   0.02 │   0    0      nan │
│                  coq-stdlib │  533.79   530.79   0.57 │  1718423721572   1709267902100   0.54 │  1514542255978   1513946459247   0.04 │  698764   699556  -0.11 │   0    0      nan │
│        coq-mathcomp-algebra │   73.94    73.01   1.27 │   310682313782    307253551717   1.12 │   401400999713    394629702261   1.72 │  553996   553636   0.07 │   0    0      nan │
└─────────────────────────────┴─────────────────────────┴───────────────────────────────────────┴───────────────────────────────────────┴─────────────────────────┴───────────────────┘

🐢 Top 25 slow downs
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                               TOP 25 SLOW DOWNS                                                               │
│                                                                                                                                               │
│   OLD       NEW      DIFF    %DIFF     Ln                     FILE                                                                            │
├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 161.6840  163.2540  1.5700     0.97%   962  coq-performance-tests-lite/src/fiat_crypto_via_setoid_rewrite_standalone.v.html                   │
│ 140.0290  140.8280  0.7990     0.57%  1104  coq-unimath/UniMath/Bicategories/PseudoFunctors/Preservation/BiadjunctionPreserveInserters.v.html │
│  47.7310   48.2730  0.5420     1.14%   222  coq-performance-tests-lite/PerformanceExperiments/rewrite_lift_lets_map.v.html                    │
│   0.0120    0.5400  0.5280  4400.00%  1904  coq-mathcomp-algebra/mathcomp/algebra/mxpoly.v.html                                               │
│  35.4620   35.9860  0.5240     1.48%  1375  coq-fourcolor/theories/cfmap.v.html                                                               │
│ 162.3980  162.8960  0.4980     0.31%   992  coq-performance-tests-lite/src/fiat_crypto_via_setoid_rewrite_standalone.v.html                   │
│   7.1410    7.5410  0.4000     5.60%   341  coq-unimath/UniMath/CategoryTheory/DisplayedCats/Equivalences.v.html                              │
│  20.1660   20.5130  0.3470     1.72%    29  coq-engine-bench-lite/coq/PerformanceDemos/pattern.v.html                                         │
│  31.7410   32.0880  0.3470     1.09%   310  coq-unimath/UniMath/Bicategories/PseudoFunctors/Preservation/BiadjunctionPreserveProducts.v.html  │
│   5.5830    5.9260  0.3430     6.14%  1468  coq-unimath/UniMath/RealNumbers/NonnegativeRationals.v.html                                       │
│  32.9710   33.2890  0.3180     0.96%  1157  coq-unimath/UniMath/Bicategories/DisplayedBicats/Cartesians.v.html                                │
│   0.0080    0.3220  0.3140  3925.00%  1949  coq-mathcomp-algebra/mathcomp/algebra/mxpoly.v.html                                               │
│   1.9060    2.1990  0.2930    15.37%  1178  coq-unimath/UniMath/Bicategories/MonoidalCategories/ActionBasedStrongFunctorsMonoidal.v.html      │
│  73.8080   74.0950  0.2870     0.39%   641  coq-unimath/UniMath/SubstitutionSystems/LiftingInitial_alt.v.html                                 │
│  64.3290   64.5970  0.2680     0.42%   587  coq-unimath/UniMath/SubstitutionSystems/SignatureExamples.v.html                                  │
│  28.7690   29.0360  0.2670     0.93%    10  coq-fourcolor/theories/job384to398.v.html                                                         │
│  40.0500   40.3080  0.2580     0.64%    10  coq-fourcolor/theories/job254to270.v.html                                                         │
│  55.4940   55.7420  0.2480     0.45%   464  coq-unimath/UniMath/Bicategories/PseudoFunctors/Examples/CompositionPseudoFunctor.v.html          │
│  30.1560   30.3860  0.2300     0.76%    10  coq-fourcolor/theories/job554to562.v.html                                                         │
│  32.7360   32.9620  0.2260     0.69%    10  coq-fourcolor/theories/job499to502.v.html                                                         │
│  49.1420   49.3450  0.2030     0.41%   633  coq-unimath/UniMath/SubstitutionSystems/Signatures.v.html                                         │
│  38.2670   38.4620  0.1950     0.51%    10  coq-fourcolor/theories/job323to383.v.html                                                         │
│  32.1640   32.3520  0.1880     0.58%    10  coq-fourcolor/theories/job279to282.v.html                                                         │
│  48.6670   48.8480  0.1810     0.37%   235  coq-rewriter-perf-SuperFast/src/Rewriter/Rewriter/Examples/PerfTesting/LiftLetsMap.v.html         │
│   7.8410    8.0190  0.1780     2.27%   210  coq-unimath/UniMath/Bicategories/PseudoFunctors/Examples/Reindexing.v.html                        │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

🐇 Top 25 speed ups
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                               TOP 25 SPEED UPS                                                                │
│                                                                                                                                               │
│   OLD       NEW      DIFF    %DIFF    Ln                     FILE                                                                             │
├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 180.5380  179.6470  -0.8910  -0.49%   706  coq-bedrock2/bedrock2/src/bedrock2Examples/lightbulb.v.html                                        │
│  32.8100   32.3580  -0.4520  -1.38%    10  coq-fourcolor/theories/job291to294.v.html                                                          │
│  18.3590   17.9370  -0.4220  -2.30%   251  coq-unimath/UniMath/SubstitutionSystems/CCS.v.html                                                 │
│  32.6900   32.3410  -0.3490  -1.07%    10  coq-fourcolor/theories/job618to622.v.html                                                          │
│   9.9730    9.6460  -0.3270  -3.28%   394  coq-unimath/UniMath/Bicategories/DisplayedBicats/ExamplesOfCleavings/FibrationCleaving.v.html      │
│   4.2490    3.9360  -0.3130  -7.37%   171  coq-unimath/UniMath/Bicategories/DisplayedBicats/Examples/Add2Cell.v.html                          │
│  13.9230   13.6730  -0.2500  -1.80%  2081  coq-unimath/UniMath/Bicategories/DisplayedBicats/FiberBicategory/FiberBicategory1.v.html           │
│  31.2280   30.9800  -0.2480  -0.79%    10  coq-fourcolor/theories/job231to234.v.html                                                          │
│  37.2730   37.0330  -0.2400  -0.64%    10  coq-fourcolor/theories/job107to164.v.html                                                          │
│   7.8070    7.5690  -0.2380  -3.05%    62  coq-unimath/UniMath/Bicategories/DisplayedBicats/DispBuilders.v.html                               │
│  31.8090   31.5960  -0.2130  -0.67%    10  coq-fourcolor/theories/job503to506.v.html                                                          │
│  29.1590   28.9470  -0.2120  -0.73%   208  coq-engine-bench-lite/coq/PerformanceDemos/one_step_reduction.v.html                               │
│  20.6480   20.4460  -0.2020  -0.98%    30  coq-performance-tests-lite/src/pattern.v.html                                                      │
│  29.0490   28.8520  -0.1970  -0.68%   318  coq-engine-bench-lite/coq/PerformanceDemos/one_step_reduction.v.html                               │
│  26.5890   26.3950  -0.1940  -0.73%    10  coq-fourcolor/theories/job283to286.v.html                                                          │
│  30.6130   30.4210  -0.1920  -0.63%    10  coq-fourcolor/theories/job295to298.v.html                                                          │
│  35.4870   35.3000  -0.1870  -0.53%  1650  coq-geocoq/Tarski_dev/Ch16_coordinates_with_functions.v.html                                       │
│  26.8910   26.7070  -0.1840  -0.68%    10  coq-fourcolor/theories/job219to222.v.html                                                          │
│  37.0180   36.8380  -0.1800  -0.49%    10  coq-fourcolor/theories/job563to588.v.html                                                          │
│  27.7920   27.6150  -0.1770  -0.64%    10  coq-fourcolor/theories/job511to516.v.html                                                          │
│  36.1470   35.9720  -0.1750  -0.48%  1640  coq-geocoq/Tarski_dev/Ch16_coordinates_with_functions.v.html                                       │
│   6.7080    6.5350  -0.1730  -2.58%  1121  coq-unimath/UniMath/Bicategories/Limits/Examples/SliceBicategoryLimits.v.html                      │
│  30.6210   30.4500  -0.1710  -0.56%   287  coq-unimath/UniMath/Bicategories/MonoidalCategories/Actions.v.html                                 │
│  28.8560   28.6860  -0.1700  -0.59%    10  coq-fourcolor/theories/job303to306.v.html                                                          │
│  20.9810   20.8140  -0.1670  -0.80%   580  coq-unimath/UniMath/Bicategories/PseudoFunctors/Preservation/BiadjunctionPreserveCoproducts.v.html │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

mrhaandi (Contributor, Author) commented Jul 7, 2022

The bench shows that setting use_metas_eagerly_in_conv_on_closed_terms = true does not lead to major slowdowns.
However, much of the CI breaks because of the anti-pattern tac1; auto. tac2. Since (e)auto became slightly stronger, it can close more goals, leaving tac2 with fewer (or no) goals to work on.

@RalfJung The rigorous goal management of stdpp shines here, with one exception in list.v: imap_app follows the mentioned anti-pattern. (Can be fixed easily.)

Also unfortunate: the // pattern of ssreflect benefits from this PR and solves more goals.
For example, iris/algebra/functions.discrete_fun_singleton_updateP_empty breaks because of the final ... //. apply Hg.: with a slightly stronger //, the apply Hg. has no goal left to target.
(Other than that, Iris fully compiles.)

It seems that other projects with less rigorous goal management break quite badly.

RalfJung (Contributor) commented Jul 7, 2022

Yeah, that's why we try to avoid these patterns -- less because of Coq changes and more because of other things that might change the goal.

with one exception in list.v: imap_app follows the mentioned anti-pattern. (Can be fixed easily.)

The auto doesn't even seem to do anything here currently... I'll just get rid of it.

RalfJung (Contributor) commented Jul 7, 2022

What is also unfortunate, the // pattern of ssreflect benefits from this PR and solves more goals.

Yeah that pattern is super handy when using rewrite. I wish there was a pattern for saying "solve all the side-conditions arising from the rewrite, but do not try to solve the main goal, and error if any side-condition fails to solve" -- something that reliably leaves exactly one goal behind.

mrhaandi (Contributor, Author) commented Jul 7, 2022

"solve all the side-conditions arising from the rewrite, but do not try to solve the main goal, and error if any side-condition fails to solve" -- something that reliably leaves exactly one goal behind.

What I do is rewrite !xyz; [done..|]. This is brief, reliable, and precisely what you describe.
EDIT: I agree, this syntax is difficult to chain.

RalfJung (Contributor) commented Jul 7, 2022

It is very much not brief. Before:

rewrite lemma1 // lemma2 // lemma3 // lemma4 //.

After:

rewrite lemma1; [done..|]. rewrite lemma2; [done..|]. rewrite lemma3; [done..|]. rewrite lemma4; [done..|].

Also all these symbols make it a lot more work to type than //.
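
One way to cut the typing cost is to package the selector pattern as a tactic notation. Below is a hypothetical vanilla-Ltac sketch (not part of this PR), using trivial where stdpp would use done; note that with plain Coq rewrite the side conditions come after the main goal, so the selector shape is mirrored compared to the ssreflect-style [done..|]:

```coq
Require Import Arith.

(* Hypothetical shorthand: rewrite, then require every side condition
   to be solved (here by trivial), leaving exactly the main goal. *)
Tactic Notation "rew" uconstr(l) :=
  rewrite l; [ | solve [ trivial ] .. ].

Goal forall n m : nat, n <> 0 -> (m + 0) * n / n = m.
Proof.
  intros n m Hn.
  rew Nat.div_mul.   (* side condition "n <> 0" is discharged *)
  apply Nat.add_0_r.
Qed.
```

Because every side condition must be solved and exactly one goal remains, such a notation chains reliably, unlike the bracketed selector spelled out at each step.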

coqbot-app (bot) commented Jul 11, 2022

The job library:ci-fiat_crypto_legacy has failed in allow failure mode
ping @JasonGross

@rocq-prover rocq-prover deleted a comment from coqbot-app bot Jul 11, 2022
@rocq-prover rocq-prover deleted a comment from coqbot-app bot Jul 11, 2022
@mrhaandi mrhaandi added the kind: enhancement and part: auto labels Jul 12, 2022
mrhaandi (Contributor, Author) commented Jul 12, 2022

@gares when I try to locally compile ci-analysis from coq master, I get error messages such as

Warning:
Cannot open /home/gares/COQ/master/_build/install/default/lib/coq/theories
[cannot-open-path,filesystem]
Error: Cannot find a physical path bound to logical path
Prelude with prefix Coq.

make[4]: *** [Makefile.examples.coq:791: examples/example_data_base.vo] Error 1

EDIT: Also, it requires a local opam installation of elpi, but then it also compiles its own?

@ppedrot ppedrot self-assigned this Aug 25, 2022
xavierleroy pushed a commit to AbsInt/CompCert that referenced this pull request Sep 8, 2022
Improves robustness in case of stronger (e)auto.
This is in preparation of a change in Coq that will make auto and eauto stronger
(rocq-prover/rocq#16293).
(Several subsequent comments from @coqbot-app, @mrhaandi, and @ppedrot were marked as resolved.)

@mrhaandi mrhaandi force-pushed the auto-metas branch 2 times, most recently from 952dc3e to b84fb8a Compare September 20, 2022 10:51
ppedrot (Member) commented Sep 20, 2022

Let's have a last bench, just in case. @coqbot bench

coqbot-app (bot) commented Sep 20, 2022

🏁 Bench results:

┌──────────────────────────────┬─────────────────────────┬───────────────────────────────────────┬───────────────────────────────────────┬─────────────────────────┬─────────────────┐
│                              │      user time [s]      │              CPU cycles               │           CPU instructions            │  max resident mem [KB]  │   mem faults    │
│                              │                         │                                       │                                       │                         │                 │
│         package_name         │   NEW      OLD    PDIFF │      NEW             OLD        PDIFF │      NEW             OLD        PDIFF │   NEW      OLD    PDIFF │ NEW  OLD  PDIFF │
├──────────────────────────────┼─────────────────────────┼───────────────────────────────────────┼───────────────────────────────────────┼─────────────────────────┼─────────────────┤
│       coq-mathcomp-ssreflect │   26.62    26.82  -0.75 │   119539757468    119977206354  -0.36 │   157499923893    157331492585   0.11 │  563384   564788  -0.25 │   0    0    nan │
│                 coq-bedrock2 │  399.10   401.57  -0.62 │  1775127038388   1787403176953  -0.69 │  3470379286516   3472689930368  -0.07 │ 1910044  1910052  -0.00 │   0    0    nan │
│          coq-category-theory │  767.39   770.95  -0.46 │  3480389464594   3494971428339  -0.42 │  6011684435476   6013394365177  -0.03 │  927852   932016  -0.45 │   0    0    nan │
│        coq-engine-bench-lite │  159.65   160.35  -0.44 │   679924012154    681800658437  -0.28 │  1292058647115   1302100914469  -0.77 │ 1065884   981236   8.63 │   0    0    nan │
│       coq-mathcomp-character │   68.92    69.21  -0.42 │   312583220312    314158275139  -0.50 │   479086336344    479087073436  -0.00 │  744528   746016  -0.20 │   0    0    nan │
│      coq-metacoq-safechecker │  212.55   213.38  -0.39 │   961004106523    965397469249  -0.46 │  1449135745668   1449068903458   0.00 │ 1379168  1375636   0.26 │   0    0    nan │
│            coq-iris-examples │  487.56   488.62  -0.22 │  2196859239070   2199824111871  -0.13 │  3378545539887   3377984881604   0.02 │ 1232300  1236440  -0.33 │   0    0    nan │
│           coq-mathcomp-field │   94.76    94.93  -0.18 │   430460843187    431119551475  -0.15 │   715663750781    715717840627  -0.01 │  642972   651176  -1.26 │   0    0    nan │
│                 coq-compcert │  296.67   297.08  -0.14 │  1326351967313   1328396235569  -0.15 │  2058181345572   2057564505330   0.03 │ 1128596  1126728   0.17 │   0    0    nan │
│       coq-mathcomp-odd-order │  472.75   473.08  -0.07 │  2152814564295   2155400848726  -0.12 │  3746030948073   3747035640928  -0.03 │  964164   960716   0.36 │   0    0    nan │
│ coq-fiat-crypto-with-bedrock │ 5723.94  5727.64  -0.06 │ 25762930666025  25784369585152  -0.08 │ 47052970967912  47048318565697   0.01 │ 3809304  3818372  -0.24 │   0    0    nan │
│          coq-metacoq-erasure │  183.20   183.31  -0.06 │   820495596782    820890791646  -0.05 │  1339995257594   1340013238299  -0.00 │ 1530940  1526664   0.28 │   0    0    nan │
│   coq-performance-tests-lite │  775.90   775.99  -0.01 │  3461601682563   3460204843266   0.04 │  6260051181537   6260031449373   0.00 │ 2155364  2155368  -0.00 │   0    0    nan │
│        coq-mathcomp-fingroup │   22.85    22.85   0.00 │   103221879178    103225265755  -0.00 │   152855104618    152845180658   0.01 │  496664   498012  -0.27 │   0    0    nan │
│            coq-metacoq-pcuic │  573.70   573.68   0.00 │  2582608848545   2586114160344  -0.14 │  3856457192957   3854450215530   0.05 │ 1953540  1953148   0.02 │   0    0    nan │
│                 coq-rewriter │  331.09   330.92   0.05 │  1495777426830   1494178553673   0.11 │  2520657689141   2521779590301  -0.04 │ 1003192  1003380  -0.02 │   0    0    nan │
│             coq-fiat-parsers │  343.69   343.50   0.06 │  1515023942592   1511659641969   0.22 │  2563151917907   2565246041507  -0.08 │ 3543208  3543400  -0.01 │   0    0    nan │
│                    coq-verdi │   48.29    48.20   0.19 │   216742760406    216463750619   0.13 │   333960512984    334044956959  -0.03 │  791264   790748   0.07 │   0    0    nan │
│                     coq-hott │  163.85   163.54   0.19 │   734303313539    732807096219   0.20 │  1175749192591   1175736966924   0.00 │  573508   579588  -1.05 │   0    0    nan │
│                     coq-corn │  816.12   814.51   0.20 │  3682570405423   3672337280183   0.28 │  5876556783531   5841186831311   0.61 │  854432   862480  -0.93 │   0    0    nan │
│                 coq-coqprime │   44.23    44.14   0.20 │   198301747845    196713849468   0.81 │   304934502176    304686946113   0.08 │  767192   768116  -0.12 │   0    0    nan │
│               coq-verdi-raft │  586.18   584.67   0.26 │  2661030322282   2651811108638   0.35 │  4230677275973   4229288852968   0.03 │ 1235384  1234884   0.04 │   0    0    nan │
│                coq-fourcolor │ 1513.43  1509.53   0.26 │  6675758646888   6669433787140   0.09 │ 12228142286062  12229290214755  -0.01 │  740580   739252   0.18 │   0    0    nan │
│  coq-rewriter-perf-SuperFast │  766.41   763.56   0.37 │  3421919897497   3406818389456   0.44 │  5953472306824   5952708557659   0.01 │ 1144272  1147496  -0.28 │   0    0    nan │
│                    coq-color │  230.29   228.97   0.58 │  1027435140078   1020999692889   0.63 │  1499283928126   1499746731458  -0.03 │ 1186620  1187464  -0.07 │   0    0    nan │
│                   coq-geocoq │  718.95   714.53   0.62 │  3246780245173   3233759434025   0.40 │  5306133563444   5309859322857  -0.07 │ 1109304  1106956   0.21 │   0    0    nan │
│               coq-coquelicot │   37.04    36.79   0.68 │   163835587995    162202792513   1.01 │   226556519128    226421898509   0.06 │  756152   753708   0.32 │   0    0    nan │
│        coq-mathcomp-solvable │   83.26    82.65   0.74 │   377043871634    373780392094   0.87 │   582512039688    582596306587  -0.01 │  662140   664184  -0.31 │   0    0    nan │
│                  coq-coqutil │   36.66    36.38   0.77 │   162349522141    161268791052   0.67 │   236864454114    236925920143  -0.03 │  603040   601068   0.33 │   0    0    nan │
│                     coq-core │  112.83   111.90   0.83 │   454349879161    453522813131   0.18 │   485635647213    485695077322  -0.01 │  287928   286348   0.55 │   0    0    nan │
│                  coq-bignums │   28.52    28.28   0.85 │   128049146975    127580504355   0.37 │   184346238624    184328214849   0.01 │  475816   475792   0.01 │   0    0    nan │
│             coq-math-classes │   97.65    96.81   0.87 │   437179986417    434556802309   0.60 │   643727803140    643619868956   0.02 │  512200   513124  -0.18 │   0    0    nan │
│         coq-metacoq-template │  133.06   131.55   1.15 │   588586739336    583572261159   0.86 │   972033939442    971706068710   0.03 │ 1283740  1287352  -0.28 │   0    0    nan │
│                   coq-stdlib │  426.64   420.99   1.34 │  1778385144236   1761183516703   0.98 │  1539530751998   1539641845029  -0.01 │  704940   706828  -0.27 │   0    0    nan │
│                coq-fiat-core │   58.73    57.75   1.70 │   248488045336    245630903699   1.16 │   363745429483    363879241718  -0.04 │  486760   485260   0.31 │   0    0    nan │
│         coq-mathcomp-algebra │   63.62    62.28   2.15 │   287578355667    281542838159   2.14 │   402751821249    396186598414   1.66 │  573644   575544  -0.33 │   0    0    nan │
└──────────────────────────────┴─────────────────────────┴───────────────────────────────────────┴───────────────────────────────────────┴─────────────────────────┴─────────────────┘

🐢 Top 25 slow downs
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                           TOP 25 SLOW DOWNS                                                            │
│                                                                                                                                        │
│   OLD       NEW      DIFF    %DIFF     Ln                    FILE                                                                      │
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 210.5700  212.1080  1.5380     0.73%   139  coq-fiat-crypto-with-bedrock/src/UnsaturatedSolinasHeuristics/Tests.v.html                 │
│ 120.6720  121.6310  0.9590     0.79%   153  coq-fiat-crypto-with-bedrock/src/UnsaturatedSolinasHeuristics/Tests.v.html                 │
│  10.6090   11.4750  0.8660     8.16%    66  coq-fiat-crypto-with-bedrock/src/Bedrock/End2End/X25519/MontgomeryLadder.v.html            │
│  28.0180   28.5930  0.5750     2.05%    10  coq-fourcolor/theories/job535to541.v.html                                                  │
│   6.9080    7.3640  0.4560     6.60%   355  coq-rewriter-perf-SuperFast/src/Rewriter/Rewriter/Wf.v.html                                │
│   8.6490    9.0960  0.4470     5.17%   495  coq-rewriter-perf-SuperFast/src/Rewriter/Rewriter/Wf.v.html                                │
│   7.8870    8.3120  0.4250     5.39%   672  coq-rewriter-perf-SuperFast/src/Rewriter/Rewriter/Wf.v.html                                │
│   0.0100    0.4110  0.4010  4010.00%  1904  coq-mathcomp-algebra/mathcomp/algebra/mxpoly.v.html                                        │
│  19.3360   19.7160  0.3800     1.97%    10  coq-fourcolor/theories/job550to553.v.html                                                  │
│  32.7140   33.0890  0.3750     1.15%    10  coq-fourcolor/theories/job589to610.v.html                                                  │
│  40.4950   40.8560  0.3610     0.89%   222  coq-performance-tests-lite/PerformanceExperiments/rewrite_lift_lets_map.v.html             │
│  21.0010   21.3600  0.3590     1.71%    10  coq-fourcolor/theories/job207to214.v.html                                                  │
│   7.9420    8.2850  0.3430     4.32%   672  coq-rewriter/src/Rewriter/Rewriter/Wf.v.html                                               │
│  26.1240   26.4490  0.3250     1.24%    10  coq-fourcolor/theories/job495to498.v.html                                                  │
│  23.2480   23.5310  0.2830     1.22%    10  coq-fourcolor/theories/job219to222.v.html                                                  │
│  33.9140   34.1910  0.2770     0.82%    10  coq-fourcolor/theories/job165to189.v.html                                                  │
│   0.0070    0.2590  0.2520  3600.00%  1949  coq-mathcomp-algebra/mathcomp/algebra/mxpoly.v.html                                        │
│  41.3250   41.5660  0.2410     0.58%   235  coq-rewriter/src/Rewriter/Rewriter/Examples/PerfTesting/LiftLetsMap.v.html                 │
│  34.9310   35.1700  0.2390     0.68%    10  coq-fourcolor/theories/job254to270.v.html                                                  │
│   2.7750    3.0080  0.2330     8.40%   642  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Synthesis/Generic/UnsaturatedSolinas.v.html │
│   9.4740    9.7000  0.2260     2.39%    47  coq-bedrock2/bedrock2/src/bedrock2/WeakestPreconditionProperties.v.html                    │
│   0.0770    0.3000  0.2230   289.61%   229  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Translation/Proofs/Cmd.v.html               │
│   0.8290    1.0480  0.2190    26.42%   265  coq-fiat-crypto-with-bedrock/src/Assembly/Symbolic.v.html                                  │
│  29.8650   30.0830  0.2180     0.73%    10  coq-fourcolor/theories/job190to206.v.html                                                  │
│   8.2620    8.4680  0.2060     2.49%  3151  coq-fiat-crypto-with-bedrock/src/Assembly/WithBedrock/Proofs.v.html                        │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

🐇 Top 25 speed ups
┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                         TOP 25 SPEED UPS                                                         │
│                                                                                                                                  │
│   OLD       NEW      DIFF     %DIFF    Ln                   FILE                                                                 │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 138.4230  137.0060  -1.4170   -1.02%    47  coq-fiat-crypto-with-bedrock/src/Curves/Weierstrass/AffineProofs.v.html              │
│  53.2910   52.3400  -0.9510   -1.78%    50  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Synthesis/Examples/p224_64_new.v.html │
│  64.4260   63.7340  -0.6920   -1.07%   137  coq-fiat-parsers/src/Parsers/Refinement/SharpenedJSON.v.html                         │
│ 134.2930  133.7050  -0.5880   -0.44%   706  coq-bedrock2/bedrock2/src/bedrock2Examples/lightbulb.v.html                          │
│  30.0260   29.4520  -0.5740   -1.91%  1375  coq-fourcolor/theories/cfmap.v.html                                                  │
│  38.5420   38.0680  -0.4740   -1.23%   833  coq-fiat-crypto-with-bedrock/src/Fancy/Compiler.v.html                               │
│  23.7460   23.2840  -0.4620   -1.95%   114  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Synthesis/Examples/p224_64_new.v.html │
│  23.7910   23.3380  -0.4530   -1.90%   106  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Synthesis/Examples/p224_64_new.v.html │
│   4.2630    3.8740  -0.3890   -9.13%   127  coq-fiat-parsers/src/Parsers/BooleanRecognizerOptimizedReflectiveCorrectness.v.html  │
│  24.6900   24.3130  -0.3770   -1.53%    24  coq-fiat-crypto-with-bedrock/src/Rewriter/Passes/MultiRetSplit.v.html                │
│  10.8680   10.4980  -0.3700   -3.40%   436  coq-mathcomp-odd-order/theories/PFsection12.v.html                                   │
│  22.9790   22.6220  -0.3570   -1.55%    16  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Synthesis/Examples/p224_64.v.html     │
│  10.4740   10.1320  -0.3420   -3.27%    53  coq-category-theory/Construction/Comma/Natural/Transformation.v.html                 │
│  61.3340   61.0030  -0.3310   -0.54%    27  coq-fiat-crypto-with-bedrock/src/Rewriter/Passes/ToFancyWithCasts.v.html             │
│  28.2330   27.9150  -0.3180   -1.13%    10  coq-fourcolor/theories/job227to230.v.html                                            │
│   2.3970    2.0860  -0.3110  -12.97%   750  coq-bedrock2/bedrock2/src/bedrock2Examples/lightbulb.v.html                          │
│  12.4760   12.1740  -0.3020   -2.42%  1552  coq-fiat-crypto-with-bedrock/src/Util/FSets/FMapTrie.v.html                          │
│  91.8710   91.5860  -0.2850   -0.31%    22  coq-fiat-crypto-with-bedrock/src/Rewriter/Passes/ArithWithCasts.v.html               │
│  25.2300   24.9470  -0.2830   -1.12%    10  coq-fourcolor/theories/job384to398.v.html                                            │
│  24.5060   24.2250  -0.2810   -1.15%    24  coq-fiat-crypto-with-bedrock/src/Spec/Test/X25519.v.html                             │
│  24.5580   24.2840  -0.2740   -1.12%    19  coq-fiat-crypto-with-bedrock/src/Spec/Test/X25519.v.html                             │
│   0.6100    0.3360  -0.2740  -44.92%   237  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Translation/Proofs/Cmd.v.html         │
│  53.8110   53.5550  -0.2560   -0.48%    50  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Synthesis/Examples/p224_64_new.v.html │
│  22.2950   22.0440  -0.2510   -1.13%    16  coq-fiat-crypto-with-bedrock/src/Bedrock/Field/Synthesis/Examples/p256_64.v.html     │
│  46.0670   45.8330  -0.2340   -0.51%   578  coq-bedrock2/bedrock2/src/bedrock2Examples/insertionsort.v.html                      │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

@Alizter
Copy link
Contributor
Alizter commented Sep 20, 2022

The bench shows a few significant slow downs. Are they worth investigating?


@mrhaandi
Copy link
Contributor Author

The bench shows a few significant slow downs. Are they worth investigating?

I did inspect the largest slowdowns in percentage terms. They come from the trivial tactic being applied to goals that it does not solve, even with the more general unification heuristic use_metas_eagerly_in_conv_on_closed_terms = true.
This results in a slower no-op than before, which is expected. However, this setting is closer to the documented behavior of trivial.
The local performance fixes in the few affected cases are easy (do not apply trivial blindly to all goals).
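To illustrate the suggested fix, here is a minimal sketch (the lemma and goal are invented for illustration, not taken from the benchmarked developments): instead of chaining trivial after a tactic on every subgoal, guard it so a failing attempt backtracks cheaply rather than paying for the more general unification on goals it cannot close.

```coq
(* Sketch: guard [trivial] instead of applying it blindly. *)
Goal forall n : nat, n = n /\ 0 <= n.
Proof.
  intro n; split.
  - (* [trivial] closes reflexivity-style goals cheaply. *)
    try (solve [trivial]).
  - (* On goals [trivial] cannot solve, a guarded call above would
       have failed fast; here we use the right lemma directly. *)
    apply le_0_n.
Qed.
```

The pattern `try (solve [trivial])` only commits when trivial fully solves the goal, so the extra unification work is paid only where it can pay off.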

@ppedrot ppedrot added this to the 8.17+rc1 milestone Sep 23, 2022
…s in simple apply)

deduplicated auto and trivial code
added regression tests, changelog entry
@mrhaandi
Copy link
Contributor Author
mrhaandi commented Sep 23, 2022

@ppedrot Everything seems to be accounted for. I squashed and rebased on master for the merge.

@ppedrot
Copy link
Member
ppedrot commented Sep 23, 2022

Erm, in that case squashing was probably not that good if we wanted to spot the difference of behaviour patch-by-patch. Well, I guess it doesn't matter now. @coqbot merge now

@coqbot-app coqbot-app bot merged commit 8bde022 into rocq-prover:master Sep 23, 2022
proux01 added a commit to proux01/real-closed that referenced this pull request Sep 28, 2022
CohenCyril added a commit to math-comp/real-closed that referenced this pull request Sep 28, 2022
jfehrle pushed a commit to jfehrle/coq that referenced this pull request Dec 19, 2022
Labels
kind: enhancement Enhancement to an existing user-facing feature, tactic, etc. part: auto The tactics auto, eauto, etc.
Successfully merging this pull request may close these issues.

auto incoherent on goal with evars
eauto fails with Hint Immediate/Hint Resolve on matching head symbol where simple apply succeeds
6 participants