feat: better universe normalization by arthur-adjedj · Pull Request #8769 · leanprover/lean4

feat: better universe normalization #8769


Draft · wants to merge 5 commits into master
Conversation

arthur-adjedj (Contributor)

This PR attempts to make Lean's universe equality checker more complete by improving the normalization procedure over universe levels. In particular, `imax u (imax v w)` now reduces to `imax (max u v) w`, and `imax u (max v w)` reduces to `max (imax u v) (imax u w)` (both reductions are valid under Lean's usual model of universes).
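Both reductions can be sanity-checked in the usual ℕ-valued model of universe levels, where `imax u v` is `0` when `v = 0` and `max u v` otherwise. The sketch below is illustrative and not part of the PR (the `imaxN` name and the bounded exhaustive check are mine):

```lean
-- ℕ-valued model of universe levels (illustrative, not Lean's `Level` API):
-- `imaxN u v` is `0` if `v = 0`, and `max u v` otherwise.
def imaxN (u v : Nat) : Nat := if v = 0 then 0 else max u v

-- Exhaustively check both proposed reductions for all levels below 5.
example : ∀ u < 5, ∀ v < 5, ∀ w < 5,
    imaxN u (imaxN v w) = imaxN (max u v) w := by decide

example : ∀ u < 5, ∀ v < 5, ∀ w < 5,
    imaxN u (max v w) = max (imaxN u v) (imaxN u w) := by decide
```

The bounded `decide` check is not a proof for all levels, but it makes it easy to spot a wrong candidate reduction quickly.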

Taking advantage of the better normalization procedure, this PR also improves the generation of linear `noConfusion` constructions, making them as `Sort`-polymorphic as those produced by the original quadratic algorithm.

This is only a draft PR, opened to experiment with this change and to benchmark the potential performance impact.

nomeata (Collaborator) commented Jun 13, 2025

Thanks for experimenting with this. This is blocking my idea of exposing these withCtor functions as generally available helpers, and as building blocks for a possible future different match elaboration.

```lean
    addOffset (mkIMaxAux l₁ l₂) k
  match l₂ with
  | max l₂ l₃ =>
    normalize (mkLevelMax (mkIMaxAux l₁ l₂) (mkIMaxAux l₁ l₃)) k
```
Contributor
This might be a bad idea, since it adds a lot of duplication. (Also, dedent the match arms.)

arthur-adjedj (Contributor, Author) commented Jun 13, 2025

I'm also not a fan of this particular branch. I wonder how many terms of the form `imax u (max v w)` appear in real-world examples? In my mental model of universes, `max`es only appear when manipulating things that are already known to live in `Type` rather than in an arbitrary `Sort`; as such, I don't see how one could appear on the right-hand side of an `imax`. Examples are welcome.

```diff
@@ -359,10 +359,10 @@ private def isExplicitSubsumed (lvls : Array Level) (firstNonExplicit : Nat) : B
   let max := lvls[firstNonExplicit - 1]!.getOffset
   isExplicitSubsumedAux lvls max firstNonExplicit

-partial def normalize (l : Level) : Level :=
+partial def normalize (l : Level) (offset := 0) : Level :=
 if isAlreadyNormalizedCheap l then l
```
Rob23oba (Contributor) commented Jun 13, 2025
I'm assuming this is meant to make `normalize l off` equivalent to `addOffset (normalize l) off`? Then surely the `then` branch here should return `addOffset l off`.
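The invariant being discussed, `normalize l off = addOffset (normalize l) off`, can be illustrated on a toy level type. Everything below is a hedged sketch with made-up names (`Lvl`, `normAux`), not the actual `Level` implementation:

```lean
-- Toy level type with only `zero`, `succ`, and parameters.
inductive Lvl where
  | zero
  | succ (l : Lvl)
  | param (n : Nat)
deriving DecidableEq

-- Re-apply an accumulated offset by wrapping in `succ`.
def addOffset (l : Lvl) : Nat → Lvl
  | 0     => l
  | n + 1 => .succ (addOffset l n)

-- Tail-recursive traversal carrying the offset. The early return must
-- re-apply the accumulated offset; returning bare `l` there would be
-- exactly the bug pointed out above.
def normAux : Lvl → Nat → Lvl
  | .succ l, off => normAux l (off + 1)
  | l,       off => addOffset l off

example : normAux (.succ (.succ (.param 0))) 0 = addOffset (.param 0) 2 := by decide
```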

arthur-adjedj (Contributor, Author)
You're right; I'll just remove the offset for now (I mainly wanted the function to be tail-recursive).

Rob23oba (Contributor)

Hmm, this seems to introduce some problems, one of which is that `imax (imax u v) (imax u v)` gets turned into `imax (max (imax u v) u) v` rather than `imax u v`.
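For what it's worth, the two forms are at least semantically equal in the ℕ-valued model of levels, so this looks like a completeness problem for normalization rather than a soundness one. A quick illustrative check (with a made-up `imaxN` standing in for `imax`):

```lean
-- ℕ-valued model: `imaxN u v` is `0` if `v = 0`, else `max u v`.
def imaxN (u v : Nat) : Nat := if v = 0 then 0 else max u v

-- `imax (imax u v) (imax u v)`, `imax (max (imax u v) u) v`, and `imax u v`
-- all agree in the model; only their syntactic normal forms differ.
example : ∀ u < 5, ∀ v < 5,
    imaxN (imaxN u v) (imaxN u v) = imaxN u v := by decide

example : ∀ u < 5, ∀ v < 5,
    imaxN (max (imaxN u v) u) v = imaxN u v := by decide
```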

arthur-adjedj (Contributor, Author)

> Hmm, this seems to introduce some problems, one of which is that `imax (imax u v) (imax u v)` gets turned into `imax (max (imax u v) u) v` rather than `imax u v`.

Ideally, any `imax` being normalized should have been constructed via `mkIMaxAux`, so `imax (imax u v) (imax u v)` would already have been "normalized" to `imax u v` before this normalization step.

As a side note, I am not fond of the fact that different normalization steps are performed when constructing terms versus when normalizing them during equality checking. I would much rather have an explicit pass that does all of this.

This draft will probably be stale for a month or so (holiday + paper deadline incoming), but I've started implementing a complete checker for universe equality instead of simply juggling better normalization rules. In particular, this involves doing case splits on certain universe levels to get rid of problematic `imax`es, while being smart about it to avoid any exponential behaviour. The core advantage of such a checker is that, by not influencing `normalize` too much, it would help avoid the unification issues that have already arisen in this draft, making it an (ideally) backwards-compatible way to improve universe checking.
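The case split mentioned above can be made concrete in the ℕ-valued model: once the right argument of an `imax` is known to be either zero or a successor, the `imax` disappears in favour of `0` or a plain `max`. A hedged sketch with an illustrative `imaxN`:

```lean
-- ℕ-valued model: `imaxN u v` is `0` if `v = 0`, else `max u v`.
def imaxN (u v : Nat) : Nat := if v = 0 then 0 else max u v

-- Splitting on whether the right argument is zero or a successor
-- eliminates the `imax`, at the cost of one extra subgoal per split.
example : ∀ u < 8, imaxN u 0 = 0 := by decide
example : ∀ u < 8, ∀ v < 8, imaxN u (v + 1) = max u (v + 1) := by decide
```

Doing such splits naively on every level variable is what risks the exponential behaviour the comment alludes to.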

Rob23oba (Contributor)

Yes, I agree that merely adding normalization rules won't solve the problem that the procedure is simply incomplete. I also had an idea for writing a complete procedure and am interested in your approach.

3 participants