SDF Refactor #876 · elalish/manifold

Merged

elalish merged 18 commits into master from sdfBlocks on Jul 26, 2024

Conversation

@elalish (Owner) commented Jul 23, 2024

Fixes #655

The big breaking change here is moving LevelSet from a free function in its own library to MeshGL::LevelSet - I decided this is a better home so that we can deprecate Mesh, but also only incur the cost of converting to a Manifold for those who want to (it can be a lot of work finding the flat faces and collapsing edges).
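
For illustration, a hedged sketch of what a call site might look like after this move (the parameter list and defaults here are assumptions, not guaranteed by the PR): meshing stops at MeshGL, and building a Manifold is an explicit, separately paid step.

```cpp
// Sketch only: signature details assumed, not confirmed by this PR.
#include "manifold.h"  // assumed single public header; adjust per build

using namespace manifold;

int main() {
  // A simple SDF, positive inside: the unit sphere.
  auto sphere = [](glm::vec3 p) { return 1.0f - glm::length(p); };

  // LevelSet now lives on MeshGL, per this PR, so no Manifold is built
  // unless the caller asks for one.
  MeshGL mesh = MeshGL::LevelSet(
      sphere, Box({-1.1f, -1.1f, -1.1f}, {1.1f, 1.1f, 1.1f}),
      /*edgeLength=*/0.05f);

  // Only pay for flat-face finding and edge collapsing when you actually
  // want a Manifold:
  Manifold solid(mesh);
  return solid.Status() == Manifold::Error::NoError ? 0 : 1;
}
```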

I also changed LevelSet to precalculate the SDF values and reference them - I'm seeing a 50% speedup on larger tests (I added SDF.Blobs for this). It requires more memory, but if we want, as a follow-on, it would be fairly easy now to calculate it in blocks of a fixed memory size.
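
As a sketch of that precompute-then-lookup pattern (helper names here are hypothetical, not the PR's internals): sample the SDF once per lattice point into a flat buffer up front, then let the marching pass index into the buffer instead of re-invoking the callback for every neighbor query.

```cpp
#include <cstddef>
#include <functional>
#include <vector>
#include <glm/glm.hpp>

// Flat index into a dims.x * dims.y * dims.z lattice of samples.
inline size_t FlatIndex(glm::ivec3 i, glm::ivec3 dims) {
  return (size_t(i.z) * dims.y + i.y) * dims.x + i.x;
}

std::vector<float> PrecomputeVoxels(const std::function<float(glm::vec3)>& sdf,
                                    glm::vec3 origin, float spacing,
                                    glm::ivec3 dims) {
  std::vector<float> voxels(size_t(dims.x) * dims.y * dims.z);
  // This loop is what trades memory for speed; a follow-on could run it
  // over fixed-size blocks instead of the whole grid.
  for (int z = 0; z < dims.z; ++z)
    for (int y = 0; y < dims.y; ++y)
      for (int x = 0; x < dims.x; ++x)
        voxels[FlatIndex({x, y, z}, dims)] =
            sdf(origin + spacing * glm::vec3(x, y, z));
  return voxels;
}
```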

I've also updated our bindings to the API changes.

@elalish added the breaking change (API changes for major versions) label on Jul 23, 2024
@elalish self-assigned this on Jul 23, 2024
@elalish (Owner, Author) commented Jul 24, 2024

Related to #655 - with the SDF getting pre-computed, this would also make it a lot easier to do batch processing for Python, etc. I'm not totally thrilled with the API complexity of it, but we did do it for Warp, so...

Also, the batching won't speed up getting a higher precision SDF. Personally, I'd rather these languages figure out how to bind a pure function so it can be invoked in parallel, but alas. @zalo opinions?

@elalish requested a review from pca006132 on Jul 24, 2024
@elalish (Owner, Author) commented Jul 24, 2024

@pca006132 Weird, I'm getting vec out of range, but only for Python. Any ideas?

```diff
@@ -12,7 +12,8 @@
       "program": "${workspaceFolder}/build/test/manifold_test",
       "args": [
         "--gtest_break_on_failure",
-        "--gtest_filter=Smooth.RefineQuads"
+        "--gtest_catch_exceptions=0",
```
@pca006132 (Collaborator) commented:

maybe we should gitignore this later

@pca006132 (Collaborator) commented:

No idea, but I can help to debug this later. It seems pretty consistent.

@zalo (Contributor) commented Jul 24, 2024

The memory-poor way of doing it would be to give the client program some minimal amount of data that it could use to compute the sample points for processing in parallel. This seems needlessly complicated, imo.

The numpythonic way of doing it would be similar to my old PR: pass the SDF sample grid as an [N, 3] (or [3, N], depending on speed) numpy array of float coordinates into the delegate function, so the SDF can be evaluated as a batched numpy operation on the Python side.

A compromise might be to pass only some of the sample points with each call, to balance memory usage and parallelism, and then to call it many times.

This has the added benefit of subsequent binary search queries only needing to sample the points near the surface, vastly improving the speed.
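
A hedged sketch of that compromise (names and types are illustrative, not manifold's actual binding API): the delegate receives one block of points per call, so a Python implementation could evaluate each block as a single vectorized numpy expression while memory use stays bounded by the block size.

```cpp
#include <algorithm>
#include <functional>
#include <vector>
#include <glm/glm.hpp>

// Hypothetical batched delegate: [N, 3] points in, [N] values out.
using BatchedSDF =
    std::function<void(const std::vector<glm::vec3>& points,
                       std::vector<float>& values)>;

void EvaluateInBlocks(const BatchedSDF& sdf,
                      const std::vector<glm::vec3>& allPoints,
                      size_t blockSize, std::vector<float>& out) {
  out.resize(allPoints.size());
  for (size_t start = 0; start < allPoints.size(); start += blockSize) {
    const size_t n = std::min(blockSize, allPoints.size() - start);
    const std::vector<glm::vec3> block(allPoints.begin() + start,
                                       allPoints.begin() + start + n);
    std::vector<float> values;
    sdf(block, values);  // one batched call; on the Python side, one numpy op
    std::copy(values.begin(), values.end(), out.begin() + start);
  }
}
```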

@elalish (Owner, Author) commented Jul 25, 2024

@zalo, agreed - I figure once I make it batched for ourselves, I can use that same system to batch to Python. However, I don't follow this:

> This has the added benefit of subsequent binary search queries only needing to sample the points near the surface, vastly improving the speed.

Can you explain how the speed is faster near the surface, and also how this relates to the batching? I sort of felt the very nature of binary search would pretty much preclude batching.

@zalo (Contributor) commented Jul 25, 2024

My intuition is that, when there's a zero-crossing, you can record the two sample points on either side of it for the binary search later; there's no need to sample the empty volume far from the zero-crossing isosurface anymore...

It's not really a batching-specific thing... just that every batched call of the algorithm doesn't need to include all of the points in the box volume.

@pca006132 (Collaborator) commented:

But in general, you have no way of knowing whether some volume far away from the zero-crossing is empty unless you sample it? E.g. a gyroid.

@zalo (Contributor) commented Jul 25, 2024

@pca006132 Are you supposed to binary search between two positive distances? 😅
My thinking is: if the field has small details that slip between the cracks, then perhaps it's best to increase the base resolution...

@pca006132 (Collaborator) commented:

I think we have no information about anything outside a sphere of radius r when we get a positive distance r. Binary search can help you find the root when you are already near it (in effect, moving the points of the evaluation grid when we are near the surface), but it does not make things faster, because we still have to evaluate nearby points according to the base resolution.
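
For concreteness, a minimal sketch of the edge bisection being discussed (a hypothetical standalone helper, assuming the positive-inside sign convention): it only refines a crossing the base grid has already found, which is why it cannot substitute for the full-volume pass.

```cpp
#include <functional>
#include <glm/glm.hpp>

// Hypothetical helper: given two grid corners whose SDF values straddle
// zero, bisect along the edge toward the zero crossing. It cannot discover
// geometry the base grid never sampled.
glm::vec3 FindSurfaceVert(const std::function<float(glm::vec3)>& sdf,
                          glm::vec3 inside, glm::vec3 outside,
                          int iters = 20) {
  for (int i = 0; i < iters; ++i) {
    const glm::vec3 mid = 0.5f * (inside + outside);
    // Keep the half of the edge that still straddles the surface.
    (sdf(mid) > 0 ? inside : outside) = mid;
  }
  return 0.5f * (inside + outside);
}
```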

```diff
@@ -216,24 +201,26 @@ struct ComputeVerts {
       neighborIndex += 1;
       neighborIndex.w = 0;
     }
-    const float val = BoundedSDF(neighborIndex);
+    const float val =
+        voxels[EncodeIndex(neighborIndex + glm::ivec4(1, 1, 1, 0), gridPow)];
```
@pca006132 (Collaborator) commented:

It seems that this neighborIndex + glm::ivec4(1, 1, 1, 0) can exceed the gridSize range. I see neighborIndex = glm::ivec4(31, 32, 1, 0) for gridSize = glm::ivec3(31, 31, 31).

And this line is causing the index out of range error.
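
For illustration, a hedged sketch of one way to keep that lookup in range (not necessarily the fix that was merged; the lattice bounds are assumed):

```cpp
#include <glm/glm.hpp>

// Hypothetical guard: clamp the shifted neighbor index so a value like
// (31, 32, 1, 0) + (1, 1, 1, 0) on a gridSize of (31, 31, 31) cannot index
// past the precomputed voxel buffer.
glm::ivec4 ClampedNeighbor(glm::ivec4 neighborIndex, glm::ivec3 gridSize) {
  glm::ivec4 shifted = neighborIndex + glm::ivec4(1, 1, 1, 0);
  return glm::min(shifted, glm::ivec4(gridSize + 1, shifted.w));
}
```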

@pca006132 (Collaborator) commented:

Also, how should we batch-compute the SDF function if we want to do root finding?

@elalish (Owner, Author) commented Jul 25, 2024

@zalo @pca006132 I think you're actually in agreement. Checking the whole volume is necessary as the first step, and it can be batched. Binary search happens only on the output verts and cannot be batched. You'll see this is how it works in this PR already.

@elalish (Owner, Author) commented Jul 25, 2024

Ah, I got it to repro with Address Sanitizer! Should be able to get to the bottom of it now.

@zalo (Contributor) commented Jul 25, 2024

> Also, how should we batch-compute the SDF function if we want to do root finding?

The root-finding routines would need to enqueue their sample requests and store their state until the batched query can get back to them 😅.

@elalish (Owner, Author) commented Jul 25, 2024

I figured out why I wasn't catching it locally: somehow, despite not specifying the -DMANIFOLD_EXCEPTIONS flag, I was not getting the default of ON, probably due to a stale CMake cache. Also, the Python gyroid example uses a different n, which is what happened to trigger this case.

@elalish merged commit 3bf7f4b into master on Jul 26, 2024
19 checks passed
@elalish deleted the sdfBlocks branch on Jul 26, 2024
@elalish mentioned this pull request on Nov 5, 2024
Labels: breaking change (API changes for major versions)
Development: successfully merging this pull request may close SDF updates (#655)
3 participants