improved decimation #1147
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##           master    #1147      +/-   ##
==========================================
- Coverage   91.70%   91.68%   -0.03%
==========================================
  Files          30       30
  Lines        5931     5951      +20
==========================================
+ Hits         5439     5456      +17
- Misses        492      495       +3

View full report in Codecov by Sentry.
src/manifold.cpp (outdated)

  return Manifold(impl);
  // const double oldTolerance = GetCsgLeafNode().GetImpl()->tolerance_;
  // Manifold result = SetTolerance(tolerance);
  // result.GetCsgLeafNode().GetImpl()->tolerance_ = oldTolerance;
@pca006132 What do you think is the right pattern to reduce duplication between these functions? The one I've commented out here didn't compile.
eh, I guess this is fine - only a few lines duplicated really.
@@ -77,14 +77,14 @@ TEST(Properties, Tolerance) {
 }

 TEST(Properties, ToleranceSphere) {
-  const int n = 100;
+  const int n = 1000;
This used to fail dramatically, but now works well.
@@ -567,7 +570,7 @@ void Manifold::Impl::CollapseEdge(const int edge, std::vector<int>& edges) {
   const vec3 pNew = vertPos_[endVert];
   const vec3 pOld = vertPos_[toRemove.startVert];
   const vec3 delta = pNew - pOld;
-  const bool shortEdge = la::dot(delta, delta) < tolerance_ * tolerance_;
+  const bool shortEdge = la::dot(delta, delta) < epsilon_ * epsilon_;
Should this be epsilon? I thought we wanted to remove edges shorter than tolerance (tolerance being the shortest edge length the user wants to keep).
No, actually the check was already changed to epsilon above, but this line was forgotten. Really the flagged-edge collapse should be a superset of the short-edge collapse, but the short-edge path relaxes a few checks. I think that relaxation is only valid for degenerate triangles, rather than anything merely "within tolerance" - all the longer edges should go through the additional flagged-edge checks.
src/edge_op.cpp (outdated)

@@ -317,36 +320,35 @@ void Manifold::Impl::SimplifyTopology() {
   numFlagged = 0;
   ShortEdge se{halfedge_, vertPos_, epsilon_};
Should this ShortEdge check be limited to newly added edges? Adding the check here seems fine.
I tried, but it fails when two retained verts are at the same position in space - that edge can't be collapsed, even though it's zero length.
 #endif
 }

 {
   ZoneScopedN("RecursiveEdgeSwap");
Same question for edge swap: should we swap old edges? Adding the check makes 3 tests fail with too many degenerate triangles - I'm wondering why.
OK, I think I know why - we should check both candidates. Will try that later...
We should probably put the check within the actual edge swap function as well, because we recursively do edge swapping?
Yeah, we should look into this one more - I was surprised by those failures too.
Nice job, that was easier than I'd feared.
diff --git a/src/edge_op.cpp b/src/edge_op.cpp
index 6ed9e1fd..b5a5f065 100644
--- a/src/edge_op.cpp
+++ b/src/edge_op.cpp
@@ -48,10 +48,13 @@ struct ShortEdge {
   VecView<const Halfedge> halfedge;
   VecView<const vec3> vertPos;
   const double tolerance;
+  const int firstNewVert;
   bool operator()(int edge) const {
     const Halfedge& half = halfedge[edge];
-    if (half.pairedHalfedge < 0) return false;
+    if (half.pairedHalfedge < 0 ||
+        (half.startVert < firstNewVert && half.endVert < firstNewVert))
+      return false;
     // Flag short edges
     const vec3 delta = vertPos[half.endVert] - vertPos[half.startVert];
     return la::dot(delta, delta) < tolerance * tolerance;
@@ -94,10 +97,15 @@ struct SwappableEdge {
   VecView<const vec3> vertPos;
   VecView<const vec3> triNormal;
   const double tolerance;
+  const int firstNewVert;
   bool operator()(int edge) const {
     const Halfedge& half = halfedge[edge];
     if (half.pairedHalfedge < 0) return false;
+    if (half.startVert < firstNewVert && half.endVert < firstNewVert &&
+        halfedge[NextHalfedge(edge)].endVert < firstNewVert &&
+        halfedge[NextHalfedge(half.pairedHalfedge)].endVert < firstNewVert)
+      return false;
     int tri = edge / 3;
     ivec3 triEdge = TriOf(edge);
@@ -318,7 +326,7 @@ void Manifold::Impl::SimplifyTopology(int firstNewVert) {
   {
     ZoneScopedN("CollapseShortEdge");
     numFlagged = 0;
-    ShortEdge se{halfedge_, vertPos_, epsilon_};
+    ShortEdge se{halfedge_, vertPos_, epsilon_, firstNewVert};
     s.run(nbEdges, se, [&](size_t i) {
       const bool didCollapse = CollapseEdge(i, scratchBuffer);
       if (didCollapse) numFlagged++;
@@ -353,7 +361,8 @@ void Manifold::Impl::SimplifyTopology(int firstNewVert) {
   {
     ZoneScopedN("RecursiveEdgeSwap");
     numFlagged = 0;
-    SwappableEdge se{halfedge_, vertPos_, faceNormal_, tolerance_};
+    SwappableEdge se{halfedge_, vertPos_, faceNormal_, tolerance_,
+                     firstNewVert};
     std::vector<int> edgeSwapStack;
     std::vector<int> visited(halfedge_.size(), -1);
     int tag = 0;
OK, parallelized pinched-verts splitting:

void Manifold::Impl::SplitPinchedVerts() {
  ZoneScoped;
#if MANIFOLD_PAR == 1
  if (halfedge_.size() > 1e5) {
    std::mutex mutex;
    std::vector<size_t> problematic;
    std::vector<size_t> largestEdge(NumVert(),
                                    std::numeric_limits<size_t>::max());
    auto halfedgeCount = halfedge_.size();
    tbb::combinable<std::vector<bool>> store(
        [halfedgeCount]() { return std::vector<bool>(halfedgeCount, false); });
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, halfedge_.size()),
        [&store, &mutex, &problematic, &largestEdge, this](const auto& r) {
          auto& local = store.local();
          for (auto i = r.begin(); i < r.end(); ++i) {
            if (local[i]) continue;
            local[i] = true;
            const int vert = halfedge_[i].startVert;
            size_t largest = i;
            ForVert(i, [&local, &largest](int current) {
              local[current] = true;
              largest = std::max(largest, static_cast<size_t>(current));
            });
            auto expected = std::numeric_limits<size_t>::max();
            if (!reinterpret_cast<std::atomic<size_t>*>(largestEdge.data() +
                                                        vert)
                     ->compare_exchange_strong(expected, largest) &&
                expected != largest) {
              // we know that there is another loop...
              mutex.lock();
              problematic.push_back(largest);
              mutex.unlock();
            }
          }
        });
    std::vector<bool> halfedgeProcessed(halfedge_.size(), false);
    for (size_t i : problematic) {
      if (halfedgeProcessed[i]) continue;
      vertPos_.push_back(vertPos_[halfedge_[i].startVert]);
      const int vert = NumVert() - 1;
      ForVert(i, [this, vert, &halfedgeProcessed](int current) {
        halfedgeProcessed[current] = true;
        halfedge_[current].startVert = vert;
        halfedge_[halfedge_[current].pairedHalfedge].endVert = vert;
      });
    }
  } else
#endif
  {
    std::vector<bool> vertProcessed(NumVert(), false);
    std::vector<bool> halfedgeProcessed(halfedge_.size(), false);
    for (size_t i = 0; i < halfedge_.size(); ++i) {
      if (halfedgeProcessed[i]) continue;
      int vert = halfedge_[i].startVert;
      if (vertProcessed[vert]) {
        vertPos_.push_back(vertPos_[vert]);
        vert = NumVert() - 1;
      } else {
        vertProcessed[vert] = true;
      }
      ForVert(i, [this, &halfedgeProcessed, vert](int current) {
        halfedgeProcessed[current] = true;
        halfedge_[current].startVert = vert;
        halfedge_[halfedge_[current].pairedHalfedge].endVert = vert;
      });
    }
  }
}
And thinking about it, edge deduplication shouldn't require sorting all the entries. We are guaranteed that there are no pinched vertices, so we can just check locally around the starting vertex for duplicate edges. That should let us eliminate the expensive sort for the rare cases where edges are actually duplicated.
I'm impressed with your pinched-vert parallelization - I looked at that and couldn't figure out how to parallelize it. Care to open a PR with that? And the deduplication stuff too?
Done. And I think my patch for
Hmm, weird:
With my patch, this one is failing. Probably because I forgot to turn on
Force-pushed from 87e6ad9 to cec28f7 (compare)
I tweaked that test slightly and it passes - I think the behavior is expected. Thanks for the patch!
Fixes #1017

Added Simplify(), which is a lot like SetTolerance(), but with a clearer name that matches our CrossSection API.

We now have the beginnings of a halfway-decent decimator; in 3 seconds we can take a sphere with 8 million triangles and reduce it to under 2,500, looking like this:

I've also updated our edge collapse to only operate on intersected edges when it runs as part of the Boolean cleanup, which several people have requested and which should also improve performance a bit. I tried to apply the same logic to edge swapping, but it broke a few tests, so I'll come back to that.