new JS Mesh class #272
Conversation
Sorry I don't have time to review this for now, paper deadline on Friday!
I like the idea of using this for pushing
I think the performance of the library in general is amazing, but if you're looking for even more areas of optimisation, the biggest performance loss on my project from the JS API is preparing the mesh for Manifold and converting the result back to a mesh usable for the game engine I'm using. Nothing can be done about the mesh format conversion itself, but I think an API for using meshes without merged vertices directly would help with this use case a lot. On my project with 2 meshes, about 45 ms (approx. 10% of total time) is spent merging vertices, which is not a lot, but it needs to be done for every mesh.
Good catch, fixed. How do you go about merging verts? If you care about manifoldness, you won't want to do it geometrically (by how close their positions are). I'm thinking some kind of parallel data structure will be needed alongside the unmerged version to facilitate mapping back.
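(For illustration only: a minimal TypeScript sketch of what such a parallel structure could look like. The names `UnmergedMesh` and `mergeMap` are hypothetical and not part of Manifold's API.)

```ts
// Hypothetical sketch: carry the unmerged (render-ready) vertex buffers
// together with a map that records, for every unmerged vertex, which merged
// (topological) vertex it corresponds to. Building Manifold input is then a
// pure index remap, with no position comparisons involved.
interface UnmergedMesh {
  vertPos: Float32Array;   // 3 floats per unmerged vertex
  triVerts: Uint32Array;   // 3 indices per triangle, into the unmerged verts
  mergeMap: Uint32Array;   // unmerged vertex index -> merged vertex index
}

// Remap triangle indices onto the merged vertex set.
function toMergedTris(mesh: UnmergedMesh): Uint32Array {
  const merged = new Uint32Array(mesh.triVerts.length);
  for (let i = 0; i < mesh.triVerts.length; i++) {
    merged[i] = mesh.mergeMap[mesh.triVerts[i]];
  }
  return merged;
}
```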
Codecov Report
Base: 92.10% // Head: 91.70% // Decreases project coverage by -0.40%. Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #272      +/-   ##
==========================================
- Coverage   92.10%   91.70%   -0.40%
==========================================
  Files          32       32
  Lines        3446     3461      +15
==========================================
  Hits         3174     3174
- Misses        272      287      +15
```
I hope I'm doing this right, now that you mention manifoldness, but the way I do it is:

```ts
// merge vertices with same position and add vertices to mesh
const hasher = new VertexHasher();
const mergedIndices = new Array<number>();
let nextIdx = 0;
for (let i = 0; i < packedVertexCount; i++) {
const pos = positions.get(i);
if (hasher.isUnique(pos)) {
mesh.vertPos.push(pos);
mergedIndices.push(nextIdx++);
} else {
const [x, y, z] = pos;
let j = 0;
for (; j < mesh.vertPos.length; j++) {
const [ox, oy, oz] = mesh.vertPos[j];
if (ox === x && oy === y && oz === z) {
break;
}
}
if (j === mesh.vertPos.length) {
mesh.vertPos.push(pos);
mergedIndices.push(nextIdx++);
} else {
mergedIndices.push(j);
}
}
}
// add triangles to mesh
let j = 0;
if (indexData === null) {
for (let i = 0; i < vertexCount; i += 3) {
mesh.triVerts[j++] = [
mergedIndices[i], mergedIndices[i + 1], mergedIndices[i + 2]
];
}
} else {
for (let i = 0; i < vertexCount; i += 3) {
mesh.triVerts[j++] = [
mergedIndices[indexData[i]],
mergedIndices[indexData[i + 1]],
mergedIndices[indexData[i + 2]]
];
}
}
```

It makes a vertPos buffer by iterating through each vertex, reusing vertices that have exactly the same position. It then keeps the per-vertex index mapping so that it can unmerge the vertices later, and uses it to build the triVerts buffer.
Comparing against all the previous vertices is very slow, so to speed up the process a helper class is used for checking the uniqueness of a vertex's position (checking uniqueness also saves the hash in the helper), which uses murmur32 hashing. The helper also handles collisions by creating a bucket per hash with all the positions for that hash, but I still need to test whether it performs better without buckets.

Edit: the original triangle indices code was unnecessary. Old code:

```ts
// merge vertices with same position and add vertices to mesh
const hasher = new VertexHasher();
const mergedIndices = new Array<number>();
let nextIdx = 0;
for (let i = 0; i < packedVertexCount; i++) {
const pos = positions.get(i);
if (hasher.isUnique(pos)) {
mesh.vertPos.push(pos);
mergedIndices.push(nextIdx++);
} else {
const [x, y, z] = pos;
let j = 0;
for (; j < mesh.vertPos.length; j++) {
const [ox, oy, oz] = mesh.vertPos[j];
if (ox === x && oy === y && oz === z) {
break;
}
}
if (j === mesh.vertPos.length) {
mesh.vertPos.push(pos);
mergedIndices.push(nextIdx++);
} else {
mergedIndices.push(j);
}
}
}
// add triangles to mesh and map triangles to original indices
let j = 0;
const origTris: OrigTris = new Array(triCount);
if (indexData === null) {
for (let i = 0; i < vertexCount; i += 3) {
origTris[j] = [i, i + 1, i + 2];
mesh.triVerts[j++] = [
mergedIndices[i], mergedIndices[i + 1], mergedIndices[i + 2]
];
}
} else {
for (let i = 0; i < vertexCount;) {
const a = indexData[i++];
const b = indexData[i++];
const c = indexData[i++];
origTris[j] = [a, b, c];
mesh.triVerts[j++] = [
mergedIndices[a], mergedIndices[b], mergedIndices[c]
];
}
}
```
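(For readers without access to the project: an illustrative reconstruction of what a VertexHasher-style helper like the one described above could look like. This is not the actual class; a simple bit-mixing hash stands in for murmur32, but the bucket-per-hash collision handling is the same idea.)

```ts
// Sketch of a position-uniqueness helper. Positions with the same hash land in
// the same bucket and are compared exactly, so collisions never merge distinct
// positions. A simple bit mix stands in for murmur32 here.
class VertexHasherSketch {
  private buckets = new Map<number, Array<[number, number, number]>>();

  private hash(pos: [number, number, number]): number {
    // Hash the raw float bits rather than the numeric values.
    const bits = new Uint32Array(new Float32Array(pos).buffer);
    return (bits[0] ^ Math.imul(bits[1], 2654435761) ^ Math.imul(bits[2], 40503)) >>> 0;
  }

  // Returns true the first time a position is seen; records it either way.
  isUnique(pos: [number, number, number]): boolean {
    const h = this.hash(pos);
    const bucket = this.buckets.get(h);
    if (bucket === undefined) {
      this.buckets.set(h, [pos]);
      return true;
    }
    for (const [x, y, z] of bucket) {
      if (x === pos[0] && y === pos[1] && z === pos[2]) return false;
    }
    bucket.push(pos);
    return true;
  }
}
```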
Alright, I've got the new Mesh hooked up for both input and output now. Definitely cleans up the code, so I think we're moving in the right direction. Definitely more to do though.
@rafern Yeah, so this is the classic problem - say your model is two cubes that share an edge. Manifold can successfully union these and give manifold output, but only by keeping those verts duplicated. If you run your algorithm, you will lose the manifoldness of the result, and in general it can be hard to recover which vert was whose neighbor. Basically, if you ever use a floating-point comparison to change your topology, your manifoldness will not be robust. Of course, you may be reading a format that has this problem, in which case repairing the manifold will be necessary anyway. The real question is: what do you consider your source of truth to be? What is your input and output?
@elalish reading the diffs, this is looking good. Hope to have time to test soon.
@elalish I never took any of this into consideration, but I see what you mean now. I'm using mesh objects from the game engine as input and output (no extra information about connectivity, only triangle data), and the input meshes are usually read from a GLB file. From what I understand, GLB doesn't have any face connectivity information. I tried looking for formats with connectivity information, but the best I could find was that OBJ maybe supports it. For now this isn't a problem, as each input mesh is a single manifold with no shared edges from disconnected faces, but I can see myself running into this problem in the future. Maybe I'll implement the cutting and stitching method for trying to reconstruct the connectivity, and keep using GLB.
@rafern GLB has a lot more connectivity info than, say, STL, since it uses referenced verts - you can represent a manifold just fine in GLB, which is what ManifoldCAD.org does. The problem comes when you want multiple properties per vert, since graphics drivers don't support that. So when you want a sharp edge (different normals) or a change of textures/UVs, you end up having to duplicate the verts along those joints for the format. The only format I know of that retains manifoldness in general is 3MF. I contribute to the Khronos glTF working group, so I've been thinking of proposing an extension to represent that lost topology. If you'd be interested in working together on it, that would be a big motivator for me!
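(A hedged illustration of the duplication described above, not taken from the glTF spec: a flat-shaded cube has 8 positions but needs 24 rendering vertices, because each corner carries a different normal per incident face. The record of which copies are really the same topological vertex is exactly what gets lost.)

```ts
// One cube corner as it ends up in a flat-shaded glTF primitive: the position
// is stored three times, once per incident face, because each copy needs its
// own normal. The fact that all three are "the same" vertex is not recorded.
const corner = [1, 1, 1];
const renderVerts = [
  { position: corner, normal: [1, 0, 0] },  // copy used by the +X face
  { position: corner, normal: [0, 1, 0] },  // copy used by the +Y face
  { position: corner, normal: [0, 0, 1] },  // copy used by the +Z face
];

// A hypothetical side-channel that would preserve the lost topology: all three
// render vertices map back to the same topological vertex (index 7 of 8).
const mergeMap = new Uint32Array([7, 7, 7]);
```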
@elalish I really like 3MF, btw. I was a bit annoyed that PrusaSlicer does not respect it in some regards that were important to me for some tinkering. I just want to ask you one thing, because you have much more experience: am I right to assume that keeping my geometry in the format like Manifold does is better, because it is easier to expand it before pushing to the GPU than trying to rebuild a manifold from mesh soup? I am making things to 3D print, not caring about textures or UVs or normals.
@elalish I'm much less familiar with this problem than you, so I would gladly help by making an example project with a new proposed format, but I'm not sure if I should be the one drafting a spec 😅. For example, as far as I know, the extra information we need is a list of manifolds, where each manifold is a list of triangle indices and cuts/disconnected vertices (or a list of connected vertices, which would use up more space but require almost no processing), but I'm probably missing something. Maybe there's extra data needed by some CAD software that I'm not aware of.
@hrgdavor I'm glad you like 3MF! I wrote most of that spec back when I was at M$. What did Prusa get wrong? I might go bug him about it 😄. @rafern No problem, I have plenty of experience with specs. I'm thinking a list of connected edges; for most uses I think the extra storage will be minimal. An example project is exactly what would be helpful: someone besides me saying it has a real-world application.
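(Strictly a thought experiment, not a draft of any real extension: one way such "connected edges" data could be represented, and how a consumer would apply it without any floating-point comparison.)

```ts
// Hypothetical payload: pairs of render-vertex indices that are topologically
// the same vertex. Applying it is an exact index remap (union-find), so
// manifoldness does not depend on position tolerances.
type MergedVertexPairs = Array<[number, number]>;

function buildCanonicalMap(pairs: MergedVertexPairs, vertCount: number): Uint32Array {
  // Union-find over vertex indices, with path compression in find().
  const parent = new Uint32Array(vertCount).map((_, i) => i);
  const find = (i: number): number => {
    while (parent[i] !== i) {
      parent[i] = parent[parent[i]];
      i = parent[i];
    }
    return i;
  };
  for (const [a, b] of pairs) parent[find(a)] = find(b);

  // Every vertex index maps to one canonical (merged) representative.
  const canonical = new Uint32Array(vertCount);
  for (let i = 0; i < vertCount; i++) canonical[i] = find(i);
  return canonical;
}
```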
```cpp
        vertNormal(vertNormal_),
        halfedgeTangent(halfedgeTangent_) {}

  Mesh(const MeshGL& in) {
```
Okay, I lied, this is another extraneous copy. Hopefully doesn't hurt perf too much, but we can always come back and optimize it out later.
I posted a feature request and explained some issues there. They would not budge an inch, probably due to how their internals work; it would be a lot of changes to support it. I got annoyed, and was rude, and closed the ticket 😄. I am not claiming I was right, but I was angry anyway :D. My impression is that they do not consider combined objects as one, but look at the parts. After that I was nagging Stewart from Kiri:Moto and he implemented changes to better handle 3MF in a few days. It all came from my experiments with voxels and instancing, and being lazy about using booleans to combine many cubes before export.
LGTM. For avoiding copying, I think we may need to ask the user to move the vector to us? We can probably optimize VecDH to lazily copy to unified memory until we absolutely need to, but that will probably require some large refactoring.
```diff
@@ -92,7 +92,13 @@
     "codecvt": "cpp",
     "regex": "cpp",
     "future": "cpp",
-    "shared_mutex": "cpp"
+    "shared_mutex": "cpp",
```
I think maybe we should put this into .gitignore to reduce noise. Btw, you can use compile_commands.json instead of manually configuring it; see https://cmake.org/cmake/help/latest/variable/CMAKE_EXPORT_COMPILE_COMMANDS.html and https://code.visualstudio.com/docs/cpp/faq-cpp#_how-do-i-get-intellisense-to-work-correctly
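(For anyone following along, the CMake side of that suggestion is a single standard flag; the paths here are just examples.)

```sh
cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
# build/compile_commands.json can then drive IntelliSense instead of hand-edited settings
```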
```cpp
  meshJS.set("triVerts",
             val(typed_memory_view(mesh.triVerts.size(), mesh.triVerts.data()))
                 .call<val>("slice"));
```
Is there documentation for typed_memory_view and this slice thing? I cannot find it in the documentation for val.h and embind.
Yeah, some of their docs are pretty well-hidden: https://emscripten.org/docs/porting/connecting_cpp_and_javascript/embind.html?highlight=typed_memory_view#memory-views, and slice is just the JS Array method; the cool thing is that val.call lets you call any JS function you want. Now that I know about this, I'm thinking it would be better to move more of our logic from bindings.js to bindings.cpp so we can minimize the footprint of our exposed bindings.
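(To make the slice point concrete on the JS side: a sketch of the general TypedArray pattern, not the actual binding code. heapF32 here is just a stand-in for the wasm heap.)

```ts
// Stand-in for the wasm heap; in real embind output this is the module's HEAPF32.
const heapF32 = new Float32Array(4096);

// typed_memory_view conceptually hands JS a *view* into that heap: zero-copy,
// but not owned by the JS garbage collector, and invalidated if the heap grows.
const view = heapF32.subarray(1024, 1024 + 9);

// .slice() copies the view into a standalone TypedArray that JS owns outright,
// which is the copy being discussed here.
const owned = view.slice();
```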
Just a comment. I'm not sure if I'm right, but I feel there is some overhead in using val, as it has to switch context between JS and wasm. It might be better to use EM_JS to do all the JavaScript work in one context switch.
That may be true, but val is also vastly easier to use (no restrictions on types). Still, that may be reason enough not to bother moving the other logic. Some perf comparisons would be interesting!
@stewartoallen Looking forward to any perf data you find; hopefully this is most of the way to your solution's improvement. If there's still a significant gap, I'm sure we can optimize further.
@elalish Re-integrated with my project this morning and it's working as expected. The performance is a bit slower due to copying of data that was optional in my implementation, but it's still orders of magnitude faster than the previous version, so all good there. I am getting memory errors when running inside of animation loops: it runs for a bit, then crashes in WASM-land. I still haven't distilled this down to a bit of code you can run stand-alone, but it's not fancy, just a boolean subtract in a loop with proper cleanup. It runs in a web-worker context and copies the data to a shared buffer that the main thread renders in ThreeJS.
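(Not stewartoallen's actual code: a rough sketch of the pattern being described, with assumed/approximate API names (subtract, getMesh, delete) standing in for the real bindings.)

```ts
// Rough sketch of the failing pattern: one boolean subtract per animation tick
// inside a worker, with explicit cleanup of each intermediate result because
// wasm-side objects are not garbage collected. API names here are assumptions.
function tick(base: any, makeCutter: (t: number) => any, t: number): void {
  const cutter = makeCutter(t);          // build this frame's cutting solid
  const result = base.subtract(cutter);  // boolean difference
  const mesh = result.getMesh();         // copy vertex/triangle data out of wasm
  postMessage(mesh);                     // hand the buffers to the main thread (ThreeJS)
  cutter.delete();                       // free wasm-side objects explicitly
  result.delete();
}
```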
Great to hear on the perf, thank you! Can you repro with a script in manifoldcad.org? That might be simplest.
I've tried, but so far I have not been able to.
@elalish one more cool thing in 3MF that is unfortunately not supported or used by Prusa (I think), …
Ah, that one was added after my time. Interesting.
* new JS Mesh class
* fix Mesh
* switch MeshGL to vectors
* new Mesh constructor
* export Mesh and updated types
* got scallop working again
* optional normals
* fixed Mesh constructor
* added halfedgeTangent
* cleanup
I think I have a much better, cleaner, and faster way to expose our Mesh in JS. @stewartoallen, @rafern, would you mind taking a look at this breaking API change? The only copy operation is slice, which ensures our memory is managed by the JS garbage collector rather than needing manual C++ deletion, I believe. This should be pretty fast, but it would be great if someone wanted to benchmark it to verify.