diff --git a/CHANGELOG.md b/CHANGELOG.md
index e4fdf73..1ff87ed 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,9 +2,24 @@
The following changes are present in the `main` branch of the repository and are not yet part of a release:
+ - N/A
+
+## Version 0.10.0
+
+This release implements ["Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender; 2023)](https://animation.rwth-aachen.de/publication/0583/), mesh cleanup based on ["Mesh Displacement: An Improved Contouring Method for Trivariate Data" (Moore, Warren; 1991)](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.5214&rep=rep1&type=pdf) and a new, more efficient domain decomposition (see README.md for more details).
+
+ - Lib: Implement new spatial decomposition based on a regular grid of subdomains (subdomains are dense marching cubes grids)
- CLI: Make new spatial decomposition available in CLI with `--subdomain-grid=on`
- - Lib: Implement new spatial decomposition based on a regular grid of subdomains, subdomains are dense marching cubes grids
- - Lib: Support for reading and writing PLY meshes
+ - Lib: Implement weighted Laplacian smoothing to remove bumps from surfaces according to the paper "Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender 2023)
+ - CLI: Add arguments to enable and control weighted Laplacian smoothing `--mesh-smoothing-iters=...`, `--mesh-smoothing-weights=on` etc.
+ - Lib: Implement `marching_cubes_cleanup` function: a marching cubes "mesh cleanup" decimation inspired by "Mesh Displacement: An Improved Contouring Method for Trivariate Data" (Moore, Warren 1991)
+ - CLI: Add argument to enable mesh cleanup: `--mesh-cleanup=on`
+ - Lib: Add functions to `TriMesh3d` to find non-manifold edges and vertices
+ - CLI: Add arguments to check if output meshes are closed and manifold (no holes or non-manifold edges and vertices): `--check-mesh-manifold=on`, `--check-mesh-closed=on`
+ - Lib: Support for mixed triangle and quad meshes
+ - Lib: Implement `convert_tris_to_quads` function: greedily merge triangles to quads if they fulfill certain criteria (maximum angle in quad, "squareness" of the quad, angle between triangle normals)
+ - CLI: Add arguments to enable and control triangle to quad conversion with `--generate-quads=on` etc.
+ - Lib: Support for reading and writing PLY meshes (`MixedTriQuadMesh3d`)
- CLI: Support for filtering input particles using an AABB with `--particle-aabb-min`/`--particle-aabb-max`
- CLI: Support for clamping the triangle mesh using an AABB with `--mesh-aabb-min`/`--mesh-aabb-max`
@@ -69,9 +84,9 @@ The following changes are present in the `main` branch of the repository and are
This release fixes a couple of bugs that may lead to inconsistent surface reconstructions when using domain decomposition (i.e. reconstructions with artificial bumps exactly at the subdomain boundaries, especially on flat surfaces). Currently there are no other known bugs and the domain decomposed approach appears to be really fast and robust.
-In addition the CLI now reports more detailed timing statistics for multi-threaded reconstructions.
+In addition, the CLI now reports more detailed timing statistics for multi-threaded reconstructions.
-Otherwise this release contains just some small changes to command line parameters.
+Otherwise, this release contains just some small changes to command line parameters.
- Lib: Add a `ParticleDensityComputationStrategy` enum to the `SpatialDecompositionParameters` struct. In order for domain decomposition to work consistently, the per particle densities have to be evaluated to a consistent value between domains. This is especially important for the ghost particles. Previously, this resulted inconsistent density values on boundaries if the ghost particle margin was not at least 2x the compact support radius (as this ensures that the inner ghost particles actually have the correct density). This option is now still available as the `IndependentSubdomains` strategy. The preferred way, that avoids the 2x ghost particle margin is the `SynchronizeSubdomains` where the density values of the particles in the subdomains are first collected into a global storage. This can be faster as the previous method as this avoids having to collect a double-width ghost particle layer. In addition there is the "playing it safe" option, the `Global` strategy, where the particle densities are computed in a completely global step before any domain decomposition. This approach however is *really* slow for large quantities of particles. For more information, read the documentation on the `ParticleDensityComputationStrategy` enum.
- Lib: Fix bug where the workspace storage was not cleared correctly leading to inconsistent results depending on the sequence of processed subdomains
@@ -91,7 +106,7 @@ Otherwise this release contains just some small changes to command line paramete
The biggest new feature is a domain decomposed approach for the surface reconstruction by performing a spatial decomposition of the particle set with an octree.
The resulting local patches can then be processed in parallel (leaving a single layer of boundary cells per patch untriangulated to avoid incompatible boundaries).
-Afterwards, a stitching procedure walks the octree back upwards and merges the octree leaves by averaging density values on the boundaries.
+Afterward, a stitching procedure walks the octree back upwards and merges the octree leaves by averaging density values on the boundaries.
As the library uses task based parallelism, a task for stitching can be enqueued as soon as all children of an octree node are processed.
Depending on the number of available threads and the particle data, this approach results in a speedup of 4-10x in comparison to the global parallel approach in selected benchmarks.
At the moment, this domain decomposition approach is only available when allowing to parallelize over particles using the `--mt-particles` flag.
diff --git a/CITATION.cff b/CITATION.cff
index 2cca56a..af22017 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -2,7 +2,7 @@
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
-title: splashsurf
+title: '"splashsurf" Surface Reconstruction Software'
message: >-
If you use this software in your work, please consider
citing it using these metadata.
@@ -12,10 +12,10 @@ authors:
given-names: Fabian
affiliation: RWTH Aachen University
orcid: 'https://orcid.org/0000-0001-6818-2953'
-url: 'https://www.floeschner.de/splashsurf'
+url: 'https://splashsurf.physics-simulation.org'
abstract: >-
Splashsurf is a surface reconstruction tool and framework
for reconstructing surfaces from particle data.
license: MIT
-version: 0.9.1
-date-released: '2023-04-19'
+version: 0.10.0
+date-released: '2023-09-25'
diff --git a/Cargo.lock b/Cargo.lock
index cd452f9..3183845 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -10,9 +10,9 @@ checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "aho-corasick"
-version = "1.0.5"
+version = "1.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0c378d78423fdad8089616f827526ee33c19f2fddbd5de1629152c9593ba4783"
+checksum = "ea5d730647d4fadd988536d06fecce94b7b4f2a7efdae548f1cf4b63205518ab"
dependencies = [
"memchr 2.6.3",
]
@@ -148,9 +148,9 @@ checksum = "b4682ae6287fcf752ecaabbfcc7b6f9b72aa33933dc23a554d853aea8eea8635"
[[package]]
name = "bumpalo"
-version = "3.13.0"
+version = "3.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a3e2c3daef883ecc1b5d58c15adae93470a91d425f3532ba1695849656af3fc1"
+checksum = "7f30e7476521f6f8af1a1c4c0b8cc94f0bee37d91763d0ca2665f299b6cd8aec"
[[package]]
name = "bytecount"
@@ -172,7 +172,7 @@ checksum = "965ab7eb5f8f97d2a083c799f3a1b994fc397b2fe2da5d1da1626ce15a39f2b1"
dependencies = [
"proc-macro2",
"quote",
- "syn 2.0.32",
+ "syn 2.0.37",
]
[[package]]
@@ -235,9 +235,9 @@ checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "chrono"
-version = "0.4.30"
+version = "0.4.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "defd4e7873dbddba6c7c91e199c7fcb946abc4a6a4ac3195400bcfb01b5de877"
+checksum = "7f2c685bad3eb3d45a01354cedb7d5faa66194d1d58ba6e267a8de788f79db38"
dependencies = [
"android-tzdata",
"iana-time-zone",
@@ -276,9 +276,9 @@ dependencies = [
[[package]]
name = "clap"
-version = "4.4.2"
+version = "4.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6a13b88d2c62ff462f88e4a121f17a82c1af05693a2f192b5c38d14de73c19f6"
+checksum = "b1d7b8d5ec32af0fadc644bf1fd509a688c2103b185644bb1e29d164e0703136"
dependencies = [
"clap_builder",
"clap_derive",
@@ -286,9 +286,9 @@ dependencies = [
[[package]]
name = "clap_builder"
-version = "4.4.2"
+version = "4.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2bb9faaa7c2ef94b2743a21f5a29e6f0010dff4caa69ac8e9d6cf8b6fa74da08"
+checksum = "5179bb514e4d7c2051749d8fcefa2ed6d06a9f4e6d69faf3805f5d80b8cf8d56"
dependencies = [
"anstream",
"anstyle",
@@ -305,7 +305,7 @@ dependencies = [
"heck",
"proc-macro2",
"quote",
- "syn 2.0.32",
+ "syn 2.0.37",
]
[[package]]
@@ -390,16 +390,6 @@ version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7059fff8937831a9ae6f0fe4d658ffabf58f2ca96aa9dec1c889f936f705f216"
-[[package]]
-name = "crossbeam-channel"
-version = "0.5.8"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a33c2bf77f2df06183c3aa30d1e96c0695a313d4f9c453cc3762a6db39f99200"
-dependencies = [
- "cfg-if",
- "crossbeam-utils",
-]
-
[[package]]
name = "crossbeam-deque"
version = "0.8.3"
@@ -490,9 +480,9 @@ dependencies = [
[[package]]
name = "fastrand"
-version = "2.0.0"
+version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6999dc1837253364c2ebb0704ba97994bd874e8f195d665c50b7548f6ea92764"
+checksum = "25cbce373ec4653f1a01a31e8a5e5ec0c622dc27ff9c4e6606eefef5cbbed4a5"
[[package]]
name = "fern"
@@ -581,9 +571,9 @@ checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
[[package]]
name = "hermit-abi"
-version = "0.3.2"
+version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "443144c8cdadd93ebf52ddb4056d257f5b52c04d3c804e657d19eb73fc33668b"
+checksum = "d77f7ec81a6d05a3abb01ab6eb7590f6083d08449fe5a1c8b1e620283546ccb7"
[[package]]
name = "iana-time-zone"
@@ -610,9 +600,9 @@ dependencies = [
[[package]]
name = "indicatif"
-version = "0.17.6"
+version = "0.17.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0b297dc40733f23a0e52728a58fa9489a5b7638a324932de16b41adc3ef80730"
+checksum = "fb28741c9db9a713d93deb3bb9515c20788cef5815265bee4980e87bde7e0f25"
dependencies = [
"console",
"instant",
@@ -691,9 +681,9 @@ dependencies = [
[[package]]
name = "libc"
-version = "0.2.147"
+version = "0.2.148"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b4668fb0ea861c1df094127ac5f1da3409a82116a4ba74fca2e58ef927159bb3"
+checksum = "9cdc71e17332e86d2e1d38c1f99edcb6288ee11b815fb1a4b049eaa2114d369b"
[[package]]
name = "libm"
@@ -748,9 +738,9 @@ dependencies = [
[[package]]
name = "matrixmultiply"
-version = "0.3.7"
+version = "0.3.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "090126dc04f95dc0d1c1c91f61bdd474b3930ca064c1edc8a849da2c6cbe1e77"
+checksum = "7574c1cf36da4798ab73da5b215bbf444f50718207754cb522201d78d1cd0ff2"
dependencies = [
"autocfg",
"rawpointer",
@@ -895,16 +885,6 @@ dependencies = [
"libm",
]
-[[package]]
-name = "num_cpus"
-version = "1.16.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4161fcb6d602d4d2081af7c3a45852d875a03dd337a6bfdd6e06407b61342a43"
-dependencies = [
- "hermit-abi",
- "libc",
-]
-
[[package]]
name = "number_prefix"
version = "0.4.0"
@@ -1049,9 +1029,9 @@ checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de"
[[package]]
name = "proc-macro2"
-version = "1.0.66"
+version = "1.0.67"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "18fb31db3f9bddb2ea821cde30a9f70117e3f119938b5ee630b7403aa6e2ead9"
+checksum = "3d433d9f1a3e8c1263d9456598b16fec66f4acc9a74dacffd35c7bb09b3a1328"
dependencies = [
"unicode-ident",
]
@@ -1134,9 +1114,9 @@ checksum = "60a357793950651c4ed0f3f52338f53b2f809f32d83a07f72909fa13e4c6c1e3"
[[package]]
name = "rayon"
-version = "1.7.0"
+version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1d2df5196e37bcc87abebc0053e20787d73847bb33134a69841207dd0a47f03b"
+checksum = "9c27db03db7734835b3f53954b534c91069375ce6ccaa2e065441e07d9b6cdb1"
dependencies = [
"either",
"rayon-core",
@@ -1144,14 +1124,12 @@ dependencies = [
[[package]]
name = "rayon-core"
-version = "1.11.0"
+version = "1.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4b8f95bd6966f5c87776639160a66bd8ab9895d9d4ab01ddba9fc60661aebe8d"
+checksum = "5ce3fb6ad83f861aac485e76e1985cd109d9a3713802152be56c3b1f0e0658ed"
dependencies = [
- "crossbeam-channel",
"crossbeam-deque",
"crossbeam-utils",
- "num_cpus",
]
[[package]]
@@ -1214,9 +1192,9 @@ dependencies = [
[[package]]
name = "rustix"
-version = "0.38.13"
+version = "0.38.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d7db8590df6dfcd144d22afd1b83b36c21a18d7cbc1dc4bb5295a8712e9eb662"
+checksum = "747c788e9ce8e92b12cd485c49ddf90723550b654b32508f979b71a7b1ecda4f"
dependencies = [
"bitflags 2.4.0",
"errno",
@@ -1265,9 +1243,9 @@ dependencies = [
[[package]]
name = "semver"
-version = "1.0.18"
+version = "1.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b0293b4b29daaf487284529cc2f5675b8e57c61f70167ba415a463651fd6a918"
+checksum = "ad977052201c6de01a8ef2aa3378c4bd23217a056337d1d6da40468d267a4fb0"
dependencies = [
"serde",
]
@@ -1289,14 +1267,14 @@ checksum = "4eca7ac642d82aa35b60049a6eccb4be6be75e599bd2e9adb5f875a737654af2"
dependencies = [
"proc-macro2",
"quote",
- "syn 2.0.32",
+ "syn 2.0.37",
]
[[package]]
name = "serde_json"
-version = "1.0.106"
+version = "1.0.107"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2cc66a619ed80bf7a0f6b17dd063a84b88f6dea1813737cf469aef1d081142c2"
+checksum = "6b420ce6e3d8bd882e9b243c6eed35dbc9a6110c9769e74b584e0d68d1f20c65"
dependencies = [
"itoa",
"ryu",
@@ -1333,9 +1311,9 @@ dependencies = [
[[package]]
name = "smallvec"
-version = "1.11.0"
+version = "1.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "62bb4feee49fdd9f707ef802e22365a35de4b7b299de4763d44bfea899442ff9"
+checksum = "942b4a808e05215192e39f4ab80813e599068285906cc91aa64f923db842bd5a"
[[package]]
name = "spin"
@@ -1425,9 +1403,9 @@ dependencies = [
[[package]]
name = "syn"
-version = "2.0.32"
+version = "2.0.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "239814284fd6f1a4ffe4ca893952cdd93c224b6a1571c9a9eadd670295c0c9e2"
+checksum = "7303ef2c05cd654186cb250d29049a24840ca25d2747c25c0381c8d9e2f582e8"
dependencies = [
"proc-macro2",
"quote",
@@ -1464,7 +1442,7 @@ checksum = "49922ecae66cc8a249b77e68d1d0623c1b2c514f0060c27cdc68bd62a1219d35"
dependencies = [
"proc-macro2",
"quote",
- "syn 2.0.32",
+ "syn 2.0.37",
]
[[package]]
@@ -1489,9 +1467,9 @@ dependencies = [
[[package]]
name = "typenum"
-version = "1.16.0"
+version = "1.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "497961ef93d974e23eb6f433eb5fe1b7930b659f06d12dec6fc44a8f554c0bba"
+checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
[[package]]
name = "ultraviolet"
@@ -1513,15 +1491,15 @@ dependencies = [
[[package]]
name = "unicode-ident"
-version = "1.0.11"
+version = "1.0.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "301abaae475aa91687eb82514b328ab47a211a533026cb25fc3e519b86adfc3c"
+checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b"
[[package]]
name = "unicode-width"
-version = "0.1.10"
+version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c0edd1e5b14653f783770bce4a4dabb4a5108a5370a5f5d8cfe8710c361f6c8b"
+checksum = "e51733f11c9c4f72aa0c160008246859e340b00807569a0da0e7a1079b27ba85"
[[package]]
name = "utf8parse"
@@ -1591,7 +1569,7 @@ dependencies = [
"once_cell",
"proc-macro2",
"quote",
- "syn 2.0.32",
+ "syn 2.0.37",
"wasm-bindgen-shared",
]
@@ -1613,7 +1591,7 @@ checksum = "54681b18a46765f095758388f2d0cf16eb8d4169b639ab575a8f5693af210c7b"
dependencies = [
"proc-macro2",
"quote",
- "syn 2.0.32",
+ "syn 2.0.37",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
@@ -1662,9 +1640,9 @@ checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-util"
-version = "0.1.5"
+version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178"
+checksum = "f29e6f9198ba0d26b4c9f07dbe6f9ed633e1f3d5b8b414090084349e46a52596"
dependencies = [
"winapi",
]
diff --git a/README.md b/README.md
index 2abe987..8aea2dc 100644
--- a/README.md
+++ b/README.md
@@ -25,19 +25,27 @@ reconstructed from this particle data. The next image shows a reconstructed surf
with a "smoothing length" of `2.2` times the particles radius and a cell size of `1.1` times the particle radius. The
third image shows a finer reconstruction with a cell size of `0.45` times the particle radius. These surface meshes can
then be fed into 3D rendering software such as [Blender](https://www.blender.org/) to generate beautiful water animations.
-The result might look something like this (please excuse the lack of 3D rendering skills):
+The result might look something like this:
+Note: This animation does not show the recently added smoothing features of the tool; for a more recent rendering, see [this video](https://youtu.be/2bYvaUXlBQs).
+
+---
+
**Contents**
- [The `splashsurf` CLI](#the-splashsurf-cli)
- [Introduction](#introduction)
+ - [Domain decomposition](#domain-decomposition)
+ - [Octree-based decomposition](#octree-based-decomposition)
+ - [Subdomain grid-based decomposition](#subdomain-grid-based-decomposition)
- [Notes](#notes)
- [Installation](#installation)
- [Usage](#usage)
- [Recommended settings](#recommended-settings)
+ - [Weighted surface smoothing](#weighted-surface-smoothing)
- [Benchmark example](#benchmark-example)
- [Sequences of files](#sequences-of-files)
- [Input file formats](#input-file-formats)
@@ -53,34 +61,56 @@ The result might look something like this (please excuse the lack of 3D renderin
- [The `convert` subcommand](#the-convert-subcommand)
- [License](#license)
+
# The `splashsurf` CLI
The following sections mainly focus on the CLI of `splashsurf`. For more information on the library, see the [corresponding readme](https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib) in the `splashsurf_lib` subfolder or the [`splashsurf_lib` crate](https://crates.io/crates/splashsurf_lib) on crates.io.
## Introduction
-This is a basic but high-performance implementation of a marching cubes based surface reconstruction for SPH fluid simulations (e.g performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)).
+This is a CLI to run a fast marching cubes based surface reconstruction for SPH fluid simulations (e.g. performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)).
The output of this tool is the reconstructed triangle surface mesh of the fluid.
At the moment it supports computing normals on the surface using SPH gradients and interpolating scalar and vector particle attributes to the surface.
-No additional smoothing or decimation operations are currently implemented.
-As input, it supports reading particle positions from `.vtk`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array).
-In addition, required parameters are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the surface threshold.
-
-By default, a domain decomposition of the particle set is performed using octree-based subdivision.
-The implementation first computes the density of each particle using the typical SPH approach with a cubic kernel.
-This density is then evaluated or mapped onto a sparse grid using spatial hashing in the support radius of each particle.
-This implies that memory is only allocated in areas where the fluid density is non-zero. This is in contrast to a naive approach where the marching cubes background grid is allocated for the whole domain.
-The marching cubes reconstruction is performed only in the narrowband of grid cells where the density values cross the surface threshold. Cells completely in the interior of the fluid are skipped. For more details, please refer to the [readme of the library]((https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib/README.md)).
+To get rid of the typical bumps from SPH simulations, it supports a weighted Laplacian smoothing approach [detailed below](#weighted-surface-smoothing).
+As input, it supports reading particle positions from `.vtk`/`.vtu`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array).
+Required parameters to perform a reconstruction are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the marching cubes resolution (a default iso-surface threshold is pre-configured).
+
+## Domain decomposition
+
+A naive dense marching cubes reconstruction allocating a full 3D array over the entire fluid domain quickly becomes infeasible for larger simulations.
+Instead, one could use a global hashmap where only cubes that contain non-zero fluid density values are allocated.
+This approach is used in `splashsurf` if domain decomposition is disabled completely.
+However, the global hashmap approach does not lead to good cache locality and is not well suited for parallelization (even specialized parallel map implementations like [`dashmap`](https://github.com/xacrimon/dashmap) have their performance limitations).
+To improve on this situation, `splashsurf` currently implements two domain decomposition approaches.
+
+### Octree-based decomposition
+The octree-based decomposition is currently the default approach if no other option is specified but will probably be replaced by the grid-based approach described below.
+For the octree-based decomposition an octree is built over all particles with an automatically determined target number of particles per leaf node.
+For each leaf node, a hashmap is used as outlined above.
+As each hashmap is smaller, cache locality is improved and due to the decomposition, each thread can work on its own local hashmap.
Finally, all surface patches are stitched together by walking the octree back up, resulting in a closed surface.
+Downsides of this approach are that the octree construction starting from the root and the stitching back towards the root limit the amount of parallelism during some stages.
+
+### Subdomain grid-based decomposition
+
+Since version 0.10.0, `splashsurf` implements a new domain decomposition approach called the "subdomain grid" approach, toggled with the `--subdomain-grid=on` flag.
+Here, the goal is to divide the fluid domain into subdomains with a fixed number of marching cubes cells, by default `64x64x64` cubes.
+For each subdomain a dense 3D array is allocated for the marching cubes cells.
+Of course, only subdomains that contain fluid particles are actually allocated.
+For subdomains that contain only a very small number of fluid particles (less than 5% of the largest subdomain), a hashmap is used instead to not waste too much storage.
+As most subdomains are dense, however, the marching cubes triangulation per subdomain is very fast, as it can make full use of cache locality, and the entire procedure is trivially parallelizable.
+For the stitching, we ensure that floating point operations at the subdomain boundaries are performed in the same order on both sides (this can be ensured without synchronization).
+If the field values on the subdomain boundaries are identical from both sides, the marching cubes triangulations will be topologically compatible and can be merged in a post-processing step that is also parallelizable.
+Overall, this approach should almost always be faster than the previous octree-based approach.
+
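+As a rough illustration of the subdomain grid idea, the following hypothetical Rust snippet (not the actual `splashsurf_lib` implementation) shows how a particle position can be mapped to the index of the cubic subdomain containing it; all particles with the same index are then triangulated on that subdomain's dense marching cubes grid:
+
+```rust
+/// Sketch: map a particle position to the index of its cubic subdomain.
+fn subdomain_index(
+    particle: [f64; 3],
+    domain_min: [f64; 3],
+    cube_size: f64,     // edge length of a single marching cubes cell
+    cells_per_dim: u32, // e.g. 64, see `--subdomain-cubes`
+) -> [i64; 3] {
+    // Edge length of one cubic subdomain in world units
+    let subdomain_edge = cube_size * cells_per_dim as f64;
+    let index = |i: usize| ((particle[i] - domain_min[i]) / subdomain_edge).floor() as i64;
+    [index(0), index(1), index(2)]
+}
+
+fn main() {
+    // Particles sharing a subdomain index are triangulated on a local, dense 64x64x64 grid.
+    let idx = subdomain_index([0.3, 1.7, -0.2], [0.0; 3], 0.01, 64);
+    println!("{idx:?}"); // [0, 2, -1]
+}
+```
+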
## Notes
-For small numbers of fluid particles (i.e. in the low thousands or less) the multithreaded implementation may have worse performance due to the task based parallelism and the additional overhead of domain decomposition and stitching.
+For small numbers of fluid particles (i.e. in the low thousands or less) the domain decomposition implementation may have worse performance due to the task based parallelism and the additional overhead of domain decomposition and stitching.
In this case, you can try to disable the domain decomposition. The reconstruction will then use a global approach that is parallelized using thread-local hashmaps.
For larger quantities of particles the decomposition approach is expected to be always faster.
Due to the use of hash maps and multi-threading (if enabled), the output of this implementation is not deterministic.
-In the future, flags may be added to switch the internal data structures to use binary trees for debugging purposes.
As shown below, the tool can handle the output of large simulations.
However, it was not tested with a wide range of parameters and may not be totally robust against corner-cases or extreme parameters.
@@ -103,6 +133,29 @@ Good settings for the surface reconstruction depend on the original simulation a
- `surface-threshold`: a good value depends on the selected `particle-radius` and `smoothing-length` and can be used to counteract a fluid volume increase e.g. due to a larger particle radius. In combination with the other recommended values a threshold of `0.6` seemed to work well.
- `cube-size` usually should not be chosen larger than `1.0` to avoid artifacts (e.g. single particles decaying into rhomboids), start with a value in the range of `0.75` to `0.5` and decrease/increase it if the result is too coarse or the reconstruction takes too long.
+### Weighted surface smoothing
+The CLI implements the paper ["Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender; 2023)](https://animation.rwth-aachen.de/publication/0583/) which proposes a fast smoothing approach to avoid typical bumpy surfaces while preventing loss of volume that typically occurs with simple smoothing methods.
+The following images show a rendering of a typical surface reconstruction (on the right) with visible bumps due to the particles compared to the same surface reconstruction with weighted smoothing applied (on the left):
+
+
diff --git a/splashsurf/README.md b/splashsurf/README.md
--- a/splashsurf/README.md
+++ b/splashsurf/README.md
`splashsurf` is a tool to reconstruct surfaces meshes from SPH particle data.
@@ -19,19 +19,27 @@ reconstructed from this particle data. The next image shows a reconstructed surf
with a "smoothing length" of `2.2` times the particles radius and a cell size of `1.1` times the particle radius. The
third image shows a finer reconstruction with a cell size of `0.45` times the particle radius. These surface meshes can
then be fed into 3D rendering software such as [Blender](https://www.blender.org/) to generate beautiful water animations.
-The result might look something like this (please excuse the lack of 3D rendering skills):
+The result might look something like this:
+Note: This animation does not show the recently added smoothing features of the tool; for a more recent rendering, see [this video](https://youtu.be/2bYvaUXlBQs).
+
+---
+
**Contents**
- [The `splashsurf` CLI](#the-splashsurf-cli)
- [Introduction](#introduction)
+ - [Domain decomposition](#domain-decomposition)
+ - [Octree-based decomposition](#octree-based-decomposition)
+ - [Subdomain grid-based decomposition](#subdomain-grid-based-decomposition)
- [Notes](#notes)
- [Installation](#installation)
- [Usage](#usage)
- [Recommended settings](#recommended-settings)
+ - [Weighted surface smoothing](#weighted-surface-smoothing)
- [Benchmark example](#benchmark-example)
- [Sequences of files](#sequences-of-files)
- [Input file formats](#input-file-formats)
@@ -47,34 +55,56 @@ The result might look something like this (please excuse the lack of 3D renderin
- [The `convert` subcommand](#the-convert-subcommand)
- [License](#license)
+
# The `splashsurf` CLI
The following sections mainly focus on the CLI of `splashsurf`. For more information on the library, see the [corresponding readme](https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib) in the `splashsurf_lib` subfolder or the [`splashsurf_lib` crate](https://crates.io/crates/splashsurf_lib) on crates.io.
## Introduction
-This is a basic but high-performance implementation of a marching cubes based surface reconstruction for SPH fluid simulations (e.g performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)).
+This is a CLI to run a fast marching cubes based surface reconstruction for SPH fluid simulations (e.g. performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)).
The output of this tool is the reconstructed triangle surface mesh of the fluid.
At the moment it supports computing normals on the surface using SPH gradients and interpolating scalar and vector particle attributes to the surface.
-No additional smoothing or decimation operations are currently implemented.
-As input, it supports reading particle positions from `.vtk`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array).
-In addition, required parameters are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the surface threshold.
-
-By default, a domain decomposition of the particle set is performed using octree-based subdivision.
-The implementation first computes the density of each particle using the typical SPH approach with a cubic kernel.
-This density is then evaluated or mapped onto a sparse grid using spatial hashing in the support radius of each particle.
-This implies that memory is only allocated in areas where the fluid density is non-zero. This is in contrast to a naive approach where the marching cubes background grid is allocated for the whole domain.
-The marching cubes reconstruction is performed only in the narrowband of grid cells where the density values cross the surface threshold. Cells completely in the interior of the fluid are skipped. For more details, please refer to the [readme of the library]((https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib/README.md)).
+To get rid of the typical bumps from SPH simulations, it supports a weighted Laplacian smoothing approach [detailed below](#weighted-surface-smoothing).
+As input, it supports reading particle positions from `.vtk`/`.vtu`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array).
+Required parameters to perform a reconstruction are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the marching cubes resolution (a default iso-surface threshold is pre-configured).
+
+## Domain decomposition
+
+A naive dense marching cubes reconstruction allocating a full 3D array over the entire fluid domain quickly becomes infeasible for larger simulations.
+Instead, one could use a global hashmap where only cubes that contain non-zero fluid density values are allocated.
+This approach is used in `splashsurf` if domain decomposition is disabled completely.
+However, the global hashmap approach does not lead to good cache locality and is not well suited for parallelization (even specialized parallel map implementations like [`dashmap`](https://github.com/xacrimon/dashmap) have their performance limitations).
+To improve on this situation, `splashsurf` currently implements two domain decomposition approaches.
+
+### Octree-based decomposition
+The octree-based decomposition is currently the default approach if no other option is specified but will probably be replaced by the grid-based approach described below.
+For the octree-based decomposition an octree is built over all particles with an automatically determined target number of particles per leaf node.
+For each leaf node, a hashmap is used as outlined above.
+As each hashmap is smaller, cache locality is improved and due to the decomposition, each thread can work on its own local hashmap.
Finally, all surface patches are stitched together by walking the octree back up, resulting in a closed surface.
+Downsides of this approach are that the octree construction starting from the root and the stitching back towards the root limit the amount of parallelism during some stages.
+
+### Subdomain grid-based decomposition
+
+Since version 0.10.0, `splashsurf` implements a new domain decomposition approach called the "subdomain grid" approach, toggled with the `--subdomain-grid=on` flag.
+Here, the goal is to divide the fluid domain into subdomains with a fixed number of marching cubes cells, by default `64x64x64` cubes.
+For each subdomain a dense 3D array is allocated for the marching cubes cells.
+Of course, only subdomains that contain fluid particles are actually allocated.
+For subdomains that contain only a very small number of fluid particles (less than 5% of the largest subdomain), a hashmap is used instead to not waste too much storage.
+As most subdomains are dense, however, the marching cubes triangulation per subdomain is very fast, as it can make full use of cache locality, and the entire procedure is trivially parallelizable.
+For the stitching, we ensure that floating point operations at the subdomain boundaries are performed in the same order on both sides (this can be ensured without synchronization).
+If the field values on the subdomain boundaries are identical from both sides, the marching cubes triangulations will be topologically compatible and can be merged in a post-processing step that is also parallelizable.
+Overall, this approach should almost always be faster than the previous octree-based approach.
+
## Notes
-For small numbers of fluid particles (i.e. in the low thousands or less) the multithreaded implementation may have worse performance due to the task based parallelism and the additional overhead of domain decomposition and stitching.
+For small numbers of fluid particles (i.e. in the low thousands or less) the domain decomposition implementation may have worse performance due to the task based parallelism and the additional overhead of domain decomposition and stitching.
In this case, you can try to disable the domain decomposition. The reconstruction will then use a global approach that is parallelized using thread-local hashmaps.
For larger quantities of particles the decomposition approach is expected to be always faster.
Due to the use of hash maps and multi-threading (if enabled), the output of this implementation is not deterministic.
-In the future, flags may be added to switch the internal data structures to use binary trees for debugging purposes.
As shown below, the tool can handle the output of large simulations.
However, it was not tested with a wide range of parameters and may not be totally robust against corner-cases or extreme parameters.
@@ -97,6 +127,29 @@ Good settings for the surface reconstruction depend on the original simulation a
- `surface-threshold`: a good value depends on the selected `particle-radius` and `smoothing-length` and can be used to counteract a fluid volume increase e.g. due to a larger particle radius. In combination with the other recommended values a threshold of `0.6` seemed to work well.
- `cube-size` usually should not be chosen larger than `1.0` to avoid artifacts (e.g. single particles decaying into rhomboids), start with a value in the range of `0.75` to `0.5` and decrease/increase it if the result is too coarse or the reconstruction takes too long.
+### Weighted surface smoothing
+The CLI implements the paper ["Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender; 2023)](https://animation.rwth-aachen.de/publication/0583/) which proposes a fast smoothing approach to avoid typical bumpy surfaces while preventing loss of volume that typically occurs with simple smoothing methods.
+The following images show a rendering of a typical surface reconstruction (on the right) with visible bumps due to the particles compared to the same surface reconstruction with weighted smoothing applied (on the left):
+
+
+
+
+
+You can see this rendering in motion in [this video](https://youtu.be/2bYvaUXlBQs).
+To apply this smoothing, we recommend the following settings:
+ - `--mesh-smoothing-weights=on`: This enables the use of special weights during the smoothing process that preserve fluid details. For more information we refer to the [paper](https://animation.rwth-aachen.de/publication/0583/).
+ - `--mesh-smoothing-iters=25`: This enables smoothing of the output mesh. The individual iterations are relatively fast and 25 iterations appeared to strike a good balance between an initially bumpy surface and potential over-smoothing.
+ - `--mesh-cleanup=on`/`--decimate-barnacles=on`: One of these options should be used when applying smoothing, otherwise artifacts can appear on the surface (for more details see the paper). The `mesh-cleanup` flag enables a general purpose marching cubes mesh cleanup procedure that removes small sliver triangles everywhere on the mesh. The `decimate-barnacles` flag enables a more targeted decimation that only removes specific triangle configurations that are problematic for the smoothing. The former approach results in a "nicer" mesh overall but can be slower than the latter.
+ - `--normals-smoothing-iters=10`: If normals are being exported (with `--normals=on`), this results in an even smoother appearance during rendering.
+
+For the reconstruction parameters used in conjunction with the weighted smoothing, we recommend values close to the simulation parameters.
+That means selecting the same particle radius as in the simulation, a corresponding smoothing length (e.g. for SPlisHSPlasH a value of `2.0`), a surface-threshold between `0.6` and `0.7` and a cube size usually between `0.5` and `1.0`.
+
+A full invocation of the tool might look like this:
+```
+splashsurf reconstruct particles.vtk -r=0.025 -l=2.0 -c=0.5 -t=0.6 --subdomain-grid=on --mesh-cleanup=on --mesh-smoothing-weights=on --mesh-smoothing-iters=25 --normals=on --normals-smoothing-iters=10
+```
+
### Benchmark example
For example:
```
@@ -173,9 +226,19 @@ Note that the tool collects all existing filenames as soon as the command is inv
The first and last file of a sequences that should be processed can be specified with the `-s`/`--start-index` and/or `-e`/`--end-index` arguments.
By specifying the flag `--mt-files=on`, several files can be processed in parallel.
-If this is enabled, you should ideally also set `--mt-particles=off` as enabling both will probably degrade performance.
+If this is enabled, you should also set `--mt-particles=off` as enabling both will probably degrade performance.
The combination of `--mt-files=on` and `--mt-particles=off` can be faster if many files with only few particles have to be processed.
+The number of threads can be influenced using the `--num-threads`/`-n` argument or the `RAYON_NUM_THREADS` environment variable.
+
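+For illustration, a sequence could be processed with file-level parallelism on a fixed number of threads as follows (the `particles_{}.vtk` placeholder pattern and the parameter values are assumptions for this example, adjust them to your own data):
+```
+splashsurf reconstruct particles_{}.vtk -r=0.025 -l=2.0 -c=0.5 -t=0.6 --mt-files=on --mt-particles=off -n=8
+```
+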
+**NOTE:** Currently, some functions do not have a sequential implementation and always parallelize over the particles or the mesh/domain.
+This includes:
+ - the new "subdomain-grid" domain decomposition approach, as an alternative to the previous octree-based approach
+ - some post-processing functionality (interpolation of smoothing weights, interpolation of normals & other fluid attributes)
+
+Using the `--mt-particles=off` argument does not have an effect on these parts of the surface reconstruction.
+For now, it is therefore recommended to not parallelize over multiple files if this functionality is used.
+
## Input file formats
### VTK
@@ -236,16 +299,18 @@ The file format is inferred from the extension of output filename.
### The `reconstruct` command
```
-splashsurf-reconstruct (v0.9.3) - Reconstruct a surface from particle data
+splashsurf-reconstruct (v0.10.0) - Reconstruct a surface from particle data
Usage: splashsurf reconstruct [OPTIONS] --particle-radius --smoothing-length --cube-size
Options:
+ -q, --quiet Enable quiet mode (no output except for severe panic messages), overrides verbosity level
+ -v... Print more verbose output, use multiple "v"s for even more verbose output (-v, -vv)
-h, --help Print help
-V, --version Print version
Input/output:
- -o, --output-file Filename for writing the reconstructed surface to disk (default: "{original_filename}_surface.vtk")
+ -o, --output-file Filename for writing the reconstructed surface to disk (supported formats: VTK, PLY, OBJ, default: "{original_filename}_surface.vtk")
--output-dir Optional base directory for all output files (default: current working directory)
-s, --start-index Index of the first input file to process when processing a sequence of files (default: lowest index of the sequence)
-e, --end-index Index of the last input file to process when processing a sequence of files (default: highest index of the sequence)
@@ -262,39 +327,79 @@ Numerical reconstruction parameters:
The cube edge length used for marching cubes in multiplies of the particle radius, corresponds to the cell size of the implicit background grid
-t, --surface-threshold
The iso-surface threshold for the density, i.e. the normalized value of the reconstructed density level that indicates the fluid surface (in multiplies of the rest density) [default: 0.6]
- --domain-min
+ --particle-aabb-min
Lower corner of the domain where surface reconstruction should be performed (requires domain-max to be specified)
- --domain-max
+ --particle-aabb-max
Upper corner of the domain where surface reconstruction should be performed (requires domain-min to be specified)
Advanced parameters:
- -d, --double-precision= Whether to enable the use of double precision for all computations [default: off] [possible values: off, on]
- --mt-files= Flag to enable multi-threading to process multiple input files in parallel [default: off] [possible values: off, on]
- --mt-particles= Flag to enable multi-threading for a single input file by processing chunks of particles in parallel [default: on] [possible values: off, on]
+ -d, --double-precision= Enable the use of double precision for all computations [default: off] [possible values: off, on]
+ --mt-files= Enable multi-threading to process multiple input files in parallel (NOTE: Currently, the subdomain-grid domain decomposition approach and some post-processing functions including interpolation do not have sequential versions and therefore do not work well with this option enabled) [default: off] [possible values: off, on]
+ --mt-particles= Enable multi-threading for a single input file by processing chunks of particles in parallel [default: on] [possible values: off, on]
-n, --num-threads Set the number of threads for the worker thread pool
-Octree (domain decomposition) parameters:
+Domain decomposition (octree or grid) parameters:
+ --subdomain-grid=
+ Enable spatial decomposition using a regular grid-based approach [default: off] [possible values: off, on]
+ --subdomain-cubes
+ Each subdomain will be a cube consisting of this number of MC cube cells along each coordinate axis [default: 64]
--octree-decomposition=
- Whether to enable spatial decomposition using an octree (faster) instead of a global approach [default: on] [possible values: off, on]
+ Enable spatial decomposition using an octree (faster) instead of a global approach [default: on] [possible values: off, on]
--octree-stitch-subdomains=
- Whether to enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) [default: on] [possible values: off, on]
+ Enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) [default: on] [possible values: off, on]
--octree-max-particles
The maximum number of particles for leaf nodes of the octree, default is to compute it based on the number of threads and particles
--octree-ghost-margin-factor
Safety factor applied to the kernel compact support radius when it's used as a margin to collect ghost particles in the leaf nodes when performing the spatial decomposition
--octree-global-density=
- Whether to compute particle densities in a global step before domain decomposition (slower) [default: off] [possible values: off, on]
+ Enable computing particle densities in a global step before domain decomposition (slower) [default: off] [possible values: off, on]
--octree-sync-local-density=
- Whether to compute particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). Note: if both this and global particle density computation is disabled the ghost particle margin has to be increased to at least 2.0 to compute correct density values for ghost particles [default: on] [possible values: off, on]
+ Enable computing particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). Note: if both this and global particle density computation is disabled the ghost particle margin has to be increased to at least 2.0 to compute correct density values for ghost particles [default: on] [possible values: off, on]
-Interpolation:
+Interpolation & normals:
--normals=
- Whether to compute surface normals at the mesh vertices and write them to the output file [default: off] [possible values: off, on]
+ Enable computing surface normals at the mesh vertices and write them to the output file [default: off] [possible values: off, on]
--sph-normals=
- Whether to compute the normals using SPH interpolation (smoother and more true to actual fluid surface, but slower) instead of just using area weighted triangle normals [default: on] [possible values: off, on]
+ Enable computing the normals using SPH interpolation instead of using the area weighted triangle normals [default: off] [possible values: off, on]
+ --normals-smoothing-iters
+ Number of smoothing iterations to run on the normal field if normal interpolation is enabled (disabled by default)
+ --output-raw-normals=
+ Enable writing raw normals without smoothing to the output mesh if normal smoothing is enabled [default: off] [possible values: off, on]
--interpolate-attributes
List of point attribute field names from the input file that should be interpolated to the reconstructed surface. Currently this is only supported for VTK and VTU input files
+Postprocessing:
+ --mesh-cleanup=
+ Enable MC specific mesh decimation/simplification which removes bad quality triangles typically generated by MC [default: off] [possible values: off, on]
+ --decimate-barnacles=
+ Enable decimation of some typical bad marching cubes triangle configurations (resulting in "barnacles" after Laplacian smoothing) [default: off] [possible values: off, on]
+ --keep-verts=
+ Enable keeping vertices without connectivity during decimation instead of filtering them out (faster and helps with debugging) [default: off] [possible values: off, on]
+ --mesh-smoothing-iters
+ Number of smoothing iterations to run on the reconstructed mesh
+ --mesh-smoothing-weights=
+ Enable feature weights for mesh smoothing if mesh smoothing is enabled. Preserves isolated particles even under strong smoothing [default: off] [possible values: off, on]
+ --mesh-smoothing-weights-normalization
+ Normalization value from weighted number of neighbors to mesh smoothing weights [default: 13.0]
+ --output-smoothing-weights=
+ Enable writing the smoothing weights as a vertex attribute to the output mesh file [default: off] [possible values: off, on]
+ --generate-quads=
+ Enable trying to convert triangles to quads if they meet quality criteria [default: off] [possible values: off, on]
+ --quad-max-edge-diag-ratio
+ Maximum allowed ratio of quad edge lengths to its diagonals to merge two triangles to a quad (inverse is used for minimum) [default: 1.75]
+ --quad-max-normal-angle
+ Maximum allowed angle (in degrees) between triangle normals to merge them to a quad [default: 10]
+ --quad-max-interior-angle
+ Maximum allowed vertex interior angle (in degrees) inside of a quad to merge two triangles to a quad [default: 135]
+ --mesh-aabb-min
+ Lower corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-max to be specified)
+ --mesh-aabb-max
+ Upper corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-min to be specified)
+ --mesh-aabb-clamp-verts=
+ Enable clamping of vertices outside of the specified mesh AABB to the AABB (only has an effect if mesh-aabb-min/max are specified) [default: off] [possible values: off, on]
+ --output-raw-mesh=
+ Enable writing the raw reconstructed mesh before applying any post-processing steps [default: off] [possible values: off, on]
+
Debug options:
--output-dm-points
Optional filename for writing the point cloud representation of the intermediate density map to disk
@@ -303,7 +408,13 @@ Debug options:
--output-octree
Optional filename for writing the octree used to partition the particles to disk
--check-mesh=
- Whether to check the final mesh for topological problems such as holes (note that when stitching is disabled this will lead to a lot of reported problems) [default: off] [possible values: off, on]
+ Enable checking the final mesh for holes and non-manifold edges and vertices [default: off] [possible values: off, on]
+ --check-mesh-closed=
+ Enable checking the final mesh for holes [default: off] [possible values: off, on]
+ --check-mesh-manifold=
+ Enable checking the final mesh for non-manifold edges and vertices [default: off] [possible values: off, on]
+ --check-mesh-debug=
+ Enable debug output for the check-mesh operations (has no effect if no other check-mesh option is enabled) [default: off] [possible values: off, on]
```
### The `convert` subcommand
@@ -337,6 +448,6 @@ Options:
# License
-For license information of this project, see the LICENSE file.
+For license information of this project, see the [LICENSE](LICENSE) file.
The splashsurf logo is based on two graphics ([1](https://www.svgrepo.com/svg/295647/wave), [2](https://www.svgrepo.com/svg/295652/surfboard-surfboard)) published on SVG Repo under a CC0 ("No Rights Reserved") license.
The dragon model shown in the images on this page are part of the ["Stanford 3D Scanning Repository"](https://graphics.stanford.edu/data/3Dscanrep/).
diff --git a/splashsurf/src/io.rs b/splashsurf/src/io.rs
index 9232c73..c58f805 100644
--- a/splashsurf/src/io.rs
+++ b/splashsurf/src/io.rs
@@ -1,14 +1,12 @@
use crate::io::vtk_format::VtkFile;
use anyhow::{anyhow, Context};
use log::{info, warn};
-use splashsurf_lib::mesh::MeshAttribute;
+use splashsurf_lib::mesh::{
+ IntoVtkUnstructuredGridPiece, Mesh3d, MeshAttribute, MeshWithData, TriMesh3d,
+};
use splashsurf_lib::nalgebra::Vector3;
use splashsurf_lib::Real;
use splashsurf_lib::{io, profile};
-use splashsurf_lib::{
- mesh::{Mesh3d, MeshWithData, TriMesh3d},
- vtkio::model::DataSet,
-};
use std::collections::HashSet;
use std::fs::File;
use std::io::{BufWriter, Write};
@@ -245,7 +243,7 @@ pub fn write_mesh<'a, R: Real, MeshT: Mesh3d, P: AsRef>(
_format_params: &OutputFormatParameters,
) -> Result<(), anyhow::Error>
where
- &'a MeshWithData: Into,
+ for<'b> &'b MeshWithData: IntoVtkUnstructuredGridPiece,
{
let output_file = output_file.as_ref();
info!(
diff --git a/splashsurf/src/reconstruction.rs b/splashsurf/src/reconstruction.rs
index b155c99..6b0ca4c 100644
--- a/splashsurf/src/reconstruction.rs
+++ b/splashsurf/src/reconstruction.rs
@@ -1,17 +1,14 @@
use crate::{io, logging};
use anyhow::{anyhow, Context};
-use arguments::{
- ReconstructionRunnerArgs, ReconstructionRunnerPathCollection, ReconstructionRunnerPaths,
-};
use clap::value_parser;
use indicatif::{ProgressBar, ProgressStyle};
use log::info;
use rayon::prelude::*;
use splashsurf_lib::mesh::{AttributeData, Mesh3d, MeshAttribute, MeshWithData, PointCloud3d};
use splashsurf_lib::nalgebra::{Unit, Vector3};
-use splashsurf_lib::profile;
use splashsurf_lib::sph_interpolation::SphInterpolator;
-use splashsurf_lib::{density_map, Index, Real};
+use splashsurf_lib::{density_map, profile, Aabb3d, Index, Real};
+use std::borrow::Cow;
use std::convert::TryFrom;
use std::path::PathBuf;
@@ -24,7 +21,7 @@ static ARGS_BASIC: &str = "Numerical reconstruction parameters";
static ARGS_ADV: &str = "Advanced parameters";
static ARGS_OCTREE: &str = "Domain decomposition (octree or grid) parameters";
static ARGS_DEBUG: &str = "Debug options";
-static ARGS_INTERP: &str = "Interpolation";
+static ARGS_INTERP: &str = "Interpolation & normals";
static ARGS_POSTPROC: &str = "Postprocessing";
static ARGS_OTHER: &str = "Remaining options";
@@ -65,7 +62,7 @@ pub struct ReconstructSubcommandArgs {
#[arg(help_heading = ARGS_BASIC, short = 't', long, default_value = "0.6")]
pub surface_threshold: f64,
- /// Whether to enable the use of double precision for all computations
+ /// Enable the use of double precision for all computations
#[arg(
help_heading = ARGS_ADV,
short = 'd',
@@ -97,7 +94,7 @@ pub struct ReconstructSubcommandArgs {
)]
pub particle_aabb_max: Option>,
- /// Flag to enable multi-threading to process multiple input files in parallel
+ /// Enable multi-threading to process multiple input files in parallel (NOTE: Currently, the subdomain-grid domain decomposition approach and some post-processing functions including interpolation do not have sequential versions and therefore do not work well with this option enabled)
#[arg(
help_heading = ARGS_ADV,
long = "mt-files",
@@ -107,7 +104,7 @@ pub struct ReconstructSubcommandArgs {
require_equals = true
)]
pub parallelize_over_files: Switch,
- /// Flag to enable multi-threading for a single input file by processing chunks of particles in parallel
+ /// Enable multi-threading for a single input file by processing chunks of particles in parallel
#[arg(
help_heading = ARGS_ADV,
long = "mt-particles",
@@ -121,7 +118,7 @@ pub struct ReconstructSubcommandArgs {
#[arg(help_heading = ARGS_ADV, long, short = 'n')]
pub num_threads: Option,
- /// Whether to enable spatial decomposition using a regular grid-based approach
+ /// Enable spatial decomposition using a regular grid-based approach
#[arg(
help_heading = ARGS_OCTREE,
long,
@@ -135,7 +132,7 @@ pub struct ReconstructSubcommandArgs {
#[arg(help_heading = ARGS_OCTREE, long, default_value="64")]
pub subdomain_cubes: u32,
- /// Whether to enable spatial decomposition using an octree (faster) instead of a global approach
+ /// Enable spatial decomposition using an octree (faster) instead of a global approach
#[arg(
help_heading = ARGS_OCTREE,
long,
@@ -145,7 +142,7 @@ pub struct ReconstructSubcommandArgs {
require_equals = true
)]
pub octree_decomposition: Switch,
- /// Whether to enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed)
+ /// Enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed)
#[arg(
help_heading = ARGS_OCTREE,
long,
@@ -161,7 +158,7 @@ pub struct ReconstructSubcommandArgs {
/// Safety factor applied to the kernel compact support radius when it's used as a margin to collect ghost particles in the leaf nodes when performing the spatial decomposition
#[arg(help_heading = ARGS_OCTREE, long)]
pub octree_ghost_margin_factor: Option,
- /// Whether to compute particle densities in a global step before domain decomposition (slower)
+ /// Enable computing particle densities in a global step before domain decomposition (slower)
#[arg(
help_heading = ARGS_OCTREE,
long,
@@ -171,7 +168,7 @@ pub struct ReconstructSubcommandArgs {
require_equals = true
)]
pub octree_global_density: Switch,
- /// Whether to compute particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended).
+ /// Enable computing particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended).
/// Note: if both this and global particle density computation are disabled, the ghost particle margin has to be increased to at least 2.0
/// to compute correct density values for ghost particles.
#[arg(
@@ -184,7 +181,7 @@ pub struct ReconstructSubcommandArgs {
)]
pub octree_sync_local_density: Switch,
- /// Whether to compute surface normals at the mesh vertices and write them to the output file
+ /// Enable computing surface normals at the mesh vertices and writing them to the output file
#[arg(
help_heading = ARGS_INTERP,
long,
@@ -194,21 +191,111 @@ pub struct ReconstructSubcommandArgs {
require_equals = true
)]
pub normals: Switch,
- /// Whether to compute the normals using SPH interpolation (smoother and more true to actual fluid surface, but slower) instead of just using area weighted triangle normals
+ /// Enable computing the normals using SPH interpolation instead of using the area weighted triangle normals
#[arg(
help_heading = ARGS_INTERP,
long,
- default_value = "on",
+ default_value = "off",
value_name = "off|on",
ignore_case = true,
require_equals = true
)]
pub sph_normals: Switch,
+ /// Number of smoothing iterations to run on the normal field if normal computation is enabled (disabled by default)
+ #[arg(help_heading = ARGS_INTERP, long)]
+ pub normals_smoothing_iters: Option,
+ /// Enable writing raw normals without smoothing to the output mesh if normal smoothing is enabled
+ #[arg(
+ help_heading = ARGS_INTERP,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub output_raw_normals: Switch,
/// List of point attribute field names from the input file that should be interpolated to the reconstructed surface. Currently this is only supported for VTK and VTU input files.
#[arg(help_heading = ARGS_INTERP, long)]
pub interpolate_attributes: Vec,
- /// Lower corner of the bounding-box for the surface mesh, mesh outside gets cut away (requires mesh-max to be specified)
+ /// Enable marching cubes (MC) specific mesh decimation/simplification which removes the bad-quality triangles typically generated by MC
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub mesh_cleanup: Switch,
+ /// Enable decimation of some typical bad marching cubes triangle configurations (which otherwise result in "barnacles" after Laplacian smoothing)
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub decimate_barnacles: Switch,
+ /// Enable keeping vertices without connectivity during decimation instead of filtering them out (faster and helps with debugging)
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub keep_verts: Switch,
+ /// Number of smoothing iterations to run on the reconstructed mesh
+ #[arg(help_heading = ARGS_POSTPROC, long)]
+ pub mesh_smoothing_iters: Option,
+ /// Enable feature weights for mesh smoothing if mesh smoothing is enabled. Preserves isolated particles even under strong smoothing.
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub mesh_smoothing_weights: Switch,
+ /// Normalization value used to map the weighted number of neighbors to mesh smoothing weights
+ #[arg(help_heading = ARGS_POSTPROC, long, default_value = "13.0")]
+ pub mesh_smoothing_weights_normalization: f64,
+ /// Enable writing the smoothing weights as a vertex attribute to the output mesh file
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub output_smoothing_weights: Switch,
+
+ /// Enable trying to convert triangles to quads if they meet quality criteria
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub generate_quads: Switch,
+ /// Maximum allowed ratio of a quad's edge lengths to its diagonals when merging two triangles into a quad (the inverse is used as the minimum)
+ #[arg(help_heading = ARGS_POSTPROC, long, default_value = "1.75")]
+ pub quad_max_edge_diag_ratio: f64,
+ /// Maximum allowed angle (in degrees) between triangle normals to merge them to a quad
+ #[arg(help_heading = ARGS_POSTPROC, long, default_value = "10")]
+ pub quad_max_normal_angle: f64,
+ /// Maximum allowed vertex interior angle (in degrees) inside of a quad to merge two triangles to a quad
+ #[arg(help_heading = ARGS_POSTPROC, long, default_value = "135")]
+ pub quad_max_interior_angle: f64,
+
+ /// Lower corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-max to be specified)
#[arg(
help_heading = ARGS_POSTPROC,
long,
@@ -218,7 +305,7 @@ pub struct ReconstructSubcommandArgs {
requires = "mesh_aabb_max",
)]
pub mesh_aabb_min: Option>,
- /// Upper corner of the bounding-box for the surface mesh, mesh outside gets cut away (requires mesh-min to be specified)
+ /// Upper corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-min to be specified)
#[arg(
help_heading = ARGS_POSTPROC,
long,
@@ -228,6 +315,27 @@ pub struct ReconstructSubcommandArgs {
requires = "mesh_aabb_min",
)]
pub mesh_aabb_max: Option>,
+ /// Enable clamping of vertices outside of the specified mesh AABB to the AABB (only has an effect if mesh-aabb-min/max are specified)
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub mesh_aabb_clamp_verts: Switch,
+
+ /// Enable writing the raw reconstructed mesh before applying any post-processing steps
+ #[arg(
+ help_heading = ARGS_POSTPROC,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub output_raw_mesh: Switch,
/// Optional filename for writing the point cloud representation of the intermediate density map to disk
#[arg(help_heading = ARGS_DEBUG, long, value_parser = value_parser!(PathBuf))]
@@ -238,7 +346,7 @@ pub struct ReconstructSubcommandArgs {
/// Optional filename for writing the octree used to partition the particles to disk
#[arg(help_heading = ARGS_DEBUG, long, value_parser = value_parser!(PathBuf))]
pub output_octree: Option,
- /// Whether to check the final mesh for topological problems such as holes (note that when stitching is disabled this will lead to a lot of reported problems)
+ /// Enable checking the final mesh for holes and non-manifold edges and vertices
#[arg(
help_heading = ARGS_DEBUG,
long,
@@ -248,6 +356,36 @@ pub struct ReconstructSubcommandArgs {
require_equals = true
)]
pub check_mesh: Switch,
+ /// Enable checking the final mesh for holes
+ #[arg(
+ help_heading = ARGS_DEBUG,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub check_mesh_closed: Switch,
+ /// Enable checking the final mesh for non-manifold edges and vertices
+ #[arg(
+ help_heading = ARGS_DEBUG,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub check_mesh_manifold: Switch,
+ /// Enable debug output for the check-mesh operations (has no effect if no other check-mesh option is enabled)
+ #[arg(
+ help_heading = ARGS_DEBUG,
+ long,
+ default_value = "off",
+ value_name = "off|on",
+ ignore_case = true,
+ require_equals = true
+ )]
+ pub check_mesh_debug: Switch,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, clap::ValueEnum)]
@@ -342,15 +480,33 @@ mod arguments {
use walkdir::WalkDir;
pub struct ReconstructionRunnerPostprocessingArgs {
- pub check_mesh: bool,
+ pub check_mesh_closed: bool,
+ pub check_mesh_manifold: bool,
+ pub check_mesh_debug: bool,
+ pub mesh_cleanup: bool,
+ pub decimate_barnacles: bool,
+ pub keep_vertices: bool,
pub compute_normals: bool,
pub sph_normals: bool,
+ pub normals_smoothing_iters: Option,
pub interpolate_attributes: Vec,
+ pub mesh_smoothing_iters: Option,
+ pub mesh_smoothing_weights: bool,
+ pub mesh_smoothing_weights_normalization: f64,
+ pub generate_quads: bool,
+ pub quad_max_edge_diag_ratio: f64,
+ pub quad_max_normal_angle: f64,
+ pub quad_max_interior_angle: f64,
+ pub output_mesh_smoothing_weights: bool,
+ pub output_raw_normals: bool,
+ pub output_raw_mesh: bool,
pub mesh_aabb: Option>,
+ pub mesh_aabb_clamp_vertices: bool,
}
/// All arguments that can be supplied to the surface reconstruction tool converted to useful types
pub struct ReconstructionRunnerArgs {
+ /// Parameters passed directly to the surface reconstruction
pub params: splashsurf_lib::Parameters,
pub use_double_precision: bool,
pub io_params: io::FormatParameters,
@@ -468,6 +624,7 @@ mod arguments {
particle_aabb,
enable_multi_threading: args.parallelize_over_particles.into_bool(),
spatial_decomposition,
+ global_neighborhood_list: args.mesh_smoothing_weights.into_bool(),
};
// Optionally initialize thread pool
@@ -476,11 +633,30 @@ mod arguments {
}
let postprocessing = ReconstructionRunnerPostprocessingArgs {
- check_mesh: args.check_mesh.into_bool(),
+ check_mesh_closed: args.check_mesh.into_bool()
+ || args.check_mesh_closed.into_bool(),
+ check_mesh_manifold: args.check_mesh.into_bool()
+ || args.check_mesh_manifold.into_bool(),
+ check_mesh_debug: args.check_mesh_debug.into_bool(),
+ mesh_cleanup: args.mesh_cleanup.into_bool(),
+ decimate_barnacles: args.decimate_barnacles.into_bool(),
+ keep_vertices: args.keep_verts.into_bool(),
compute_normals: args.normals.into_bool(),
sph_normals: args.sph_normals.into_bool(),
+ normals_smoothing_iters: args.normals_smoothing_iters,
interpolate_attributes: args.interpolate_attributes.clone(),
+ mesh_smoothing_iters: args.mesh_smoothing_iters,
+ mesh_smoothing_weights: args.mesh_smoothing_weights.into_bool(),
+ mesh_smoothing_weights_normalization: args.mesh_smoothing_weights_normalization,
+ generate_quads: args.generate_quads.into_bool(),
+ quad_max_edge_diag_ratio: args.quad_max_edge_diag_ratio,
+ quad_max_normal_angle: args.quad_max_normal_angle,
+ quad_max_interior_angle: args.quad_max_interior_angle,
+ output_mesh_smoothing_weights: args.output_smoothing_weights.into_bool(),
+ output_raw_normals: args.output_raw_normals.into_bool(),
+ output_raw_mesh: args.output_raw_mesh.into_bool(),
mesh_aabb,
+ mesh_aabb_clamp_vertices: args.mesh_aabb_clamp_verts.into_bool(),
};
Ok(ReconstructionRunnerArgs {
@@ -863,84 +1039,294 @@ pub(crate) fn reconstruction_pipeline_generic(
// Perform the surface reconstruction
let reconstruction =
- splashsurf_lib::reconstruct_surface::(particle_positions.as_slice(), &params)?;
+ splashsurf_lib::reconstruct_surface::(particle_positions.as_slice(), params)?;
let grid = reconstruction.grid();
- let mesh = reconstruction.mesh();
+ let mut mesh_with_data = MeshWithData::new(Cow::Borrowed(reconstruction.mesh()));
- let mesh = if let Some(aabb) = &postprocessing.mesh_aabb {
- profile!("clamp mesh to aabb");
- info!("Post-processing: Clamping mesh to AABB...");
+ if postprocessing.output_raw_mesh {
+ profile!("write surface mesh to file");
- let mut mesh = mesh.clone();
- mesh.clamp_with_aabb(
- &aabb
- .try_convert()
- .ok_or_else(|| anyhow!("Failed to convert mesh AABB"))?,
+ let output_path = paths
+ .output_file
+ .parent()
+ // Add a trailing separator if the parent is non-empty
+ .map(|p| p.join(""))
+ .unwrap_or_else(PathBuf::new);
+ let output_filename = format!(
+ "raw_{}",
+ paths.output_file.file_name().unwrap().to_string_lossy()
);
- mesh
- } else {
- mesh.clone()
- };
-
- // Add normals to mesh if requested
- let mesh = if postprocessing.compute_normals || !attributes.is_empty() {
- profile!("compute normals");
+ let raw_output_file = output_path.join(output_filename);
info!(
- "Constructing global acceleration structure for SPH interpolation to {} vertices...",
- mesh.vertices.len()
+ "Writing unprocessed surface mesh to \"{}\"...",
+ raw_output_file.display()
);
- let particle_rest_density = params.rest_density;
- let particle_rest_volume = R::from_f64((4.0 / 3.0) * std::f64::consts::PI).unwrap()
- * params.particle_radius.powi(3);
- let particle_rest_mass = particle_rest_volume * particle_rest_density;
-
- let particle_densities = reconstruction
- .particle_densities()
- .ok_or_else(|| anyhow::anyhow!("Particle densities were not returned by surface reconstruction but are required for SPH normal computation"))?
- .as_slice();
- assert_eq!(
- particle_positions.len(),
- particle_densities.len(),
- "There has to be one density value per particle"
- );
+ io::write_mesh(&mesh_with_data, raw_output_file, &io_params.output).with_context(|| {
+ anyhow!(
+ "Failed to write raw output mesh to file \"{}\"",
+ paths.output_file.display()
+ )
+ })?;
+ }
- let interpolator = SphInterpolator::new(
- &particle_positions,
- particle_densities,
- particle_rest_mass,
- params.compact_support_radius,
- );
+ // Perform post-processing
+ {
+ profile!("postprocessing");
+
+ let mut vertex_connectivity = None;
+
+ if postprocessing.mesh_cleanup {
+ info!("Post-processing: Performing mesh cleanup");
+ let tris_before = mesh_with_data.mesh.triangles.len();
+ let verts_before = mesh_with_data.mesh.vertices.len();
+ vertex_connectivity = Some(splashsurf_lib::postprocessing::marching_cubes_cleanup(
+ mesh_with_data.mesh.to_mut(),
+ reconstruction.grid(),
+ 5,
+ postprocessing.keep_vertices,
+ ));
+ let tris_after = mesh_with_data.mesh.triangles.len();
+ let verts_after = mesh_with_data.mesh.vertices.len();
+ info!("Post-processing: Cleanup reduced number of vertices to {:.2}% and number of triangles to {:.2}% of original mesh.", (verts_after as f64 / verts_before as f64) * 100.0, (tris_after as f64 / tris_before as f64) * 100.0)
+ }
+
+ // Decimate mesh if requested
+ if postprocessing.decimate_barnacles {
+ info!("Post-processing: Performing decimation");
+ vertex_connectivity = Some(splashsurf_lib::postprocessing::decimation(
+ mesh_with_data.mesh.to_mut(),
+ postprocessing.keep_vertices,
+ ));
+ }
+
+ // Initialize SPH interpolator if required later
+ let interpolator_required = postprocessing.mesh_smoothing_weights
+ || postprocessing.sph_normals
+ || !attributes.is_empty();
+ let interpolator = if interpolator_required {
+ profile!("initialize interpolator");
+ info!("Post-processing: Initializing interpolator...");
+
+ info!(
+ "Constructing global acceleration structure for SPH interpolation to {} vertices...",
+ mesh_with_data.vertices().len()
+ );
+
+ let particle_rest_density = params.rest_density;
+ let particle_rest_volume = R::from_f64((4.0 / 3.0) * std::f64::consts::PI).unwrap()
+ * params.particle_radius.powi(3);
+ let particle_rest_mass = particle_rest_volume * particle_rest_density;
+
+ let particle_densities = reconstruction
+ .particle_densities()
+ .ok_or_else(|| anyhow::anyhow!("Particle densities were not returned by surface reconstruction but are required for SPH normal computation"))?
+ .as_slice();
+ assert_eq!(
+ particle_positions.len(),
+ particle_densities.len(),
+ "There has to be one density value per particle"
+ );
+
+ Some(SphInterpolator::new(
+ &particle_positions,
+ particle_densities,
+ particle_rest_mass,
+ params.compact_support_radius,
+ ))
+ } else {
+ None
+ };
- let mut mesh_with_data = MeshWithData::new(mesh);
- let mesh = &mesh_with_data.mesh;
+ // Compute mesh vertex-vertex connectivity map if required later
+ let vertex_connectivity_required = postprocessing.normals_smoothing_iters.is_some()
+ || postprocessing.mesh_smoothing_iters.is_some();
+ if vertex_connectivity.is_none() && vertex_connectivity_required {
+ vertex_connectivity = Some(mesh_with_data.mesh.vertex_vertex_connectivity());
+ }
- // Compute normals if requested
+ // Compute smoothing weights if requested
+ let smoothing_weights = if postprocessing.mesh_smoothing_weights {
+ profile!("compute smoothing weights");
+ info!("Post-processing: Computing smoothing weights...");
+
+ // TODO: Switch between parallel/single threaded
+ // TODO: Re-use data from reconstruction?
+
+ // Global neighborhood search
+ let nl = reconstruction
+ .particle_neighbors()
+ .map(|nl| Cow::Borrowed(nl))
+ .unwrap_or_else(||
+ {
+ let search_radius = params.compact_support_radius;
+
+ let mut domain = Aabb3d::from_points(particle_positions.as_slice());
+ domain.grow_uniformly(search_radius);
+
+ let mut nl = Vec::new();
+ splashsurf_lib::neighborhood_search::neighborhood_search_spatial_hashing_parallel::(
+ &domain,
+ particle_positions.as_slice(),
+ search_radius,
+ &mut nl,
+ );
+ assert_eq!(nl.len(), particle_positions.len());
+ Cow::Owned(nl)
+ }
+ );
+
+ // Compute weighted neighbor count
+ let squared_r = params.compact_support_radius * params.compact_support_radius;
+ let weighted_ncounts = nl
+ .par_iter()
+ .enumerate()
+ .map(|(i, nl)| {
+ nl.iter()
+ .copied()
+ .map(|j| {
+ let dist =
+ (particle_positions[i] - particle_positions[j]).norm_squared();
+ let weight = R::one() - (dist / squared_r).clamp(R::zero(), R::one());
+ return weight;
+ })
+ .fold(R::zero(), R::add)
+ })
+ .collect::>();
+
+ let vertex_weighted_num_neighbors = {
+ profile!("interpolate weighted neighbor counts");
+ interpolator
+ .as_ref()
+ .expect("interpolator is required")
+ .interpolate_scalar_quantity(
+ weighted_ncounts.as_slice(),
+ &mesh_with_data.vertices(),
+ true,
+ )
+ };
+
+ let smoothing_weights = {
+ let offset = R::zero();
+ let normalization =
+ R::from_f64(postprocessing.mesh_smoothing_weights_normalization).expect(
+ "smoothing weight normalization value cannot be represented as Real type",
+ ) - offset;
+
+ // Normalize number of neighbors
+ let smoothing_weights = vertex_weighted_num_neighbors
+ .par_iter()
+ .copied()
+ .map(|n| (n - offset).max(R::zero()))
+ .map(|n| (n / normalization).min(R::one()))
+ // Smooth-Step function
+ .map(|x| x.powi(5).times(6) - x.powi(4).times(15) + x.powi(3).times(10))
+ .collect::>();
+
+ if postprocessing.output_mesh_smoothing_weights {
+ // Raw distance-weighted number of neighbors value per vertex (can be used to determine normalization value)
+ mesh_with_data.point_attributes.push(MeshAttribute::new(
+ "wnn".to_string(),
+ AttributeData::ScalarReal(vertex_weighted_num_neighbors),
+ ));
+ // Final smoothing weights per vertex
+ mesh_with_data.point_attributes.push(MeshAttribute::new(
+ "sw".to_string(),
+ AttributeData::ScalarReal(smoothing_weights.clone()),
+ ));
+ }
+
+ smoothing_weights
+ };
+
+ Some(smoothing_weights)
+ } else {
+ None
+ };
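The mapping implemented above — distance-weighted neighbor count, normalization by `--mesh-smoothing-weights-normalization` (default `13.0`), then a quintic smooth-step — can be summarized by the following stand-alone sketch. It only illustrates the formula used in the patch; plain `f64` stands in for the generic `R: Real` type and the function name is made up:

```rust
/// Sketch: map a distance-weighted neighbor count `wnn` to a smoothing weight in [0, 1].
fn smoothing_weight(wnn: f64, normalization: f64) -> f64 {
    // Normalize the weighted neighbor count to [0, 1] (the offset is zero in the patch)
    let x = (wnn.max(0.0) / normalization).min(1.0);
    // Quintic smooth-step: 6x^5 - 15x^4 + 10x^3
    6.0 * x.powi(5) - 15.0 * x.powi(4) + 10.0 * x.powi(3)
}

fn main() {
    // Isolated particles (small weighted neighbor count) get weights near 0 and are barely
    // smoothed, while vertices in dense regions approach 1 and are smoothed fully.
    for wnn in [0.0, 3.0, 6.5, 13.0, 20.0] {
        println!("wnn = {wnn:5.1} -> weight = {:.3}", smoothing_weight(wnn, 13.0));
    }
}
```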
+
+ // Perform smoothing if requested
+ if let Some(mesh_smoothing_iters) = postprocessing.mesh_smoothing_iters {
+ profile!("mesh smoothing");
+ info!("Post-processing: Smoothing mesh...");
+
+ // TODO: Switch between parallel/single threaded
+
+ let smoothing_weights = smoothing_weights
+ .unwrap_or_else(|| vec![R::one(); mesh_with_data.vertices().len()]);
+
+ splashsurf_lib::postprocessing::par_laplacian_smoothing_inplace(
+ mesh_with_data.mesh.to_mut(),
+ vertex_connectivity
+ .as_ref()
+ .expect("vertex connectivity is required"),
+ mesh_smoothing_iters,
+ R::one(),
+ &smoothing_weights,
+ );
+ }
+
+ // Add normals to mesh if requested
if postprocessing.compute_normals {
+ profile!("compute normals");
+ info!("Post-processing: Computing surface normals...");
+
+ // Compute normals
let normals = if postprocessing.sph_normals {
info!("Using SPH interpolation to compute surface normals");
- let sph_normals = interpolator.interpolate_normals(mesh.vertices());
+ let sph_normals = interpolator
+ .as_ref()
+ .expect("interpolator is required")
+ .interpolate_normals(mesh_with_data.vertices());
bytemuck::allocation::cast_vec::>, Vector3>(sph_normals)
} else {
info!("Using area weighted triangle normals for surface normals");
profile!("mesh.par_vertex_normals");
- let tri_normals = mesh.par_vertex_normals();
+ let tri_normals = mesh_with_data.mesh.par_vertex_normals();
// Convert unit vectors to plain vectors
bytemuck::allocation::cast_vec::>, Vector3>(tri_normals)
};
- mesh_with_data.point_attributes.push(MeshAttribute::new(
- "normals".to_string(),
- AttributeData::Vector3Real(normals),
- ));
+ // Smooth normals
+ if let Some(smoothing_iters) = postprocessing.normals_smoothing_iters {
+ info!("Post-processing: Smoothing normals...");
+
+ let mut smoothed_normals = normals.clone();
+ splashsurf_lib::postprocessing::par_laplacian_smoothing_normals_inplace(
+ &mut smoothed_normals,
+ vertex_connectivity
+ .as_ref()
+ .expect("vertex connectivity is required"),
+ smoothing_iters,
+ );
+
+ mesh_with_data.point_attributes.push(MeshAttribute::new(
+ "normals".to_string(),
+ AttributeData::Vector3Real(smoothed_normals),
+ ));
+ if postprocessing.output_raw_normals {
+ mesh_with_data.point_attributes.push(MeshAttribute::new(
+ "raw_normals".to_string(),
+ AttributeData::Vector3Real(normals),
+ ));
+ }
+ } else {
+ mesh_with_data.point_attributes.push(MeshAttribute::new(
+ "normals".to_string(),
+ AttributeData::Vector3Real(normals),
+ ));
+ }
}
// Interpolate attributes if requested
if !attributes.is_empty() {
+ profile!("interpolate attributes");
+ info!("Post-processing: Interpolating attributes...");
+ let interpolator = interpolator.as_ref().expect("interpolator is required");
+
for attribute in attributes.into_iter() {
info!("Interpolating attribute \"{}\"...", attribute.name);
@@ -948,7 +1334,7 @@ pub(crate) fn reconstruction_pipeline_generic(
AttributeData::ScalarReal(values) => {
let interpolated_values = interpolator.interpolate_scalar_quantity(
values.as_slice(),
- mesh.vertices(),
+ mesh_with_data.vertices(),
true,
);
mesh_with_data.point_attributes.push(MeshAttribute::new(
@@ -959,7 +1345,7 @@ pub(crate) fn reconstruction_pipeline_generic(
AttributeData::Vector3Real(values) => {
let interpolated_values = interpolator.interpolate_vector_quantity(
values.as_slice(),
- mesh.vertices(),
+ mesh_with_data.vertices(),
true,
);
mesh_with_data.point_attributes.push(MeshAttribute::new(
@@ -971,10 +1357,43 @@ pub(crate) fn reconstruction_pipeline_generic(
}
}
}
+ }
+
+ // Remove triangles outside of the AABB and optionally clamp vertices to it
+ let mesh_with_data = if let Some(mesh_aabb) = &postprocessing.mesh_aabb {
+ profile!("clamp mesh to aabb");
+ info!("Post-processing: Clamping mesh to AABB...");
+ mesh_with_data.par_clamp_with_aabb(
+ &mesh_aabb
+ .try_convert()
+ .ok_or_else(|| anyhow!("Failed to convert mesh AABB"))?,
+ postprocessing.mesh_aabb_clamp_vertices,
+ postprocessing.keep_vertices,
+ )
+ } else {
mesh_with_data
+ };
+
+ // Convert triangles to quads
+ let (tri_mesh, tri_quad_mesh) = if postprocessing.generate_quads {
+ info!("Post-processing: Convert triangles to quads...");
+ let non_squareness_limit = R::from_f64(postprocessing.quad_max_edge_diag_ratio).unwrap();
+ let normal_angle_limit_rad =
+ R::from_f64(postprocessing.quad_max_normal_angle.to_radians()).unwrap();
+ let max_interior_angle =
+ R::from_f64(postprocessing.quad_max_interior_angle.to_radians()).unwrap();
+
+ let tri_quad_mesh = splashsurf_lib::postprocessing::convert_tris_to_quads(
+ &mesh_with_data.mesh,
+ non_squareness_limit,
+ normal_angle_limit_rad,
+ max_interior_angle,
+ );
+
+ (None, Some(mesh_with_data.with_mesh(tri_quad_mesh)))
} else {
- MeshWithData::new(mesh)
+ (Some(mesh_with_data), None)
};
// Store the surface mesh
@@ -985,7 +1404,17 @@ pub(crate) fn reconstruction_pipeline_generic(
paths.output_file.display()
);
- io::write_mesh(&mesh, paths.output_file.clone(), &io_params.output).with_context(|| {
+ match (&tri_mesh, &tri_quad_mesh) {
+ (Some(mesh), None) => {
+ io::write_mesh(mesh, paths.output_file.clone(), &io_params.output)
+ }
+ (None, Some(mesh)) => {
+ io::write_mesh(mesh, paths.output_file.clone(), &io_params.output)
+ }
+
+ _ => unreachable!(),
+ }
+ .with_context(|| {
anyhow!(
"Failed to write output mesh to file \"{}\"",
paths.output_file.display()
@@ -998,11 +1427,7 @@ pub(crate) fn reconstruction_pipeline_generic(
if let Some(output_octree_file) = &paths.output_octree_file {
info!("Writing octree to \"{}\"...", output_octree_file.display());
io::vtk_format::write_vtk(
- reconstruction
- .octree()
- .unwrap()
- .hexmesh(grid, true)
- .to_unstructured_grid(),
+ reconstruction.octree().unwrap().hexmesh(grid, true),
output_octree_file,
"mesh",
)
@@ -1061,20 +1486,32 @@ pub(crate) fn reconstruction_pipeline_generic(
output_density_map_grid_file.display()
);
- io::vtk_format::write_vtk(
- density_mesh.to_unstructured_grid(),
- output_density_map_grid_file,
- "density_map",
- )?;
+ io::vtk_format::write_vtk(density_mesh, output_density_map_grid_file, "density_map")?;
info!("Done.");
}
- if postprocessing.check_mesh {
- if let Err(err) = splashsurf_lib::marching_cubes::check_mesh_consistency(grid, &mesh.mesh) {
+ if postprocessing.check_mesh_closed
+ || postprocessing.check_mesh_manifold
+ || postprocessing.check_mesh_debug
+ {
+ if let Err(err) = match (&tri_mesh, &tri_quad_mesh) {
+ (Some(mesh), None) => splashsurf_lib::marching_cubes::check_mesh_consistency(
+ grid,
+ &mesh.mesh,
+ postprocessing.check_mesh_closed,
+ postprocessing.check_mesh_manifold,
+ postprocessing.check_mesh_debug,
+ ),
+ (None, Some(_mesh)) => {
+ info!("Checking for mesh consistency not implemented for quad mesh at the moment.");
+ return Ok(());
+ }
+ _ => unreachable!(),
+ } {
return Err(anyhow!("{}", err));
} else {
- info!("Checked mesh for problems (holes, etc.), no problems were found.");
+ info!("Checked mesh for problems (holes: {}, non-manifold edges/vertices: {}), no problems were found.", postprocessing.check_mesh_closed, postprocessing.check_mesh_manifold);
}
}
diff --git a/splashsurf_lib/Cargo.toml b/splashsurf_lib/Cargo.toml
index 5a6ba83..8e2b7ae 100644
--- a/splashsurf_lib/Cargo.toml
+++ b/splashsurf_lib/Cargo.toml
@@ -51,7 +51,7 @@ fxhash = "0.2"
bitflags = "2.4"
smallvec = { version = "1.11", features = ["union"] }
arrayvec = "0.7"
-bytemuck = "1.9"
+bytemuck = { version = "1.9", features = ["extern_crate_alloc"] }
bytemuck_derive = "1.3"
numeric_literals = "0.2"
rstar = "0.11"
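The new `extern_crate_alloc` feature exposes `bytemuck::allocation::cast_vec`, which the CLI code above uses to reinterpret the vector of unit normals as a vector of plain vectors without copying. As a minimal sketch of what this function does, using standard integer types and assuming a `bytemuck` dependency with this feature enabled:

```rust
fn main() {
    // `cast_vec` reuses the original allocation; it only requires that source and target
    // types have compatible size and alignment (here both are 4 bytes, 4-byte aligned).
    let signed: Vec<i32> = vec![-1, 0, 1];
    let unsigned: Vec<u32> = bytemuck::allocation::cast_vec(signed);
    assert_eq!(unsigned, vec![u32::MAX, 0, 1]);
}
```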
diff --git a/splashsurf_lib/benches/benches/bench_full.rs b/splashsurf_lib/benches/benches/bench_full.rs
index 1550c21..8b8cf2d 100644
--- a/splashsurf_lib/benches/benches/bench_full.rs
+++ b/splashsurf_lib/benches/benches/bench_full.rs
@@ -104,6 +104,7 @@ pub fn surface_reconstruction_dam_break(c: &mut Criterion) {
particle_aabb: None,
enable_multi_threading: true,
spatial_decomposition: None,
+ global_neighborhood_list: false,
};
let mut group = c.benchmark_group("full surface reconstruction");
@@ -202,6 +203,7 @@ pub fn surface_reconstruction_double_dam_break(c: &mut Criterion) {
particle_aabb: None,
enable_multi_threading: true,
spatial_decomposition: None,
+ global_neighborhood_list: false,
};
let mut group = c.benchmark_group("full surface reconstruction");
@@ -300,6 +302,7 @@ pub fn surface_reconstruction_double_dam_break_inplace(c: &mut Criterion) {
particle_aabb: None,
enable_multi_threading: true,
spatial_decomposition: None,
+ global_neighborhood_list: false,
};
let mut group = c.benchmark_group("full surface reconstruction");
diff --git a/splashsurf_lib/benches/benches/bench_mesh.rs b/splashsurf_lib/benches/benches/bench_mesh.rs
index 36951e2..6df2448 100644
--- a/splashsurf_lib/benches/benches/bench_mesh.rs
+++ b/splashsurf_lib/benches/benches/bench_mesh.rs
@@ -33,6 +33,7 @@ fn reconstruct_particles>(particle_file: P) -> SurfaceReconstruct
ParticleDensityComputationStrategy::SynchronizeSubdomains,
},
)),
+ global_neighborhood_list: false,
};
reconstruct_surface::(particle_positions.as_slice(), &parameters).unwrap()
diff --git a/splashsurf_lib/benches/benches/bench_subdomain_grid.rs b/splashsurf_lib/benches/benches/bench_subdomain_grid.rs
index 24dec38..c853d69 100644
--- a/splashsurf_lib/benches/benches/bench_subdomain_grid.rs
+++ b/splashsurf_lib/benches/benches/bench_subdomain_grid.rs
@@ -25,6 +25,7 @@ fn parameters_canyon() -> Parameters {
subdomain_num_cubes_per_dim: 32,
},
)),
+ global_neighborhood_list: false,
};
parameters
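The benchmarks above simply opt out of the new `global_neighborhood_list` field; in the CLI it is enabled automatically when `--mesh-smoothing-weights=on` is requested (see the argument conversion earlier in this diff), since the smoothing weights reuse the particle neighborhood lists gathered during reconstruction. A hedged sketch of opting in from library code, assuming `Parameters` is generic over the `Real` type and implements `Clone` (both hidden in this extract); `base` is a hypothetical, already populated parameter set and the struct-update syntax elides its remaining fields:

```rust
use splashsurf_lib::{Parameters, Real};

/// Sketch: request the global particle neighborhood list in addition to the reconstruction.
fn with_global_neighborhood_list<R: Real>(base: &Parameters<R>) -> Parameters<R> {
    Parameters {
        global_neighborhood_list: true,
        ..base.clone()
    }
}
```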
diff --git a/splashsurf_lib/src/aabb.rs b/splashsurf_lib/src/aabb.rs
index c149c2d..18db882 100644
--- a/splashsurf_lib/src/aabb.rs
+++ b/splashsurf_lib/src/aabb.rs
@@ -6,7 +6,7 @@ use std::fmt::Debug;
use nalgebra::SVector;
use rayon::prelude::*;
-use crate::{Real, ThreadSafe};
+use crate::{Real, RealConvert, ThreadSafe};
/// Type representing an axis aligned bounding box in arbitrary dimensions
#[derive(Clone, Eq, PartialEq)]
@@ -119,8 +119,8 @@ where
T: Real,
{
Some(AxisAlignedBoundingBox::new(
- T::try_convert_vec_from(&self.min)?,
- T::try_convert_vec_from(&self.max)?,
+ self.min.try_convert()?,
+ self.max.try_convert()?,
))
}
diff --git a/splashsurf_lib/src/dense_subdomains.rs b/splashsurf_lib/src/dense_subdomains.rs
index e04b0b5..b82c40c 100644
--- a/splashsurf_lib/src/dense_subdomains.rs
+++ b/splashsurf_lib/src/dense_subdomains.rs
@@ -19,7 +19,7 @@ use crate::neighborhood_search::{
neighborhood_search_spatial_hashing_flat_filtered,
neighborhood_search_spatial_hashing_parallel, FlatNeighborhoodList,
};
-use crate::uniform_grid::{EdgeIndex, UniformCartesianCubeGrid3d};
+use crate::uniform_grid::{EdgeIndex, GridConstructionError, UniformCartesianCubeGrid3d};
use crate::{
new_map, new_parallel_map, profile, Aabb3d, MapType, Parameters, SpatialDecomposition,
SurfaceReconstruction,
@@ -72,6 +72,25 @@ pub(crate) struct ParametersSubdomainGrid {
subdomain_grid: UniformCartesianCubeGrid3d,
/// Chunk size for chunked parallel processing
chunk_size: usize,
+ /// Whether to return the global particle neighborhood list instead of only using per-domain lists internally
+ global_neighborhood_list: bool,
+}
+
+impl ParametersSubdomainGrid {
+ pub(crate) fn global_marching_cubes_grid(
+ &self,
+ ) -> Result, GridConstructionError> {
+ let n_cells = self.global_marching_cubes_grid.cells_per_dim();
+ UniformCartesianCubeGrid3d::new(
+ self.global_marching_cubes_grid.aabb().min(),
+ &[
+ I::from(n_cells[0]).ok_or(GridConstructionError::IndexTypeTooSmallCellsPerDim)?,
+ I::from(n_cells[1]).ok_or(GridConstructionError::IndexTypeTooSmallCellsPerDim)?,
+ I::from(n_cells[2]).ok_or(GridConstructionError::IndexTypeTooSmallCellsPerDim)?,
+ ],
+ self.global_marching_cubes_grid.cell_size(),
+ )
+ }
}
/// Result of the subdomain decomposition procedure
@@ -238,6 +257,7 @@ pub(crate) fn initialize_parameters<'a, I: Index, R: Real>(
global_marching_cubes_grid: global_mc_grid,
subdomain_grid,
chunk_size,
+ global_neighborhood_list: parameters.global_neighborhood_list,
})
}
@@ -493,15 +513,16 @@ pub(crate) fn decomposition<
})
}
-pub(crate) fn compute_global_density_vector(
+pub(crate) fn compute_global_densities_and_neighbors(
parameters: &ParametersSubdomainGrid,
global_particles: &[Vector3],
subdomains: &Subdomains,
-) -> Vec {
+) -> (Vec, Vec>) {
profile!(parent, "compute_global_density_vector");
info!("Starting computation of global density vector.");
let global_particle_densities = Mutex::new(vec![R::zero(); global_particles.len()]);
+ let global_neighbors = Mutex::new(vec![Vec::new(); global_particles.len()]);
#[derive(Default)]
struct SubdomainWorkspace {
@@ -610,9 +631,35 @@ pub(crate) fn compute_global_density_vector(
global_particle_densities[particle_idx] = density;
});
}
+
+ // Write particle neighbor lists into global storage
+ if parameters.global_neighborhood_list {
+ profile!("update global neighbor list");
+ // Lock global vector while this subdomain writes into it
+ let mut global_neighbors = global_neighbors.lock();
+ is_inside
+ .iter()
+ .copied()
+ .zip(
+ subdomain_particle_indices
+ .iter()
+ .copied()
+ .zip(neighborhood_lists.iter()),
+ )
+ // Update neighbor lists only for particles inside of the subdomain (ghost particles have incomplete neighbor lists)
+ .filter(|(is_inside, _)| *is_inside)
+ .for_each(|(_, (particle_idx, neighbors))| {
+ global_neighbors[particle_idx] = neighbors
+ .iter()
+ .copied()
+ .map(|local| subdomain_particle_indices[local])
+ .collect();
+ });
+ }
});
let global_particle_densities = global_particle_densities.into_inner();
+ let global_neighbors = global_neighbors.into_inner();
/*
{
@@ -625,7 +672,7 @@ pub(crate) fn compute_global_density_vector(
}
*/
- global_particle_densities
+ (global_particle_densities, global_neighbors)
}
pub(crate) struct SurfacePatch {
diff --git a/splashsurf_lib/src/density_map.rs b/splashsurf_lib/src/density_map.rs
index b136b07..9aebac8 100644
--- a/splashsurf_lib/src/density_map.rs
+++ b/splashsurf_lib/src/density_map.rs
@@ -13,7 +13,7 @@
//! "flat point indices". These are computed from the background grid point coordinates `(i,j,k)`
//! analogous to multidimensional array index flattening. That means for a grid with dimensions
//! `[n_x, n_y, n_z]`, the flat point index is given by the expression `i*n_x + j*n_y + k*n_z`.
-//! For these point index operations, the [`UniformGrid`](crate::UniformGrid) is used.
+//! For these point index operations, the [`UniformGrid`] is used.
//!
//! Note that all density mapping functions always use the global background grid for flat point
//! indices, even if the density map is only generated for a smaller subdomain.
diff --git a/splashsurf_lib/src/halfedge_mesh.rs b/splashsurf_lib/src/halfedge_mesh.rs
new file mode 100644
index 0000000..74bcde4
--- /dev/null
+++ b/splashsurf_lib/src/halfedge_mesh.rs
@@ -0,0 +1,586 @@
+//! Basic implementation of a half-edge based triangle mesh
+//!
+//! See [`HalfEdgeTriMesh`] for more information.
+
+use crate::mesh::{Mesh3d, TriMesh3d, TriMesh3dExt};
+use crate::{profile, Real, SetType};
+use nalgebra::Vector3;
+use rayon::prelude::*;
+use thiserror::Error as ThisError;
+
+impl TriMesh3dExt for HalfEdgeTriMesh {
+ fn tri_vertices(&self) -> &[Vector3] {
+ &self.vertices
+ }
+}
+
+/// A half-edge in a [`HalfEdgeTriMesh`]
+#[derive(Copy, Clone, Debug, Default)]
+pub struct HalfEdge {
+ /// Unique global index of this half-edge in the mesh
+ pub idx: usize,
+ /// Vertex this half-edge points to
+ pub to: usize,
+ /// Enclosed face of this half-edge loop (or `None` if boundary)
+ pub face: Option,
+ /// The next half-edge along the half-edge loop (or `None` if boundary)
+ pub next: Option,
+ /// Index of the half-edge going into the opposite direction
+ pub opposite: usize,
+}
+
+impl HalfEdge {
+ /// Returns whether the given half-edge is a boundary edge
+ pub fn is_boundary(&self) -> bool {
+ self.face.is_none()
+ }
+}
+
+/// A half-edge based triangle mesh data structure
+///
+/// The main purpose of this data structure is to provide methods to perform consistent collapses of
+/// half-edges for decimation procedures.
+///
+/// As [`splashsurf_lib`](crate) is focused on closed meshes, handling of holes is not specifically tested.
+/// In particular, it is not directly possible to walk along a mesh boundary using the half-edges of
+/// this implementation.
+///
+/// A [`HalfEdgeTriMesh`] can be easily constructed from a [`TriMesh3d`] using a [`From`](HalfEdgeTriMesh::from::) implementation.
+///
+/// Note that affected vertex/face/half-edge indices become "invalid" after half-edge collapse is performed.
+/// The corresponding data still exist (i.e. they can be retrieved from the mesh) but following these
+/// indices amounts to following outdated connectivity.
+/// Therefore, it should be checked if an index was marked as removed after a collapse using the
+/// [`is_valid_vertex`](HalfEdgeTriMesh::is_valid_vertex)/[`is_valid_triangle`](HalfEdgeTriMesh::is_valid_triangle)/[`is_valid_half_edge`](HalfEdgeTriMesh::is_valid_half_edge)
+/// methods.
+#[derive(Clone, Debug, Default)]
+pub struct HalfEdgeTriMesh {
+ /// All vertices in the mesh
+ pub vertices: Vec>,
+ /// All triangles in the mesh
+ pub triangles: Vec<[usize; 3]>,
+ /// All half-edges in the mesh
+ pub half_edges: Vec,
+ /// Connectivity map of all vertices to their connected neighbors
+ vertex_half_edge_map: Vec>,
+ /// Set of all vertices marked for removal
+ removed_vertices: SetType,
+ /// Set of all triangles marked for removal
+ removed_triangles: SetType,
+ /// Set of all half edges marked for removal
+ removed_half_edges: SetType,
+}
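Since the extraction above hides the generic parameters of the new half-edge mesh, the following stand-alone sketch assumes `HalfEdgeTriMesh<R: Real>` instantiated with `f64`, a public `halfedge_mesh` module, and the `nalgebra` re-export of `splashsurf_lib`; it only calls methods that appear in this diff and is not part of the patch itself:

```rust
use splashsurf_lib::halfedge_mesh::HalfEdgeTriMesh;
use splashsurf_lib::mesh::TriMesh3d;
use splashsurf_lib::nalgebra::Vector3;

fn main() {
    // Two triangles sharing the edge 0-2 (a flat quad patch)
    let tri_mesh = TriMesh3d::<f64> {
        vertices: vec![
            Vector3::new(0.0, 0.0, 0.0),
            Vector3::new(1.0, 0.0, 0.0),
            Vector3::new(1.0, 1.0, 0.0),
            Vector3::new(0.0, 1.0, 0.0),
        ],
        triangles: vec![[0, 1, 2], [0, 2, 3]],
    };

    // Build the half-edge representation via the `From` impl mentioned in the docs above
    let he_mesh = HalfEdgeTriMesh::from(tri_mesh);

    // One-ring neighbors of vertex 0 (should be vertices 1, 2 and 3)
    let one_ring: Vec<usize> = he_mesh.vertex_one_ring(0).collect();
    println!("one-ring of v0: {one_ring:?}");

    // Collapsing the interior edge 0-2 should be rejected: both end vertices lie on the boundary
    if let Some(he) = he_mesh.half_edge(0, 2) {
        println!("collapse 0 -> 2 legal? {:?}", he_mesh.is_collapse_ok(he));
    }

    // Convert back into a plain triangle mesh plus vertex-vertex connectivity
    let (mesh, connectivity) = he_mesh.into_parts(true);
    println!("{} vertices, v0 has {} neighbors", mesh.vertices.len(), connectivity[0].len());
}
```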
+
+/// Error indicating why a specific half-edge collapse is illegal
+#[derive(Copy, Clone, Debug, Eq, PartialEq, ThisError)]
+pub enum IllegalHalfEdgeCollapse {
+ /// Trying to collapse an edge with boundary vertices at both ends
+ #[error("trying to collapse an edge with boundary vertices at both ends")]
+ BoundaryCollapse,
+ /// Trying to collapse an edge with vertices that share incident vertices other than the vertices directly opposite to the edge
+ #[error("trying to collapse an edge with vertices that share incident vertices other than the vertices directly opposite to the edge")]
+ IntersectionOfOneRing,
+ /// Trying to collapse an edge without faces
+ #[error("trying to collapse an edge without faces")]
+ FacelessEdge,
+}
+
+impl HalfEdgeTriMesh {
+ /// Converts this mesh into a simple triangle mesh and a vertex-vertex connectivity map
+ pub fn into_parts(mut self, keep_vertices: bool) -> (TriMesh3d, Vec>) {
+ Self::compute_vertex_vertex_connectivity(&mut self.vertex_half_edge_map, &self.half_edges);
+ self.garbage_collection_for_trimesh(keep_vertices);
+ let mesh = TriMesh3d {
+ vertices: self.vertices,
+ triangles: self.triangles,
+ };
+ (mesh, self.vertex_half_edge_map)
+ }
+
+ /// Returns the valence of a vertex (size of its one-ring)
+ pub fn vertex_one_ring_len(&self, vertex: usize) -> usize {
+ self.vertex_half_edge_map[vertex].len()
+ }
+
+ /// Returns the index of the `i`-th vertex from the one-ring of the given vertex
+ pub fn vertex_one_ring_ith(&self, vertex: usize, i: usize) -> usize {
+ self.half_edges[self.vertex_half_edge_map[vertex][i]].to
+ }
+
+ /// Iterator over the one-ring vertex neighbors of the given vertex
+ pub fn vertex_one_ring<'a>(&'a self, vertex: usize) -> impl Iterator + 'a {
+ self.vertex_half_edge_map[vertex]
+ .iter()
+ .copied()
+ .map(|he_i| self.half_edges[he_i].to)
+ }
+
+ /// Iterator over the outgoing half-edges of the given vertex
+ pub fn outgoing_half_edges<'a>(&'a self, vertex: usize) -> impl Iterator + 'a {
+ self.vertex_half_edge_map[vertex]
+ .iter()
+ .copied()
+ .map(|he_i| self.half_edges[he_i].clone())
+ }
+
+ /// Iterator over all incident faces of the given vertex
+ pub fn incident_faces<'a>(&'a self, vertex: usize) -> impl Iterator + 'a {
+ self.outgoing_half_edges(vertex).filter_map(|he| he.face)
+ }
+
+ /// Returns the half-edge between the "from" and "to" vertex if it exists in the mesh
+ pub fn half_edge(&self, from: usize, to: usize) -> Option {
+ let from_edges = self
+ .vertex_half_edge_map
+ .get(from)
+ .expect("vertex must be part of the mesh");
+ for &he_idx in from_edges {
+ let he = &self.half_edges[he_idx];
+ if he.to == to {
+ return Some(he.clone());
+ }
+ }
+
+ None
+ }
+
+ /// Returns whether the given half-edge or its opposite half-edge is a boundary edge
+ pub fn is_boundary_edge(&self, half_edge: HalfEdge) -> bool {
+ return half_edge.is_boundary() || self.opposite(half_edge).is_boundary();
+ }
+
+ /// Returns whether the given vertex is a boundary vertex
+ pub fn is_boundary_vertex(&self, vert_idx: usize) -> bool {
+ let hes = self
+ .vertex_half_edge_map
+ .get(vert_idx)
+ .expect("vertex must be part of the mesh");
+ hes.iter()
+ .copied()
+ .any(|he_idx| self.half_edges[he_idx].is_boundary())
+ }
+
+ /// Returns whether the given triangle is valid (i.e. not marked as removed)
+ pub fn is_valid_triangle(&self, triangle_idx: usize) -> bool {
+ !self.removed_triangles.contains(&triangle_idx)
+ }
+
+ /// Returns whether the given vertex is valid (i.e. not marked as removed)
+ pub fn is_valid_vertex(&self, vertex_idx: usize) -> bool {
+ !self.removed_vertices.contains(&vertex_idx)
+ }
+
+ /// Returns whether the given half-edge is valid (i.e. not marked as removed)
+ pub fn is_valid_half_edge(&self, half_edge_idx: usize) -> bool {
+ !self.removed_half_edges.contains(&half_edge_idx)
+ }
+
+ /// Returns the next half-edge in the loop of the given half-edge, panics if there is none
+ pub fn next(&self, half_edge: HalfEdge) -> HalfEdge {
+ self.half_edges[half_edge
+ .next
+ .expect("half edge must have a next reference")]
+ }
+
+ /// Returns the next half-edge in the loop of the given half-edge if it exists
+ pub fn try_next(&self, half_edge: HalfEdge) -> Option {
+ half_edge.next.map(|n| self.half_edges[n])
+ }
+
+ /// Returns the opposite half-edge of the given half-edge
+ pub fn opposite(&self, half_edge: HalfEdge) -> HalfEdge {
+ self.half_edges[half_edge.opposite]
+ }
+
+ /// Returns a mutable reference to the opposite half-edge of the given half-edge
+ pub fn opposite_mut(&mut self, half_edge: usize) -> &mut HalfEdge {
+ let opp_idx = self.half_edges[half_edge].opposite;
+ &mut self.half_edges[opp_idx]
+ }
+
+ /// Checks if the collapse of the given half-edge is topologically legal
+ pub fn is_collapse_ok(&self, half_edge: HalfEdge) -> Result<(), IllegalHalfEdgeCollapse> {
+ // Based on PMP library:
+ // https://github.com/pmp-library/pmp-library/blob/86099e4e274c310d23e8c46c4829f881242814d3/src/pmp/SurfaceMesh.cpp#L755
+
+ let v0v1 = half_edge;
+ let v1v0 = self.opposite(v0v1);
+
+ let v0 = v1v0.to; // From vertex
+ let v1 = v0v1.to; // To vertex
+
+ // Checks if edges to opposite vertex of half-edge are boundary edges and returns opposite vertex
+ let check_opposite_vertex =
+ |he: HalfEdge| -> Result