# WebGPU Isosurface Visualization

This repo holds the code for the TVCG paper "Interactive Isosurface Visualization in Memory Constrained Environments Using Deep Learning and Speculative Raycasting" by Landon Dyken, Will Usher, and Sidharth Kumar. This work extends the algorithm of "Speculative Progressive Raycasting for Memory Constrained Isosurface Visualization of Massive Volumes" (LDAV 2023 Best Paper) in two ways: it uses a pretrained image reconstruction network to infer perceptual approximations from intermediate output, and it optimizes the speculative raycasting with first-pass speculation and larger computational buffers to increase speculation counts in early passes.

## Demo

There is an interactive demo for several datasets online.

Note that the datasets must be loaded first, so it will take some time for the rendering to appear when visiting the pages for the first time.

All datasets are available on the Open SciVis Datasets page.

## Recreating a Representative Figure

The code was tested on an XPS 17 running a fresh install of Ubuntu 22.04.3 in Windows Subsystem for Linux (WSL) kernel 5.10.16, with Python 3.10.12, npm 8.5.1, and Node 12.22.9. See here for a video demonstration of this installation and test.

A device with Node.js, npm, and Python 3 (with pip) installed is required. To install these in WSL, it is recommended to run

```bash
sudo apt update
sudo apt install nodejs npm python3-pip
```

Remember to reload the terminal window or run

```bash
. $HOME/.profile
```

after installing to make these commands available.
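
To confirm the commands are available, you can check the installed versions (the versions in the comments are those from the tested configuration above):

```bash
node --version     # v12.22.9 in the tested setup
npm --version      # 8.5.1
python3 --version  # 3.10.12
pip3 --version
```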

## Automatic Install

After cloning the repo, first make all the scripts executable by running

```bash
chmod +x run_server.sh shaders/glslc.exe shaders/tint.exe
```

Then install the needed dependencies and start serving the application with

```bash
./run_server.sh
```

From here, the application will be served at `localhost:8000`.

## Manual Install

After cloning the repo, run

```bash
npm install
```

Then navigate to the `shaders/` folder and run

```bash
python3 embed_shaders.py ./glslc.exe ./tint.exe
```

Then, back in the top-level folder, run

```bash
npm run build
```

Then move the files in the `ml-models/` folder into the built `dist/` folder.
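
For example (a minimal sketch, assuming the build output is in `dist/` at the repo root and the model files sit directly in `ml-models/`):

```bash
# Copy the pretrained model files next to the built application
cp -r ml-models/* dist/
```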

Then download the compressed datasets (Chameleon, Magnetic Reconnection, and Miranda) using the following commands:

```bash
pip install gdown
gdown 1iAN-LucPq6nUAh74I1BIXa24KaXo650k
gdown 1t98uqIjGB99k3Xso8R1EQL4fgefHlKBR
gdown 1YTBFATCaK1ApFpcefEuAj5iQTPm998pU
```

Then create a folder `dist/bcmc-data/` and move the downloaded files there.
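
For example (the glob patterns below are guesses at the saved filenames; adjust them to whatever gdown actually downloaded into the current directory):

```bash
mkdir -p dist/bcmc-data
# Adjust the patterns to match the downloaded archive names
mv chameleon* magnetic* miranda* dist/bcmc-data/
```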

You can then serve the application from the `dist/` folder using

```bash
python3 -m http.server
```

which will serve the application at `localhost:8000` by default.
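
Alternatively, with Python 3.7 or newer, the port and served directory can be given explicitly, so the server can be started from the repo root:

```bash
python3 -m http.server 8000 --directory dist
```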

## Running Benchmarks

Once the application is hosted, visit `localhost:8000/#autobenchmark=0` to begin benchmarks. This will automatically run 27 benchmarks covering the Plasma, Chameleon, and Miranda datasets at 360p, 720p, and 1080p, and will download .json benchmark files to your default download location. Before running the benchmarks, make sure to allow automatic downloads in your browser: follow the instructions here and add the URL http://localhost:8000 to your automatic downloads list. A video showing the benchmarking process is here.

## Converting Benchmarks to Data Figures

Once the autobenchmark is complete, move all downloaded .json files to the `benchmarks/` folder in this repo.
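
Assuming the browser's default download location is `~/Downloads` (adjust the path to your browser's actual setting), this can be done with:

```bash
# Collect the benchmark results into the repo's benchmarks/ folder
mv ~/Downloads/*.json benchmarks/
```

Then run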

```bash
python3 plot_figure6.py
```

and files labeled "ResultsAt85%Complete.png" and "ResultsAt100%Complete.png" will be created in the folder, matching Figure 6 in the TVCG paper.

## Model Training

A separate repo containing all of the model training code is provided here. That repo includes checkpoints for our pretrained model and example data for training new models. Unlike this repo, the model training code requires an NVIDIA GPU with CUDA support.