From f18763d97d1658c394f6046ffaa308ddd1520c26 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Thomas=20M=C3=BCller?=
Date: Mon, 14 Feb 2022 15:16:13 +0100
Subject: [PATCH] README fixup

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ebd7b914..c3b7c4cd 100644
--- a/README.md
+++ b/README.md
@@ -123,7 +123,7 @@ __tiny-cuda-nn__ comes with a [PyTorch](https://github.com/pytorch/pytorch) exte
 These bindings can be significantly faster than full Python implementations; in particular for the [multiresolution hash encoding](https://raw.githubusercontent.com/NVlabs/tiny-cuda-nn/master/data/readme/multiresolution-hash-encoding-diagram.png).
 
 > The overheads of Python/PyTorch can nonetheless be extensive.
-> For example, the bundled `mlp_learning_an_image` example is __~3x slower__ through PyTorch versus native CUDA.
+> For example, the bundled `mlp_learning_an_image` example is __~3x slower__ through PyTorch than native CUDA.
 
 Begin by setting up a Python 3.X environment with a recent, CUDA-enabled version of PyTorch. Then, invoke the following commands:
 
@@ -149,7 +149,7 @@ model = tcnn.NetworkWithInputEncoding(
 
 # Option 2: separate modules. Slower but more flexible.
 encoding = tcnn.Encoding(n_input_dims, config["encoding"])
-network = tcnn.Network(n_input_dims, n_output_dims, config["network"])
+network = tcnn.Network(encoding.n_output_dims, n_output_dims, config["network"])
 model = torch.nn.Sequential(encoding, network)
 ```
 
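For context on the second hunk: `tcnn.Encoding` maps `n_input_dims` raw coordinates to `encoding.n_output_dims` encoded features, and the downstream `tcnn.Network` must be sized to that feature count rather than to the raw input dimensionality. Below is a minimal sketch of the corrected "Option 2" usage; the config values, tensor shapes, and device handling are illustrative assumptions, not part of the patch, and running it requires the `tinycudann` bindings and a CUDA GPU.

```python
# Minimal sketch of the corrected "Option 2" usage (illustrative config values).
import torch
import tinycudann as tcnn

n_input_dims, n_output_dims = 3, 1
config = {
    "encoding": {
        "otype": "HashGrid", "n_levels": 16, "n_features_per_level": 2,
        "log2_hashmap_size": 19, "base_resolution": 16, "per_level_scale": 2.0,
    },
    "network": {
        "otype": "FullyFusedMLP", "activation": "ReLU",
        "output_activation": "None", "n_neurons": 64, "n_hidden_layers": 2,
    },
}

encoding = tcnn.Encoding(n_input_dims, config["encoding"])
# The MLP consumes the *encoded* features, not the raw inputs, hence
# `encoding.n_output_dims` (here 16 levels x 2 features = 32) instead of `n_input_dims`.
network = tcnn.Network(encoding.n_output_dims, n_output_dims, config["network"])
model = torch.nn.Sequential(encoding, network)

x = torch.rand(128, n_input_dims, device="cuda")
y = model(x)  # shape: (128, n_output_dims)
```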