So I've been teaching myself a hard programming language and thought to myself: "How do I make it even harder?" That's when I decided to write this raytracer. And then I did.
It's not perfect by any means, but it's mine, and I like it.
If you want to try it out, here's what you should do.
- Install Rust.
- Clone this repo.
- Run `cargo run --release`.
- There is no step 3.
Wait a few seconds and you should get a PNG file.
Here's what it can do so far:
- You can render any scene, as long as it's only spheres.
- Lambertian, metallic and dielectric materials.
- Automatically uses all CPU cores for rendering.
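Since spheres are the only supported shape, the heart of the whole renderer is the ray-sphere intersection test. Here's a minimal sketch of how such a test typically works — the names (`Vec3`, `Ray`, `Sphere`, `hit`) are illustrative, not this project's actual API:

```rust
#[derive(Clone, Copy)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
}

struct Ray { origin: Vec3, dir: Vec3 }
struct Sphere { center: Vec3, radius: f64 }

impl Sphere {
    /// Returns the nearest positive ray parameter t where the ray hits the
    /// sphere, or None on a miss. Comes from solving
    /// |origin + t*dir - center|^2 = radius^2, which is a quadratic in t.
    fn hit(&self, ray: &Ray) -> Option<f64> {
        let oc = ray.origin.sub(self.center);
        let a = ray.dir.dot(ray.dir);
        let half_b = oc.dot(ray.dir);
        let c = oc.dot(oc) - self.radius * self.radius;
        let disc = half_b * half_b - a * c;
        if disc < 0.0 {
            return None; // ray misses the sphere entirely
        }
        let t = (-half_b - disc.sqrt()) / a; // nearer of the two roots
        if t > 1e-8 { Some(t) } else { None }
    }
}

fn main() {
    let sphere = Sphere { center: Vec3 { x: 0.0, y: 0.0, z: -5.0 }, radius: 1.0 };
    let ray = Ray { origin: Vec3 { x: 0.0, y: 0.0, z: 0.0 }, dir: Vec3 { x: 0.0, y: 0.0, z: -1.0 } };
    // The ray points straight at the sphere, so it hits the near surface at t = 4.
    println!("{:?}", sphere.hit(&ray)); // prints Some(4.0)
}
```

Everything else — materials, sampling, parallelism — is layered on top of this one function.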
Here's the TODO list:
- General code cleanup. Eliminate copy-pasta and C-isms (like returning values by writing to an out parameter).
- Make `pbrt::prelude` more useful. Buff `Vector` and `Point` with conversions, casting, and general ergonomics.
- Command-line argument support: render size, samples per pixel, output file name. Editing the source just to move the camera is silly.
- More features: emissive materials and lights, more `Shape` types (quadrics, planes, boxes).
- Triangle meshes.
- Transforms and animations.
- While we're at it, why not throw in full glTF scene support?
- Non-projective cameras.
- SIMD/GPGPU support, benchmarks.
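The command-line-arguments item above doesn't even need an external crate; a plain `std::env::args` loop covers it. A minimal sketch — the flag names (`--width`, `--samples`, `--output`) and defaults are made up for illustration, not what this project will necessarily use:

```rust
// Hypothetical CLI parsing sketch using only the standard library.
struct Config {
    width: u32,      // render width in pixels
    samples: u32,    // samples per pixel
    output: String,  // output file name
}

fn parse_args<I: Iterator<Item = String>>(mut args: I) -> Config {
    // Start from defaults, then override from flags.
    let mut cfg = Config { width: 640, samples: 16, output: "render.png".to_string() };
    while let Some(arg) = args.next() {
        match arg.as_str() {
            "--width" => cfg.width = args.next().and_then(|v| v.parse().ok()).unwrap_or(cfg.width),
            "--samples" => cfg.samples = args.next().and_then(|v| v.parse().ok()).unwrap_or(cfg.samples),
            "--output" => if let Some(v) = args.next() { cfg.output = v; },
            _ => {} // ignore unknown arguments in this sketch
        }
    }
    cfg
}

fn main() {
    // Skip argv[0] (the binary name) before parsing.
    let cfg = parse_args(std::env::args().skip(1));
    println!("rendering {} px wide, {} spp, to {}", cfg.width, cfg.samples, cfg.output);
}
```

Camera position could ride along the same way once there's a scene format to put it in.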
Half of this raytracer was inspired by *Physically Based Rendering: From Theory to Implementation*. It's awesome and very well written, but you basically have to copy all of it to get any pixels on the screen.
The other half was hacked together in a few hours, closely following the steps of *Ray Tracing in One Weekend*. It's the opposite of the PBR book in that it's hacky, but you get results right away.
My raytracer borrows from both of these books, and the Rust compiler allows it. (Get it? Because borrow checker!)