
Todo list for snogray -*- mode:org; coding:utf-8 -*-

Make image I/O classes dynamically loadable

Dynamic loading in general might not be worth the trouble, but image I/O classes tend to be a little annoying, dragging in lots of libraries (not used by anything else), bloating the resulting executable if they’re static, etc.

So having a way to build them as dynamic objects, and then have a simple system for mapping file extensions / patterns to library names might be nice… still a bit of work…

Actually the best route is probably to just arrange for Lua to be involved when creating image-output objects, and then use Lua’s dynamic-loading machinery, which we get for free. This should be fairly natural, as we’re already moving to Lua for most of the loading and driver phases.

Add color handling framework

Snogray’s current handling of color is trivial – R,G,B are just uninterpreted values, which are read/written without modification from/to image textures, result images, user color specs, etc.

The current representation is mostly fine for rendering purposes, treating the color components as samples in an SPD (spectral power distribution), since most rendering operations act independently on spectral samples. Color I/O, however, should really pay attention to the color representation of the input/output format, as different image formats are not all the same in this respect (they’re similar enough that things “mostly look OK” without interpretation, but, for instance, textures loaded from TGA files in snogray look subtly wrong when the result is written to a JPEG file).

It’d be nice to have a general framework that allows the user to specify colors as, for instance, arbitrary spectral distributions, blackbody temperatures, etc., and then approximates these with an appropriate SPD.

This isn’t all that hard, but there are lots of details to handle.

Most existing color-management libraries are not actually very useful as they typically seem to represent the problem as only one of conversion between fixed endpoints, whereas we’d like something that allows arbitrary endpoint specifications, and calculates the necessary conversion matrices.

  • Idea: The basic representation of a color space can be the number of components in that space, and a pair of matrices for converting a vector of components to/from an SPD of some fixed resolution (the intermediate SPD resolution can be fairly high, because operations using them are fairly rare). A color-space conversion matrix between color-spaces A and B can then be calculated by multiplying A.TO_SPD and B.FROM_SPD.
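A minimal sketch of this idea (hypothetical names throughout; nothing here is an existing snogray type), assuming column vectors, so the combined A-to-B conversion matrix is B.FROM_SPD times A.TO_SPD:

    #include <cstddef>
    #include <vector>

    typedef std::vector<std::vector<float> > Matrix;   // [row][column]

    // Hypothetical color-space description: N components, plus matrices
    // converting a component vector to/from a fixed-resolution SPD.
    struct ColorSpace
    {
      size_t num_components;
      Matrix to_spd;    // SPD_RESOLUTION rows x num_components columns
      Matrix from_spd;  // num_components rows x SPD_RESOLUTION columns
    };

    // Ordinary matrix product (assumes compatible dimensions).
    static Matrix
    mat_mul (const Matrix &m1, const Matrix &m2)
    {
      size_t rows = m1.size (), inner = m2.size (), cols = m2[0].size ();
      Matrix result (rows, std::vector<float> (cols, 0));
      for (size_t i = 0; i < rows; i++)
        for (size_t j = 0; j < cols; j++)
          for (size_t k = 0; k < inner; k++)
            result[i][j] += m1[i][k] * m2[k][j];
      return result;
    }

    // Matrix taking a component vector in color-space A directly to a
    // component vector in color-space B, by going through the
    // intermediate SPD representation.
    Matrix
    conversion_matrix (const ColorSpace &a, const ColorSpace &b)
    {
      return mat_mul (b.from_spd, a.to_spd);
    }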

Multi-thread photon-shooting

The photon-shooting phase of photon-mapping is often not so long compared to rendering, but can be annoying for low-quality test renders. Multi-threading this phase probably isn’t very hard, as each photon is independent, and most of our infrastructure is already thread safe.

Actually the main annoyance is just making progress-reporting thread-safe… and the temptation is to just rewrite the whole progress-reporting infrastructure…

Make photon-mapping more usable

Photon-mapping currently isn’t very easy to use – there are often severe artifacts, and how to tune the parameters to get rid of them often isn’t obvious (or even possible).

photon-mapping as a general technique is clearly pretty decent as there are lots of renderers using it for GI, so we need to figure out what’s wrong, and how to improve it. Is it a real bug, or just a lack of parameter tuning?

PBRTv2 claims to have an improved photon-mapping implementation over v1, so look at what changed and see if those changes are useful.

Also, “importon” mapping would be a useful addition, and probably not all that hard. Importon mapping is an initial forward-tracing pass that shoots “importons” from the camera to the lights and records where, and in what direction, the lights were hit; this info is then used to shoot more photons in important directions.

Fix lighting inconsistency of photon-mapping with environment maps

When used with environment-map lighting, photon-mapping yields obviously different lighting than path-tracing (with simple object lights, no difference is evident). Photon-mapping always yields somewhat brighter results in areas mainly lit by indirect lighting.

Make output-sample iteration order more configurable

Currently things are hardwired to just always render pixels in scanline order.

For using techniques like metropolis light transport however, we may want to use other orders (MLT renders in “priority order”, spending more time in areas of the image that are found to be more important).

Right now the scan order comes from the “RenderPattern” class, but changing the order is not as simple as just creating alternate versions of that class and allowing them to be selected – other parts of the system implicitly assume that the output will be done in scanline order. For instance, the snapshot/recovery code.

The snapshot/recovery code right now restores an interrupted render by reading in all scanlines from a partial image, and continuing from that point; the number of scanlines in the partial image is used to determine the point from which rendering should continue. However for non-scanline orders, snapshots will basically need to be dumped as entire images, with some sort of metadata indicating how to continue rendering. This can probably be done simply by using a separate “xxx.jpeg.snogray-snapshot” text file holding the metadata (this is also sort of nice because the presence of such a file clearly indicates an unfinished render).

Add some sort of search-path for finding scenes/images/textures/etc

Currently, there is code to keep track of the location of the current file being loaded, and relative scene/mesh/image/texture filenames are interpreted relative to the load-directory which is current when they are loaded.

Add some hack to reduce “speckling”

One of the most annoying rendering artifacts is “speckles” – rare but very noticeable bright pixels in the middle of otherwise average areas. Note that this is distinct from general rendering noise, which is generally well behaved, relatively unobjectionable (it looks a bit like film grain), and can be reasonably controlled by increased sampling. These bright speckles, on the other hand, are also reduced by increased sampling, but because they’re so obvious, they remain annoying even at quite high sample rates.

I think such speckles are not a bug, but just an artifact of unbiased monte-carlo methods – they show up mainly when using global illumination, and seem to represent very rare but valid lighting paths (remember, unlikely paths have their value greatly boosted by their very unlikeliness, due to dividing by PDF values). However they’re still very ugly.

To remove these, it seems reasonable to simply try to squash any single samples that are dramatically greater than their immediate neighborhood; this obviously makes the result biased, but I think it will yield a much more attractive image without any obvious change.

[OK, the following doesn’t work – the resulting bias is actually quite obvious. My idea was: calculate the average sample value for a pixel, and clamp all samples for the pixel to some multiple (10x? 20x? something based on the number of samples?) of that average. Then any pixel with a reasonable number of very bright samples will still be obviously bright, because those bright samples will make the average pretty high, and even single bright samples will remain bright, but not enough to completely overwhelm the pixel brightness by themselves.]

Fix “double-checked locking” in Subspace

The Subspace class uses “double-checked locking”[1] to lazily initialize its acceleration structures. This is not guaranteed to be safe on modern multi-processor systems, although it falls into the “kinda sorta works in practice, usually” category (the [very rare] failure mode is reasonably benign as well – conflicting threads will create multiple redundant acceleration structures, and leak all but the last one).

To make it truly safe without using a mutex in the “already initialized” case, one needs memory barriers to ensure that changes properly propagate to other cores/processors; in traditional C++ this wasn’t possible to do portably, but it can be done using C++11’s atomic features. Unfortunately few compilers actually seem to implement those features yet (gcc is getting there recently)…

[1] http://en.wikipedia.org/wiki/Double-checked_locking
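For reference, a minimal sketch of what the C++11 version might look like (not actual Subspace code; Space and make_space are stand-ins for the real acceleration-structure type and its builder):

    #include <atomic>
    #include <mutex>

    struct Space { /* ... acceleration structure ... */ };
    static Space *make_space () { return new Space; }  // stand-in builder

    class Subspace
    {
    public:
      Subspace () : space (0) { }

      // Return the acceleration structure, building it on first use.
      // The release store / acquire load pair guarantees that a thread
      // which sees a non-null pointer also sees the fully constructed
      // structure; the mutex only matters during initial construction.
      Space *get_space ()
      {
        Space *s = space.load (std::memory_order_acquire);
        if (! s)
          {
            std::lock_guard<std::mutex> guard (lock);
            s = space.load (std::memory_order_relaxed);
            if (! s)
              {
                s = make_space ();
                space.store (s, std::memory_order_release);
              }
          }
        return s;
      }

    private:
      std::atomic<Space *> space;
      std::mutex lock;
    };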

Add scene loader using “assimp” library

This library can load tons of formats, and its interface seems reasonable enough (generic scene descriptions are inevitably annoying, but this seems dealable).

It’s huge though, so maybe add this as a dynamically loadable module via Lua.

One annoying thing … it seems to use Blender definitions for materials, which are … weird (e.g., Blender’s Cook-Torrance model doesn’t seem to actually resemble Cook-Torrance at all).

Efficient Global Illumination

  • TODO Metropolis Light Transport

    (which does “genetic” selection of paths to trace)

    PBRTv2 has an implementation.

    Also, check out the (partial?) code in “Advanced Global Illumination 2nd ed” (Philip Dutre, Philippe Bekaert, Kavita Bala).

  • TODO Instant radiosity
  • TODO Irradiance Caching

Add support for area-lights in instances

Not so hard now that light-samplers are split from lights; the former acts as a level of indirection that can be used to add duplicate instances of lights.

All light-samplers are global, however, so light-samplers added in the model would have to be transformed. We’d need to add a XFORM argument to the Surface::add_light_samplers interface, and this is slightly annoying because currently light-samplers freely reference state such as locations in the light they came from.... Probably the best way is to bite the bullet and just duplicate all state (possibly transformed) in light-samplers, rather than keeping references to the original lights.

Add alternative types of sample generation

E.g., quasi Monte Carlo.

Write a C extension to Lua for quickly parsing large vertex/triangle arrays

This would allow the caller to specify a very simplistic grammar – essentially requiring the input to be arrays of float/int constants in the same order, but with a little flexibility as to the strings separating the values. This should vastly speed up some Lua parsers (which can take a lot of time to read in large meshes).
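A rough sketch of the sort of thing intended (the function and module names are hypothetical, not an existing snogray API): a small C/C++ function registered with Lua that scans a string for floating-point constants and returns them as a single Lua array, avoiding per-number parsing at the Lua level:

    #include <cstdlib>
    #include <lua.hpp>

    // parse_float_array (STRING) => array-like table of numbers
    //
    // Scan STRING for floating-point constants, skipping whatever
    // separates them, and return them in a new table.
    static int
    parse_float_array (lua_State *L)
    {
      size_t len;
      const char *str = luaL_checklstring (L, 1, &len);
      const char *end = str + len;

      lua_newtable (L);

      int count = 0;
      while (str < end)
        {
          char *num_end;
          double val = strtod (str, &num_end);
          if (num_end == str)
            {
              str++;            // not a number; skip one separator character
              continue;
            }
          str = num_end;
          lua_pushnumber (L, val);
          lua_rawseti (L, -2, ++count);
        }

      return 1;                 // the table is the single return value
    }

    // Module open function; from Lua:  local rawparse = require 'rawparse'
    extern "C" int
    luaopen_rawparse (lua_State *L)
    {
      lua_newtable (L);
      lua_pushcfunction (L, parse_float_array);
      lua_setfield (L, -2, "parse_float_array");
      return 1;
    }

A mesh loader could then split its input into the vertex and triangle sections with Lua string operations and hand each big chunk to parse_float_array in one call.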

Add scene loaders:

  • TODO RenderMan
  • TODO Cinema 4D

    There is a published spec for version 5 of the C4D format, which is not current, but apparently designated as a sort of “exchange format” for C4D. This is a binary format, but maybe parseable using Lua/lpeg/struct.

  • TODO 3DS MAX

    Note this is different from 3DS format, which is already supported.

  • TODO Povray

Implement “Lightcuts” algorithm for optimizing (quite dramatically) huge numbers of lights:

http://www.cs.cornell.edu/~kb/publications/SIG05lightcuts.pdf

Also implement “Reconstruction cuts” (further optimizes lightcuts algorithm)

Better light management to handle huge numbers of lights.

  1. Keep list of lights ordered in terms of “apparent strength” (e.g. intensity * solid angle); this varies per pixel, but is a good candidate for caching as it will change slowly for nearby points.
  2. Use the “apparent strength” of lights to influence sample allocation (a small sketch of computing and ordering by apparent strength follows this list)
  3. The cached ordered list of lights can have two categories: nearby lights and far-away lights (based on given point/bounding-box). When moving to a new point, we (1) see if any far-away lights have become “nearby”, in which case we recalculate the whole list, and otherwise (2) reorder lights in the “nearby” list according to their current apparent strengths.

    Note that “nearby” lights are not necessarily stronger than faraway lights, merely more likely to change in strength (the sun for instance, is probably always at the front of the “apparent strength” list, yet always in the faraway list). For typical scenes, the number of nearby lights is probably much smaller than faraway lights.
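A small self-contained sketch of the “apparent strength” ordering mentioned above (LightInfo and its fields are hypothetical, not snogray structures):

    #include <algorithm>
    #include <vector>

    // Per-light summary as seen from some point of interest.
    struct LightInfo
    {
      float intensity;          // emitted intensity
      float area;               // emitting surface area
      float dist2;              // squared distance from the point

      // Intensity scaled by the (approximate) subtended solid angle.
      float apparent_strength () const
      {
        float min_dist2 = 1e-6f;            // avoid dividing by zero
        return intensity * (area / (dist2 > min_dist2 ? dist2 : min_dist2));
      }
    };

    static bool
    stronger (const LightInfo &l1, const LightInfo &l2)
    {
      return l1.apparent_strength () > l2.apparent_strength ();
    }

    // Order lights by apparent strength, e.g. so that sample allocation
    // can be biased towards the front of the list.
    void
    order_lights (std::vector<LightInfo> &lights)
    {
      std::sort (lights.begin (), lights.end (), stronger);
    }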

Octree improvements

  • Make octree smarter

    See “voxtree-hacks-20050928-0.patch”

    Have different types of octree node for different situations:

    • Normal octree nodes
    • Leaf nodes (no children nodes, just objects)
    • Bounding-box-sub-nodes nodes: instead of a simple octree, children nodes can have an arbitrary bounding box – this is good for the case where the child(ren) has a dramatically different scale, as degenerate long single chains of octree nodes can be avoided (the “teapot in a stadium” problem)
  • Make octree searching directional

    Currently it will find objects in random order; typically it would be much better to find closer objects first.

    For octree, put children node pointers in a length-8 array, where the 3 bits in the array index correspond to x-hi/lo, y-hi/lo, z-hi/lo respectively. Then when starting the search at the top-level, we compute a quick “first check” index according to the rough direction of the ray we’re searching for. When iterating over sub-nodes in the octree, we can simply do:

    for (unsigned index_mask = 0; index_mask < 8; index_mask++)
      search_subnode (subnodes[ray_start_index ^ index_mask]);

  • Maybe try to use ray-marching instead of tree-searching

Implement alternative acceleration structures

  • TODO Bounding-Interval Hierarchy. This can be modified to support objects in non-leaf nodes, which should help with some of the object-size issues.
  • TODO KD-trees

Implement new BRDFs

  • DONE Cook-Torrance
  • TODO Ashikhmin
  • TODO Ward
  • TODO Schlick
  • TODO LaFortune
  • TODO Oren-Nayar

Implement more elegant material framework

Pbrt has a nice system:

  • A generalization of the BRDF, the BxDF (“Bidirectional reflection/transmission Distribution Function”) handles either hemisphere (ordinary BRDFs only handle the reflection hemisphere).
  • BxDFs are separable, and a container, the BSDF (“Bidirectional Scattering Distribution Function”), can hold a set of them, which are combined to form the entire lighting calculation for a point. [This seems to be done dynamically, at intersection time?!?]

    For instance, instead of every BRDF calculating its own diffuse term, one can just add a standard Lambertian BRDF to whatever more specialized BxDFs are being used for a material.

  • Ideal specular reflection/transmission are defined as BxDFs (in terms of the Dirac delta function), not special materials.
  • Various types of microfacet materials are combined, with subclasses only needing to override the “D()” method.

Fix UV handling at wrapped seams of tessellated surfaces

Mesh interpolation of UV parameters in meshes generated by tessellation screws up when interpolating across seams. The reason is that currently only a single vertex is generated where both “sides” of a seam meet, and obviously this single vertex can only have a single UV parameter. Thus at a seam where U wraps at 1, if a seam vertex has a U value of 1, then interpolation will act funny on the side where it’s “really” 0, and if that seam vertex has a U value of 0, then interpolation will act funny on the side where the U value should really be 1.

The way to fix this is to generate two vertices at seams, one for each side of the seam, at the same coordinates. If no precise normal information is available for the tessellation (which might be true for a displacement-mapped tessellation even if the base tessellation has normal info), then this will require the tessellation code to calculate its own vertex normals, ignoring the vertex splitting at seams (treating seam vertices as singular), because otherwise the normals at seams will be slightly incorrect.

Implement system for generating (and preserving, in snogcvt) image meta-data

Simple EXIF support is added by exif-20061030-1.patch, but probably it’s better to use a more general abstraction (e.g., just keep meta-data in a ValTable until output time), so that other meta-data formats like XMP and IPTC can be supported.

Get rid of seams at corners of cube-maps.

This is caused by the fact that the face textures run from edge to edge, and there is no interpolation at the edge of two adjoining faces.

A simple fix is to add a 1-pixel border to each face image, and fill it with pixels from the adjacent face. Then, the mapping from cube coordinates into texture coordinates can be slightly perturbed such that the face parameter range [-1, 1] maps into the texture parameter range [PIXW/2, 1 - PIXW/2], where “PIXW” is the width of one pixel in the texture’s source image.

Maybe to maintain the texture abstraction, a texture “adjoin” method can take care of filling in the pixels and tweaking the texture’s own scaling factors so that the texture input parameters are still in the range [0, 1].
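A tiny sketch of the coordinate tweak just described (face coordinate u in [-1, 1]; pix_w is the width of one pixel in the bordered face image; the names are just illustrative):

    // Map a cube-face coordinate in [-1, 1] to a texture coordinate in
    // [pix_w / 2, 1 - pix_w / 2], so that lookups never sample outside
    // the 1-pixel shared border copied from the adjacent face.
    static float
    face_to_tex (float u, float pix_w)
    {
      float lo = pix_w / 2, hi = 1 - pix_w / 2;
      return lo + ((u + 1) / 2) * (hi - lo);
    }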

[We don’t really use cube-maps anymore, so maybe better just remove them?]


Completed items:

[2012-03-24] Rewrite driver layer in Lua

It really just makes sense…

I had this odd idea that it should be able to work without any Lua installed, but that’s just silly – it cripples the input of scene files, and Lua’s so small and portable it’s fine just to include a Lua source distribution to avoid adding an external dependency.

An interesting question, though, is: should I (1) restructure everything as a pure module which can be loaded by an external Lua, or (2) just keep a thin driver and compile in Lua?

(1) might be a bit more elegant, but maybe requires more install-time magic, so that Lua can find the module, etc. Also, (2) would avoid any efficiency hit from PIC compilation (required for shared objects on linux), and maybe other shared-object-specific goo; not sure if this is really a big deal on common archs or not…

[2012-02-26] Make snogray installable!

The main issue is just having some sort of search path so that it can find any resources it needs.

[2012-02-16] Add “fake hdr” utility

Basically converts a LDR (low-dynamic-range) image into a “fake” HDR image by artificially boosting clamped highlights. With some tweaking this often makes LDR environment maps useful for lighting (whereas they’re normally not so useful because “highlights” in a LDR image are typically severely clamped compared to the original scene).

Basic algorithm:

    for_each (pixel)
      if (brightness (pixel) > THRESHOLD)
        pixel *= BOOST;

where THRESHOLD and BOOST are user-tweakable parameters. THRESHOLD = 0.98 and BOOST = 5 have proved reasonable for me in the past.

[2011-12-30] Finish bloom-filter (aka “veiling luminance”) app

Not so hard using FFTW3 for FFT, but some annoying details and lots of tweaking to look good…

Use PSF from “Physically-Based Glare Effects for Digital Images” (Spencer/Shirley/Zimmerman/Greenberg 1995, aka SSZG95).

bloom-20110527-0.patch

[2011-12-11] Make snogcvt work for upscaling too

Doesn’t work properly now because the inner loop always loops over incoming pixels and adds output samples, which leaves gaps when upscaling.

The basic idea is that for upscaling, we need to loop over the output grid instead of the input grid (which will end up using input pixels multiple times).

Implementing this should be fairly easy: Use floating-point input image coordinates, and when an image dimension is bigger in the output, make the increment for that dimension’s coordinate input_size / output_size instead of 1. A nested loop inside the current “fetch next input row” step should increment the y-coordinate and re-use the current row until it is 1 greater than the current row’s base coordinate. The inner-most loop can work similarly for pixels, but that’s even easier because we can just address them directly using the floor of the x-coordinate.
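A rough standalone sketch of the output-driven loop (the Image struct here is just a stand-in, not snogray’s real image classes, and it point-samples rather than filtering):

    #include <algorithm>
    #include <vector>

    struct Image               // stand-in for the real image class
    {
      unsigned width, height;
      std::vector<float> pix;  // one channel, for simplicity
      float get (unsigned x, unsigned y) const { return pix[y * width + x]; }
      void put (unsigned x, unsigned y, float v) { pix[y * width + x] = v; }
    };

    // Copy SRC into DST (already allocated at its target size), stepping
    // over the output grid and mapping each output pixel back to a
    // floating-point input coordinate, so input pixels get re-used when
    // upscaling instead of leaving gaps.
    void
    rescale (const Image &src, Image &dst)
    {
      float x_step = float (src.width) / dst.width;
      float y_step = float (src.height) / dst.height;

      for (unsigned y = 0; y < dst.height; y++)
        {
          unsigned src_y = std::min ((unsigned)(y * y_step), src.height - 1);
          for (unsigned x = 0; x < dst.width; x++)
            {
              unsigned src_x = std::min ((unsigned)(x * x_step), src.width - 1);
              dst.put (x, y, src.get (src_x, src_y));
            }
        }
    }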

[2011-12-05] Rip out weird specialized functionality from snogcvt

E.g., “--underlay” … I’m not even sure this is really very useful at all, but at the least, it should be in some small standalone app, not in snogcvt.

Add scene loaders:

  • DONE PBRT

Add a more general area-light type

Currently there are individual light classes for each type of area-light – e.g. SphereLight and TriparLight. This might be desirable for commonly used surface types, for efficiency, but to allow other types of surfaces to be used for lighting, it would be nice to have a general AreaLight class that takes care of the lighting and calls Surface methods to handle shape-specific details.

Reorganize the Surface abstraction

The Surface type currently contains geometry, and a material pointer.

We would also like to add light information to primitives (see also “Allow any surface to be a light”). Most likely, this means adding a light pointer to surfaces (which would usually be NULL).

Because Surface is the basic object used for search acceleration, and meshes contain many, many surfaces, this means that there’s lots of space consumed by redundant material pointers, and potentially by NULL light pointers, in mesh triangle objects. [This is especially bad on a 64-bit architecture, as each pointer takes 8 bytes! Even on machines with lots of memory, the cache footprint of all those pointers is a big problem.]

To allow expanding the information stored in primitives, while reducing the size of meshes, we can do the following (a rough sketch follows the list):

  1. Add a new class Primitive, which is a subclass of Surface
  2. Move the Surface::material field to Primitive::_material
  3. Most primitives (spheres, cylinders, entire meshes, etc) will be subclasses of Primitive instead of Surface, but as most external users will still refer to Surface, mesh triangles and other special cases can just be subclasses of that instead, to save space.
  4. Add a virtual Surface::material() method for getting the material for a surface; most rendering primitives will be subclasses of Primitive, and Primitive::material() will just return its _material field directly, but, for instance, mesh triangles will look up the material in their enclosing mesh to save space in the triangle object.
  5. Add a virtual Surface::light() method for obtaining a surface’s associated light; like for materials, Primitive will implement this via an explicit field and mesh triangles will look it up in the parent mesh.
  6. The Intersect constructor will call the surface’s material() and light() methods to fill in the Intersect::material and (the new) Intersect::light fields. This is not only for efficiency, but also to give, for instance, Instance::IsecInfo::make_intersect a chance to replace the light in a new intersection with a globally valid light (so the light actually stored in the primitive would have to be some sort of proxy light, and the instance would need to have a mapping between proxy lights inside that instance and real lights outside of it [repeat as necessary for nested instances]).
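A rough sketch of the proposed split (hypothetical, heavily simplified declarations; the real classes obviously carry much more):

    class Material;
    class Light;

    class Surface
    {
    public:
      virtual ~Surface () { }

      // Material and associated light for this surface (the light is
      // usually 0).
      virtual const Material *material () const = 0;
      virtual const Light *light () const { return 0; }

      // ... geometry / intersection interface ...
    };

    // Stand-alone primitives (spheres, cylinders, etc) carry their
    // material, and possibly a light, directly.
    class Primitive : public Surface
    {
    public:
      Primitive (const Material *mat) : _material (mat), _light (0) { }

      virtual const Material *material () const { return _material; }
      virtual const Light *light () const { return _light; }

    private:
      const Material *_material;
      const Light *_light;
    };

    // Mesh triangles stay small by looking everything up in the parent
    // mesh.  (In the real plan the Mesh itself would be a Primitive; it
    // is a plain struct here just to keep the sketch short.)
    struct Mesh
    {
      const Material *material;
      const Light *light;
      // ... vertices, triangles, etc ...
    };

    class MeshTriangle : public Surface
    {
    public:
      MeshTriangle (const Mesh *mesh) : _mesh (mesh) { }

      virtual const Material *material () const { return _mesh->material; }
      virtual const Light *light () const { return _mesh->light; }

    private:
      const Mesh *_mesh;
      // ... vertex indices ...
    };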

Allow any surface to be a light

Currently lights are entirely separate from surfaces, so if a user wants to, for instance, make a “light bulb”, he has to both create a sphere-light and add a sphere surface (with a special “glowing” material) to the scene at the same point. Furthermore, there’s a duplication of knowledge between source files implementing a “foo surface” and a “foo light”.

A more convenient way would be for lights to automatically be associated with surfaces, although from the user’s point of view, the “lightness” of a surface is actually related to the material of that surface. Users could simply add a sphere with a “glowing material”, and the code would automatically figure out there should be a sphere-light at that location. [There would still need to be non-associated lights, for special light types like point lights and environment-map lights.]

The rendering code still needs to have a global list of light objects, so there basically should be a step before rendering where the scene is walked, and a light created for any surface with a “light material”.

For more details on how this is stored into surfaces, see “TODO Reorganize the Surface abstraction”.

Clean up environment-map region-splitting code.

Instead of weird iterative searching for the best split point for splitting regions, just add a LightMap method to return a split point (the current code ends up more or less just splitting as evenly as possible taking mapping distortion into account).

Efficient Global Illumination

Implement better integrators

“OldInteg” is overly complicated, hard to control, and probably wrong. It should be removed once there’s a decent replacement.

  • DONE Simple direct integrator
  • DONE Path-tracing integrator
  • DONE Photon-mapping integrator

Figure out the whole damn cos-wo thing

Some BRDF formulations include an implicit cos-wo term (to correct for projected area; “wo” is the angle between the surface normal and the light source), whereas others don’t, and only include the raw BRDF function, but it’s very confusing which is which, as sometimes this term is actually hidden deep within the BRDF formulas for efficiency.

Ugh. I’m totally confused by it, as I don’t really understand the math behind half the BRDFs out there, but it has to be sorted out so everything’s correct.

Plan: BRDF sampling and evaluation functions should only use the raw function, and let the integrators deal with adding the cos-wo term.

Add system for sharing sample generation info among different levels of sampling

For instance, if using 3x3 eye-rays and 25 light samples, the final stratification of light samples should be done using 225 (3 x 3 x 25) total samples, and the light-sample stratification state maintained across multiple eye-rays. This should allow better stratification even when using a high eye-ray:light-sample ratio (e.g. for DOF).

[DONE, as part of the new “integrator” rewrite.]

Restructure current “Trace” datatype

Currently this serves two purposes:

  1. It provides a path upwards which can be used to do things like finding the proper index of refraction when exiting a glass surface.
  2. It provides a place to hang caches which persist beyond the current ray; this is a “downward” operation.

The first use, might be better served by a simple linked list of current Intersect objects. The second, by an explicit “IsecCache” datatype, which would only be “downwards” (and which would hang off Intersect objects during tracing).

[DONE, by moving most of the state out of Trace: various global info just moved into RenderContext, and the cache stuff just eliminated (it didn’t actually help that much, and removing it makes the code a lot simpler).]

Texture support

Add a “TextureMap” data type which maps between a surface point and texture coordinates.

New class TexCoords to hold two texture coordinates?

Add a “Pigment” data type, which contains a Color and a Texture pointer; most places where Color is now used in materials, should use Pigment instead (have constructors so that it can be initialized just like a color, resulting in a 0 Texture pointer).

Pigment has a “color(…)” method, which takes a texture-map and a location, and returns a color by using the texture-map to map the location into texture coordinates, and looking them up in the texture (if the pigment’s texture is 0, of course, just return its color).

Surface class gets a “texture_coords” method that returns Texture coordinates for a given point on the surface. PrimarySurface gets a new field to explicitly store a texture map, and uses that to implement texture_coords; mesh triangles use something else, e.g., explicit per-vertex texture-coordinate tables with interpolation.

Intersect constructor should look up and cache texture-coordinates for the intersection point.

Anyplace where a Color has been replaced by a Pigment should now call `pigment.color (isec.tex_coords)’ instead of just returning the color directly (when the pigment’s texture pointer is 0, this will be almost as fast).

Similar techniques are needed for looking up normals; shared superclass with textures?

[DONE, more generally, by rewriting illumination to use the IllumMgr and subclasses of Illum]

Split illumination into DirectIllum and IndirIllum superclasses

The current SampleIllum class should probably just be renamed to DirectIllum. IndirIllum would be a superclass for photon-mapping or hemisphere-sampling indirect illumination algorithms.

Make mirror reflections use light-model

Importance sampling + indirect illumination

Implement instancing

Encapsulate space/octree abstraction inside a surface interface

… so that there can be multiple nested search spaces. This might help speed (or hurt it…) by consolidating logically related surfaces into their own search spaces, but more importantly, it allows easy object instancing by allowing us to wrap a search-space with a coordinate transform.

Use single-precision floating-point uniformly

Currently double-precision is used widely (e.g. the Vec and Pos classes). Just switching completely to single-precision mostly works fine, but causes problems with reflected/transmitted rays after sphere-intersection – so debug those errors, and make sphere intersection work with single-precision floating point. Probably just needs some kind of error margin when tracing recursive rays.

Add a real scene definition language

[DONE, by adding a Lua interpreter.]

Importance sampling

Add a method to the Material class to supply a jittered set of rays (given a desired number of rays as input) corresponding to the Material’s BRDF. The output should be efficient for large numbers of rays (e.g. ~1000) so it can be re-sampled to combine it with another importance function; each element in the set should just be something like <reflectance, direction> (the origin is implied).

Lighting from environment map

Use importance resampling to combine the BRDF importance function with an importance function for the whole environment map (one hemisphere?).

See “Bidirectional Importance Sampling for Direct Illumination”, http://www.cs.ubc.ca/~dburke/downloads/EGSR05-Bidir_Importance.pdf

Switch to proper filtering for anti-aliasing, taking jitter etc into account:

Add a Filter class, which takes a simple x,y offset and returns a filter value at that point (relative to the origin). Add a “SampleGen” class which will generate a series of samples (stratified etc) in a unit square. For each output pixel, generate samples, render them, and then get the filter value for each sample’s offset from the pixel center, and combine them using the formula

Sum(filt(samp) light(samp)) / Sum(filt(samp))
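A minimal self-contained sketch of that weighted combination for one pixel (the Sample struct and the filter signature are placeholders for whatever the eventual Filter/SampleGen interfaces look like):

    #include <cstddef>
    #include <vector>

    struct Sample
    {
      float dx, dy;             // offset from the pixel center
      float value;              // rendered value of the sample
    };

    // Combine a pixel's samples using the filter as a weight:
    //   sum (filt (dx, dy) * value) / sum (filt (dx, dy))
    float
    filter_samples (const std::vector<Sample> &samples,
                    float (*filt) (float dx, float dy))
    {
      float num = 0, denom = 0;
      for (size_t i = 0; i < samples.size (); i++)
        {
          float w = filt (samples[i].dx, samples[i].dy);
          num += w * samples[i].value;
          denom += w;
        }
      return denom == 0 ? 0 : num / denom;
    }

    // Example filter: a simple triangle (tent) filter of radius 1.
    float
    tent_filter (float dx, float dy)
    {
      float ax = dx < 0 ? -dx : dx, ay = dy < 0 ? -dy : dy;
      return (ax < 1 && ay < 1) ? (1 - ax) * (1 - ay) : 0;
    }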

Get rid of wacky scan-line-based filtering I guess.

For the problem of doing post-processing of HDR images, an idea is:

When converting from an HDR representation to a limited-gamut representation, use filtering even if the image resolution is the same; make sure the filter support is wide enough so that a high-intensity pixel adjacent to a low-intensity pixel will “bleed” into it via the tail of the filter (but make the tail amplitude small enough so that the bleeding in normal cases is negligible).

Get rid of weird error handling in image code and just use exceptions

Get rid of “Image{Source,Sink}Params” classes and instead just pass user-supplied “parameter strings” to image backends.

The parameter strings can be appended to the “image type” command-line options, separated by “:”, with “=” assigning a value. So for instance, for jpeg, the user could specify “-Ojpeg:quality=98” (as the user will usually specify the image type indirectly via the output file extension, an “=” before the first “:” could be taken to mean that the first element is a parameter rather than a type).
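A small sketch of splitting such an option string (a hypothetical helper, not the actual snogray parser; it ignores the refinement about a bare parameter without a type):

    #include <map>
    #include <string>

    // Split an option like "jpeg:quality=98:progressive=true" into a
    // type name and a table of key=value parameters.
    void
    parse_image_type_option (const std::string &opt,
                             std::string &type,
                             std::map<std::string, std::string> &params)
    {
      std::string::size_type pos = opt.find (':');
      type = opt.substr (0, pos);

      while (pos != std::string::npos)
        {
          std::string::size_type end = opt.find (':', pos + 1);
          std::string param
            = opt.substr (pos + 1,
                          end == std::string::npos
                          ? std::string::npos : end - pos - 1);

          std::string::size_type eq = param.find ('=');
          if (eq != std::string::npos)
            params[param.substr (0, eq)] = param.substr (eq + 1);

          pos = end;
        }
    }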

Use Fresnel equations to calculate reflectance/transmittance

http://scienceworld.wolfram.com/physics/FresnelEquations.html

Abstract octree into a more generic “space division” data-type

Allow experimenting with different implementations, e.g., KD-trees. [This might fit well with the octree generalization below.]

Soft shadows

[DONE, RectLight, FarLight.]

Optimize triangle intersection calculation.

(Accounts for a huge proportion of tracing time)

Simple support for shadows of transparent/translucent objects

Real support (bending of light rays) is too hard, but for a simple straight light ray, it should be easy and better than nothing. [For real support, caustics etc., use photon transport]

;; arch-tag: 87fcbf10-c76f-43d6-9b09-469aba284b80