Mohammad Ali Qorbani, Farhad Dalirani, Mohammad Rahmati, Mohammad Reza Hafezi
We are currently working on an application with a user-friendly UI that makes this method available to architects; it will be released at https://github.com/maqorbani/DCNU_Lighting. In the meantime, you can access our code and dataset in this repository.
Studying annual luminance maps during the design process provides architects with insight into a space's spatial quality and occupants' visual comfort. Simulating annual luminance maps is computationally expensive, especially if the objective is to render the scene from multiple viewpoints. This repository implements a deep-learning-based method that accelerates these simulations by predicting the annual luminance maps from only a limited number of rendered high-dynamic-range images. The proposed model predicts HDR images that are comparable to the rendered ones. Using a transfer learning approach, the model can robustly predict HDR images from other viewpoints in the space with fewer rendered images (as few as one-tenth) and less training. The method was evaluated using several metrics, including MSE, RER, PSNR, SSIM, and runtime, and it improves on the previous work across all of them, most notably with 33% lower MSE loss, 48% more accurate DGP values, and 50% faster runtime.
In a standard image simulation procedure, the rendering program simulates the luminance-based HDR image by first acquiring the desired scene's information as input, i.e., the scene geometry, materials, and light sources. It then performs ray tracing to determine how the emitted light rays illuminate each object in the scene by striking it, bouncing off, and eventually reaching the camera's sensor. In this method, by contrast, a deep neural network is trained on only a limited number of Radiance-simulated synthetic HDR images and then predicts the rest of the annual luminance maps without ray tracing, given each hour's corresponding lighting condition and the scene information.
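As a rough illustration of this mapping (not the repository's exact API; `predict_annual_luminance`, the tensor layout, and the way the sky descriptor is broadcast are all hypothetical), the inference stage boils down to feeding the static scene information together with each hour's sky parameters to the trained network:

```python
# Conceptual sketch only: the trained network maps each hour's lighting
# condition plus static scene information to a predicted luminance map.
import torch

def predict_annual_luminance(model, scene_tensor, hourly_sky_params):
    """scene_tensor: static per-pixel scene information, shape (C, H, W);
    hourly_sky_params: iterable of per-hour sky descriptors
    (e.g. altitude, azimuth, irradiances) as 1-D tensors."""
    model.eval()
    predictions = []
    with torch.no_grad():
        for sky in hourly_sky_params:
            # Hypothetical input assembly: broadcast the sky descriptor over
            # the image grid and concatenate it with the scene channels.
            sky_channels = sky.view(-1, 1, 1).expand(-1, *scene_tensor.shape[-2:])
            x = torch.cat([scene_tensor, sky_channels], dim=0).unsqueeze(0)
            predictions.append(model(x).squeeze(0))  # predicted HDR luminance map
    return predictions
```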
The proposed method consists of the following steps:
- Scene modeling
- Annual lighting conditions extraction
- Sparse samples selection
- Data set generation
- Training the neural network
An overview of our proposed method is presented in the following figure.
Modeling the scene can take place in any 3D CAD software. We recommend Rhinoceros, since with the Honeybee plugin you can easily assign Radiance materials to the geometry and export a Radiance scene description. A Radiance scene description with assigned materials is all you need for this step.
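For readers unfamiliar with Radiance, the snippet below writes a deliberately minimal example of such a scene description (a diffuse `plastic` material assigned to a single polygon); the material values and geometry are made up, and in practice Honeybee exports this file for you:

```python
# Minimal illustration of a Radiance scene description with an assigned material.
# The reflectance values and wall coordinates below are placeholders.
scene_rad = """\
void plastic wall_mat
0
0
5 0.7 0.7 0.7 0 0

wall_mat polygon south_wall
0
0
12
    0 0 0
    4 0 0
    4 0 3
    0 0 3
"""

with open("scene.rad", "w") as f:
    f.write(scene_rad)
```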
In this repository, we used the room presented in the following figure as our scene.
In this step, the annual sky (daylighting) conditions are needed. You should extract the sun's altitude, azimuth, and direct irradiance, as well as the sky's diffuse irradiance, from the climate (EPW) file of your desired region. We recommend using Ladybug for this step. Then, a Radiance sky description should be created from the extracted parameters using the Radiance GENDAYLIT command.
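A possible sketch of this step is shown below, assuming the per-hour sky parameters have already been extracted (e.g., with Ladybug) and that `gendaylit` is on your PATH; the gendaylit options and the appended glow/source primitives follow common Radiance practice rather than this repository's exact scripts:

```python
# Sketch: build one Radiance sky description per hour with gendaylit.
import subprocess

def write_sky(altitude, azimuth, dir_normal, diff_horiz, out_path):
    # gendaylit -ang takes the solar altitude and azimuth directly;
    # -W takes direct-normal and diffuse-horizontal irradiance (W/m2).
    sky = subprocess.run(
        ["gendaylit", "-ang", str(altitude), str(azimuth),
         "-W", str(dir_normal), str(diff_horiz)],
        capture_output=True, text=True, check=True,
    ).stdout
    # Append sky and ground glow sources so the description is renderable.
    sky += (
        "\nskyfunc glow sky_glow\n0\n0\n4 1 1 1 0\n"
        "sky_glow source sky\n0\n0\n4 0 0 1 180\n"
        "skyfunc glow ground_glow\n0\n0\n4 1 0.8 0.5 0\n"
        "ground_glow source ground\n0\n0\n4 0 0 -1 180\n"
    )
    with open(out_path, "w") as f:
        f.write(sky)
```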
Selecting sparse samples throughout the year can be carried out using the k-means.py script provided in the K-means directory.
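The idea behind k-means.py can be sketched as follows (the actual script may differ in its features and parameters): cluster the daylit hours by their sky parameters and keep, for each cluster, the hour closest to its centroid as a sample to be rendered:

```python
# Sketch of the sparse-sampling idea, not a copy of k-means.py.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def select_sparse_samples(sky_params, n_samples=100, seed=0):
    """sky_params: array of shape (n_hours, n_features), e.g. columns
    [altitude, azimuth, direct irradiance, diffuse irradiance] per daylit hour."""
    X = StandardScaler().fit_transform(sky_params)
    km = KMeans(n_clusters=n_samples, n_init=10, random_state=seed).fit(X)
    chosen = []
    for c in range(n_samples):
        # Pick the real hour nearest to each cluster centroid.
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])
    return sorted(chosen)
```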
After this step, the files are ready to be rendered using Radiance. However, you could also apply this method to images created by any other synthetic image renderer.
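A hedged illustration of such a rendering loop is given below; the `oconv`/`rpict` parameters and the view file are placeholders, not this repository's exact rendering settings:

```python
# Illustrative Radiance rendering loop for the selected hours.
import subprocess

def render_hour(scene_rad, sky_rad, view_file, out_hdr):
    # Combine scene and sky descriptions into an octree, then render an HDR image.
    oct_path = out_hdr.replace(".hdr", ".oct")
    with open(oct_path, "wb") as oct_file:
        subprocess.run(["oconv", scene_rad, sky_rad], stdout=oct_file, check=True)
    with open(out_hdr, "wb") as hdr_file:
        subprocess.run(
            ["rpict", "-vf", view_file, "-x", "512", "-y", "512",
             "-ab", "3", "-ad", "1024", oct_path],
            stdout=hdr_file, check=True,
        )
```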
The data set for the neural network training process is created with the TensorMaker.py script provided in the V2DataAnalysis directory, given the corresponding scene references, which should be placed in the SceneRefrences directory under their corresponding names.
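The sketch below conveys the general idea of this step, assuming each rendered HDR image is paired with its sky parameters and static scene channels; TensorMaker.py remains the authoritative implementation and its tensor layout may differ:

```python
# Rough sketch of the tensor-making step.
import cv2  # OpenCV reads Radiance .hdr (RGBE) images as float arrays
import torch

def make_dataset(hdr_paths, sky_params, scene_channels):
    """hdr_paths: rendered HDR images for the sampled hours;
    sky_params: matching per-hour sky descriptors;
    scene_channels: static per-pixel scene information, shape (C, H, W)."""
    inputs, targets = [], []
    for path, sky in zip(hdr_paths, sky_params):
        hdr = cv2.imread(path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
        target = torch.from_numpy(hdr).permute(2, 0, 1).float()  # (3, H, W)
        sky_map = torch.tensor(sky, dtype=torch.float32).view(-1, 1, 1) \
                       .expand(-1, *target.shape[-2:])
        inputs.append(torch.cat([scene_channels, sky_map], dim=0))
        targets.append(target)
    return torch.stack(inputs), torch.stack(targets)
```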
The PyTorchConvReg.py script in the V2DataAnalysis directory performs the neural network training.
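As a rough outline, the training loop looks like the following; the `ConvRegressor` placeholder is not the network used in this work (see the architecture figure below and PyTorchConvReg.py for the real one):

```python
# Minimal training sketch with a placeholder fully convolutional regressor.
import torch
import torch.nn as nn

class ConvRegressor(nn.Module):
    def __init__(self, in_channels, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train(model, inputs, targets, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return model

# Transfer learning to a new viewpoint, as described above, would reuse the
# trained weights (e.g. model.load_state_dict(torch.load("viewpoint1.pt")))
# and fine-tune on the smaller set of images rendered for the new view.
```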
The following figure depicts the neural network's architecture used in this method.
You can download the dataset here.
Additionally, you can create your own dataset to employ this method for predicting annual luminance maps. The scripts provided in the TheRender folder will help you achieve this. Moreover, using the RadianceEquirectangular repository, you can render equirectangular images with Radiance.