A method for multispectral texture synthesis using sparse image modeling. Proof of concept.
| Input texture | Synthesized texture |
|---|---|
Here, similarly to inpainting applications, texture images are represented by overlapping image patches, where each patch is a sparse linear additive model. In addition to the small-scale (single-patch) traits of the texture that sparse modeling captures by itself, we attempt to bring out large-scale traits in the synthesized image, i.e. how the patches connect to each other. More precisely, we propose to minimize the difference between the pixel values of overlapping patches, with the aim of synthesizing an entirely new, visually convincing texture with minimally pronounced blocky artifacts.
We formalize these requirements as the following convex optimization problem: min_{v} \sum_{i,j} D[ S1_{ij} T v_i, S2_{ij} T v_j ], where D[.] denotes the Euclidean distance, i and j are indices of the patches composing the texture in a predefined order, T is the dictionary with its atoms organised column-wise, and v_i is the vector of sparse activation coefficients of patch i. For a pair of overlapping patches {i, j}, the shared region is selected by two indicator matrices S1_{ij} and S2_{ij}. In addition, a sparsity penalty is used to promote sparsity of v_i.
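To make the notation concrete, below is a minimal toy sketch of one such overlap term for a pair of horizontally adjacent patches. The patch size, the random dictionary, and all variable names are illustrative assumptions, not the repository's actual interface:

```matlab
% Toy sketch of one overlap term (illustrative only; single band for simplicity).
p  = 8;                                  % assumed patch side length
K  = 64;                                 % assumed number of dictionary atoms
T  = rand(p*p, K);                       % dictionary, atoms stored column-wise
vi = full(sprand(K, 1, 0.15));           % sparse nonnegative activations of patch i
vj = full(sprand(K, 1, 0.15));           % sparse nonnegative activations of patch j

% Indicator matrices selecting the shared pixels of two horizontally adjacent
% patches that overlap by half their width (column-major pixel ordering).
overlap = p * (p/2);
S1 = [zeros(overlap, p*p - overlap), eye(overlap)];   % right half of patch i
S2 = [eye(overlap), zeros(overlap, p*p - overlap)];   % left half of patch j

% One term of the objective: Euclidean distance between the two patch
% reconstructions on the overlapping region.
D_ij = norm(S1 * (T * vi) - S2 * (T * vj));
```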
The objective function is minimized by multiplicative updates under which nonnegativity of the activation coefficients is preserved.
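Continuing the toy setup above, one multiplicative update of v_i against its neighbour j could look as follows; the gradient split and the sparsity weight lambda are assumptions in the spirit of sparse NMF, not the repository's exact rule:

```matlab
% Sketch of a multiplicative update for the activations of patch i
% (squared-distance form of the overlap term plus an L1 sparsity weight).
lambda = 0.1;                            % assumed sparsity weight
A = S1 * T;                              % maps v_i to the overlap pixels
B = S2 * T;                              % maps v_j to the same pixels
num = A' * (B * vj);                     % negative part of the gradient
den = A' * (A * vi) + lambda;            % positive part of the gradient + sparsity
vi  = vi .* (num ./ (den + eps));        % elementwise update; vi stays nonnegative
```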
Run the scripts in the following order (an example call sequence follows the list):
- extract_patches.m, which breaks down the input images into patches
- sparseNMF_uberscript.m, which trains sparse dictionaries by a variant of sparse NMF
- pattern_generation_uberscript, which synthesizes the new texture by minimizing the objective described above
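For example, an end-to-end run could look like the snippet below (assuming the default parameters set inside each script):

```matlab
% Hypothetical end-to-end run with the scripts' built-in defaults.
extract_patches;                 % break the input images into overlapping patches
sparseNMF_uberscript;            % learn sparse dictionaries with a sparse NMF variant
pattern_generation_uberscript;   % synthesize the new texture from the learned model
```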
The implementation currently iterates through the set of patches, but the updates can be parallelized: patches that do not interact can be updated simultaneously. To do so, wrap the update code in a function.
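As an illustration of that idea, non-interacting patches could be grouped and each group swept with a parfor loop (Parallel Computing Toolbox). The containers `groups` and `v` and the helper `update_patch` below are hypothetical, not existing code:

```matlab
% Hypothetical parallel sweep: patches in the same group do not overlap,
% so their activation vectors can be updated independently.
for g = 1:numel(groups)
    idx   = groups{g};               % indices of mutually non-interacting patches
    v_old = v;                       % snapshot of all activations, read by the workers
    v_new = cell(1, numel(idx));
    parfor k = 1:numel(idx)
        v_new{k} = update_patch(idx(k), v_old, T);   % update one patch independently
    end
    v(idx) = v_new;                  % write the updated activations back
end
```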
If you use this code in a publication, please mention this repository.
Published under GPL-3.0 License.