Metappearance: Meta-Learning for Visual Appearance Reproduction
SIGGRAPH Asia 2022, Journal Track
Michael Fischer, Tobias Ritschel
University College London
Our work has been featured in the ACM Showcase.
TLDR: We use meta-learning to encode visual appearance. Metappearance can be trained in less than a second while providing quality similar to that of conventionally trained networks, which take orders of magnitude longer to train. We show results on a wide variety of applications and analyze important properties, such as convergence & efficiency.
Abstract
There currently exist two main approaches to reproducing visual appearance using Machine Learning (ML): The first is training models that generalize over different instances of a problem, e.g., different images of a dataset. As one-shot approaches, these offer fast inference, but often fall short in quality. The second approach does not train models that generalize across tasks, but rather over-fits a single instance of a problem, e.g., a flash image of a material. These methods offer high quality, but take a long time to train. We suggest combining both techniques end-to-end using meta-learning: We over-fit onto a single problem instance in an inner loop, while also learning how to do so efficiently in an outer loop across many exemplars. To this end, we derive the required formalism that allows applying meta-learning to a wide range of visual appearance reproduction problems: textures, Bi-directional Reflectance Distribution Functions (BRDFs), spatially-varying BRDFs (svBRDFs), illumination, or the entire light transport of a scene. The effects of meta-learning parameters on several different aspects of visual appearance are analyzed in our framework, and specific guidance for different tasks is provided. Metappearance enables visual quality that is similar to over-fit approaches in only a fraction of their runtime while keeping the adaptivity of general models.
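To make the inner/outer-loop idea concrete, below is a minimal, self-contained PyTorch sketch of the family of gradient-based (MAML-style) meta-learning the abstract describes. The toy MLP, hyper-parameters, and random placeholder data are illustrative assumptions, not the paper's actual architecture or tasks; in Metappearance, the inner loop would over-fit one appearance instance (e.g., one texture or BRDF) and the outer loop would optimize the shared initialization across many exemplars.

import torch

def forward(params, x):
    # Tiny MLP applied functionally, so we can differentiate
    # through the inner-loop parameter updates.
    w1, b1, w2, b2 = params
    h = torch.relu(x @ w1 + b1)
    return h @ w2 + b2

def inner_adapt(params, x, y, inner_lr=0.01, steps=3):
    # Inner loop: over-fit a copy of the meta-parameters to one instance.
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(forward(params, x), y)
        # create_graph=True keeps the graph so the outer loss can
        # backpropagate through these update steps (second-order).
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params

# Meta-parameters: the learned initialization that meta-training optimizes.
params = [torch.randn(8, 32, requires_grad=True),
          torch.zeros(32, requires_grad=True),
          torch.randn(32, 3, requires_grad=True),
          torch.zeros(3, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)

for step in range(1000):                           # outer loop: many exemplars
    x, y = torch.randn(64, 8), torch.randn(64, 3)  # placeholder instance data
    adapted = inner_adapt(params, x, y)            # inner loop: one instance
    meta_loss = torch.nn.functional.mse_loss(forward(adapted, x), y)
    meta_opt.zero_grad()
    meta_loss.backward()   # gradients flow through the inner loop
    meta_opt.step()

At test time only the cheap inner loop runs, starting from the meta-learned initialization, which is what allows adaptation to a new instance in a fraction of the runtime of training from scratch.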
Example Results
BibTeX
If you find our work useful or use parts or ideas of our paper or code, please cite:
@article{fischer2022metappearance,
  title     = {Metappearance: Meta-learning for visual appearance reproduction},
  author    = {Fischer, Michael and Ritschel, Tobias},
  journal   = {ACM Transactions on Graphics (TOG)},
  volume    = {41},
  number    = {6},
  pages     = {1--13},
  year      = {2022},
  publisher = {ACM},
  address   = {New York, NY, USA}
}
Acknowledgements
We thank the reviewers for their constructive comments. We also thank Philipp Henzler, Alejandro Sztrajman, Valentin Deschaintre and Gilles Rainer for open-sourcing their code and for insightful discussions. Lastly, we thank Meta Reality Labs for supporting this work.