Unity (Texture Delighting)

About this project

This project is heavy on image processing and graphics. To make video games convincing simulations of the real world, the visual feel of the environment must be as realistic as possible, accounting for subtleties such as shadows and texture. The main process by which convincing textures are captured is called photogrammetry, which entails photographing an object from multiple angles and stitching the photographs together into a complete 3D representation of the object. The resulting texture can then be deployed in the final game, but first it must be delighted. This essentially means converting the texture to a canonical form representing the object as though it were in an environment with no directed light, by removing shadows and other directional lighting cues from the source photographs. Only after this phase is complete can the 3D representation of the object be placed in a simulated environment with artificially added shadows and the like.
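To make the goal concrete, here is a minimal sketch of what delighting means under a simple Lambertian assumption: each captured texel is modeled as an albedo value multiplied by a scalar shading term, and delighting amounts to dividing that shading back out. The function name and the toy shading gradient are illustrative, not part of any real pipeline, and real delighting is far harder because the shading term is unknown.

```python
import numpy as np

def delight_lambertian(captured, shading, eps=1e-6):
    """Recover an approximate albedo (delighted texture) from a captured
    texture, assuming captured = albedo * shading per texel (Lambertian)."""
    # Divide out the per-texel shading; eps guards against division by zero.
    return np.clip(captured / np.maximum(shading[..., None], eps), 0.0, 1.0)

# Toy example: a flat grey albedo darkened by a known shading gradient.
albedo = np.full((4, 4, 3), 0.5)
shading = np.linspace(0.2, 1.0, 16).reshape(4, 4)
captured = albedo * shading[..., None]
recovered = delight_lambertian(captured, shading)
print(np.allclose(recovered, albedo))  # True when shading is known exactly
```

In practice the shading term cannot be measured from photographs, which is precisely why a learned model is needed.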

Texture delighting is very difficult to do with conventional image processing techniques, because photographs alone cannot capture all the information about the light hitting an object. A machine learning model trained with supervised learning to delight texture images would be very useful in industry, with applications not just in game development but also in film special effects, CGI, marketing, and elsewhere. Currently, the delighting process must be carried out manually by trained artists, which is time-consuming and expensive.

To build such a model, the first phase of the project would be data generation, due to the lack of available data. This would begin with the artificial generation of images that are already free of shadows and directed light, followed by normal mapping to artificially add lighting. In this way a labelled dataset of lit/unlit image pairs can be generated. This is a non-trivial phase because the normal-mapping process must be refined until photorealistic images are produced. Once the dataset has been generated, a supervised model will be designed and trained to delight the textures in the images.
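The data-generation step described above can be sketched as follows: given a shadow-free albedo texture and its normal map, apply Lambertian n·l shading under a chosen light direction to produce a lit input image, with the original albedo as the supervised target. The `relight` function, the ambient term, and the random albedo are illustrative assumptions; a production pipeline would use a full renderer to reach photorealism.

```python
import numpy as np

def relight(albedo, normals, light_dir, ambient=0.1):
    """Artificially light a shadow-free albedo texture using its normal map
    (Lambertian n.l shading), producing one (input, target) training pair."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    # Per-texel diffuse term: clamp n.l to [0, 1].
    shading = np.clip(np.einsum('ijk,k->ij', n, l), 0.0, 1.0)
    lit = np.clip(albedo * (ambient + (1 - ambient) * shading)[..., None], 0.0, 1.0)
    return lit, albedo  # network input, delighted ground truth

# Toy pair: random albedo, flat-facing normals, light from the upper-left.
albedo = np.random.default_rng(0).uniform(0.2, 0.8, (8, 8, 3))
normals = np.zeros((8, 8, 3))
normals[..., 2] = 1.0
lit, target = relight(albedo, normals, light_dir=(-1.0, 1.0, 1.0))
```

Sweeping `light_dir` over many directions per texture is one way to multiply the dataset without creating new source images.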
