Researchers Advance Texture Mapping by Utilizing Diffusion

Researchers from Tel Aviv University introduce a new method for 3D texture mapping that improves upon previous methods in both efficiency and accuracy.
Created on February 8 | Last edited on February 9
With the rise of Stable Diffusion, researchers are now applying similar strategies to texturing three-dimensional objects, a task that previously required explicit surface-to-surface mappings.
Texture mapping is the process of projecting a 2D image onto a plain 3D object to make it look like a real-life object. Models like Stable Diffusion can paint a two-dimensional image directly from a text prompt; 3D painting methods, however, still lag behind in quality.
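To make the idea concrete, here is a minimal sketch of the lookup at the heart of texture mapping: each point on a 3D surface carries a normalized (u, v) coordinate, and rendering samples the 2D image at that coordinate. The function name and nearest-neighbor scheme are illustrative assumptions, not taken from any particular renderer.

```python
import numpy as np

def sample_texture(texture: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Nearest-neighbor lookup of texel colors at normalized UV coordinates.

    texture: (H, W, 3) image; uv: (N, 2) coordinates in [0, 1].
    """
    h, w = texture.shape[:2]
    # Map u -> column and v -> row; v is flipped so v=0 is the image bottom,
    # a common (but not universal) convention.
    cols = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return texture[rows, cols]

# A 2x2 black-and-white checker texture.
tex = np.array([[[0, 0, 0], [255, 255, 255]],
                [[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)

# Sample the top-left and top-right corners of UV space.
colors = sample_texture(tex, np.array([[0.0, 1.0], [1.0, 1.0]]))
```

Real renderers typically use bilinear filtering and mipmaps rather than nearest-neighbor lookup, but the mapping from surface coordinates to image pixels is the same idea.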
Last week, researchers at Tel Aviv University published TEXTure, a method for texture mapping directly from a text prompt. It improves upon existing methods in both quality and efficiency.
In addition, the method allows editing 3D textures directly from a text prompt, transferring textures from existing images, and generating a complete texture from a text prompt alone. These features add flexibility for users who want to make incremental changes to existing textures.
At a high level, the method renders the untextured object from different viewpoints, applies a depth-based painting scheme to each rendering, and finally projects the painted result back onto the mesh.
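The loop structure those steps imply can be sketched as follows. Everything here is a hypothetical skeleton: the stub functions stand in for the heavy components (a differentiable renderer and a depth-conditioned diffusion model), and none of the names come from the paper's actual code.

```python
import math

# Hypothetical stubs standing in for the heavy components of a TEXTure-style
# pipeline; names and signatures are illustrative assumptions.
def render_depth(mesh, view_angle):
    """Render a depth map of the mesh from the given viewpoint (stub)."""
    return {"view": view_angle}

def depth_guided_paint(depth_map, prompt):
    """Paint the rendered view with a depth-conditioned diffusion step (stub)."""
    return {"view": depth_map["view"], "prompt": prompt}

def project_to_texture(painted_view, texture):
    """Back-project painted pixels onto the mesh's texture (stub)."""
    texture.append(painted_view)
    return texture

def texture_mesh(mesh, prompt, n_views=8):
    """Incrementally texture a mesh by painting it one viewpoint at a time."""
    texture = []
    for i in range(n_views):
        angle = 2 * math.pi * i / n_views   # evenly spaced camera orbit
        depth = render_depth(mesh, angle)
        painted = depth_guided_paint(depth, prompt)
        texture = project_to_texture(painted, texture)
    return texture

result = texture_mesh(mesh=None, prompt="rusted bronze statue")
```

The incremental, view-by-view structure is what lets later viewpoints stay consistent with regions that were already painted, which the paper handles with a trimap-style partition of each view into painted, unpainted, and refinable regions.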

Why it matters

The flexibility of this method is promising for the future of AI-aided design, allowing designers to make changes directly from a text prompt. This technology could also be extremely valuable in AR and VR, enabling the creation of 3D environments that are both realistic and completely original.

The paper:



Tags: ML News