We explore two approaches to generating and stylizing a 3D object from a set of 2D sketches. To our knowledge, no prior work directly stylizes 3D models generated from sketches using neural radiance fields (NeRF).
- The first approach applies stylization before NeRF training: we obtain the sketches, pass them through a neural style transfer model (MSG-Net), mask out the background, and then train NeRF on the stylized images. Stylization here is pixel-consistent, and the renderings take on the style of the given painting. We also test a few-shot model, InfoNeRF, with 4 and 10 input images to evaluate how reliably NeRF reproduces the scene from less input data.
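  A minimal sketch of this preprocessing step is shown below, assuming the training renders keep an alpha channel that can serve as the background mask; the paths and the `msgnet_stylize` helper (a wrapper around a pretrained MSG-Net forward pass) are hypothetical and not the exact project code.

  ```python
  # Illustrative approach-1 preprocessing: stylize each sketch with MSG-Net,
  # then mask out the background so NeRF only sees the stylized object.
  import glob
  import numpy as np
  from PIL import Image

  def msgnet_stylize(content_img: Image.Image, style_img: Image.Image) -> Image.Image:
      """Placeholder for a forward pass through a pretrained MSG-Net
      (e.g. the PyTorch multi-style-transfer implementation)."""
      raise NotImplementedError

  style = Image.open("styles/style1.jpg").convert("RGB")  # example path

  for path in sorted(glob.glob("data/lego_sketches/train/*.png")):  # example path
      sketch = Image.open(path).convert("RGBA")
      stylized = msgnet_stylize(sketch.convert("RGB"), style)

      # Mask out the background: reuse the alpha channel of the render as the
      # object mask and zero every background pixel of the stylized image.
      alpha = np.asarray(sketch.split()[-1], dtype=np.float32)[..., None] / 255.0
      out = (np.asarray(stylized, dtype=np.float32) * alpha).astype(np.uint8)

      Image.fromarray(out).save(path.replace("/train/", "/train_stylized/"))
  ```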

-
The second approach deals with stylization using text prompts while outputs are rendered by CLIP-NeRF. Here, we use transfer learning to substitute the NeRF in CLIP-NeRF with our model trained on the pencil sketches. This specifies a single color/textual information in the prompt rather than conditioning on a style image/painting.
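  The core of this fine-tuning can be sketched as a CLIP-guided loss on rendered views, as below. This is an illustrative loop rather than CLIP-NeRF's full conditional architecture; `render_rgb`, `nerf`, `optimizer`, and `sample_random_pose` are assumed to come from the existing NeRF training code, and the prompt is only an example.

  ```python
  # Illustrative CLIP-guided fine-tuning: the NeRF pretrained on the pencil
  # sketches is updated so that its renderings match a text prompt under CLIP.
  import torch
  import torch.nn.functional as F
  import clip  # https://github.com/openai/CLIP

  device = "cuda" if torch.cuda.is_available() else "cpu"
  clip_model, _ = clip.load("ViT-B/32", device=device)
  clip_model.eval()
  for p in clip_model.parameters():
      p.requires_grad_(False)  # CLIP stays frozen; only NeRF is fine-tuned

  prompt = clip.tokenize(["a red lego bulldozer"]).to(device)  # example prompt
  with torch.no_grad():
      text_feat = F.normalize(clip_model.encode_text(prompt), dim=-1)

  # CLIP's input normalization constants
  mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
  std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

  def clip_loss(rgb):  # rgb: (1, 3, H, W) in [0, 1], differentiable w.r.t. NeRF params
      img = F.interpolate(rgb, size=(224, 224), mode="bilinear", align_corners=False)
      img_feat = F.normalize(clip_model.encode_image((img - mean) / std), dim=-1)
      return 1.0 - (img_feat * text_feat).sum(dim=-1).mean()  # cosine distance

  # Transfer-learning loop, assuming the sketch-trained NeRF and its optimizer:
  # for step in range(num_steps):
  #     rgb = render_rgb(nerf, sample_random_pose())  # differentiable rendering
  #     loss = clip_loss(rgb)
  #     optimizer.zero_grad(); loss.backward(); optimizer.step()
  ```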
The datasets and pretrained models can be downloaded from this Google Drive folder: https://drive.google.com/drive/folders/1mT1Hh4cCeEca2rt1IW_Ug241jBbsKGEd?usp=share_link
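One possible way to fetch the whole folder programmatically is with the gdown package (an assumption; any Drive client works), writing to an arbitrary output directory:

```python
# Download the shared Drive folder (assumes `pip install gdown`).
import gdown

gdown.download_folder(
    url="https://drive.google.com/drive/folders/1mT1Hh4cCeEca2rt1IW_Ug241jBbsKGEd?usp=share_link",
    output="downloads/sketch-nerf",  # arbitrary local directory
    quiet=False,
)
```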
Style Image 1:

Training set of sketches conditioned on Style 1 (painting): lego_style1_40k_rgb.mp4

Style Image 2:

Training set of sketches conditioned on Style 2 (painting): lego_style2_40k_rgb.mp4

InfoNeRF, 4-shot with detailed sketch:

InfoNeRF, 10-shot with detailed sketch:

InfoNeRF, 4-shot with sparse sketch:

InfoNeRF, 10-shot with sparse sketch:





