Description
Techniques combining Neural Radiance Fields (NeRFs) and diffusion models have shown great promise for generating novel scenes and objects from text input. Most advancements in generative 3D have focused on fine-tuning diffusion models to improve performance, while neglecting other aspects of the pipeline, such as the underlying NeRF architecture or the noise sampling schedule. Because this approach is so new, many open questions in this research space remain unexplored. In this work, we investigate some of these questions using the open-source platform Nerfstudio. Implementing these models and experiments in that library lets us train and evaluate easily, while also providing code that others can build on for their own experiments. The following work shows that the choice of underlying NeRF model and noise sampling schedule does affect generated outputs, and it lays the groundwork for further experiments in this line of investigation.
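To make the "noise sampling schedule" concrete: in score-distillation pipelines, each optimization step samples a diffusion timestep that controls how much noise is added before the diffusion model scores the render. The sketch below contrasts uniform sampling with a simple linearly annealed upper bound; function names, defaults, and the annealing rule are illustrative assumptions, not Nerfstudio APIs.

```python
import random

def sample_timestep(step, total_steps, t_min=0.02, t_max=0.98, anneal=True):
    """Sample a diffusion timestep t in (0, 1) for score distillation.

    Uniform sampling draws t ~ U(t_min, t_max) throughout training.
    The annealed variant (an illustrative assumption) linearly shrinks the
    upper bound toward t_min, so later iterations use lower-noise timesteps
    and refine detail rather than reshaping coarse structure.
    """
    if anneal:
        # Fraction of training completed, clamped against division by zero.
        frac = step / max(total_steps, 1)
        hi = t_max - (t_max - t_min) * frac
    else:
        hi = t_max
    return random.uniform(t_min, hi)
```

Swapping the body of `sample_timestep` is the kind of isolated change this codebase is meant to support: the rest of the training loop is unaffected, so schedules can be compared under identical conditions.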