pwd # print the current directory
ls # list all files and folders
nvidia-smi # check GPU status
Most of these commands come from the nerfstudio installation page. I found that running them in this order works on Lightning AI GPUs.
pip uninstall torch torchvision functorch tinycudann
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install nerfstudio
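Before moving on, it's worth a quick sanity check that PyTorch sees the GPU and that the tiny-cuda-nn bindings import cleanly (these one-liners are my own check, not from the nerfstudio docs):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import tinycudann; print('tinycudann OK')"
ns-train --help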
ns-download-data nerfstudio --capture-name=poster
This command downloads the poster dataset, which is also hosted here: https://drive.google.com/file/d/1FceQ5DX7bbTbHeL26t0x6ku56cwsRs6t/view?usp=sharing
The folders are:
poster/images: images at the original resolution
poster/images_2: images at 50% of the original resolution
poster/images_4: images at 25% of the original resolution
poster/images_8: images at 12.5% of the original resolution
poster/colmap/sparse/0: COLMAP output files
poster/transforms.json: holds the transform matrices for all images
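To confirm the download landed where ns-train expects it (data/nerfstudio/poster is the default path, and the one used in the training command below):
ls data/nerfstudio/poster
ls data/nerfstudio/poster/images | head -5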
This command trains the nerfacto algorithm with default parameters:
ns-train nerfacto --data data/nerfstudio/poster
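ns-train exposes its configuration as CLI flags if you want to deviate from the defaults. For example, a shorter run that shuts down cleanly when training completes (flag names as I recall them; run ns-train nerfacto --help to confirm):
ns-train nerfacto --data data/nerfstudio/poster \
  --max-num-iterations 5000 \
  --viewer.quit-on-train-completion True
Either way, ns-train prints the same progress table.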
The output looks like this:
Step (% Done)    Train Iter (time)    ETA (time)           Train Rays / Sec
----------------------------------------------------------------------------
29910 (99.70%) 40.633 ms 3 s, 656.933 ms 103.17 K
29920 (99.73%) 40.605 ms 3 s, 248.366 ms 103.20 K
29930 (99.77%) 41.887 ms 2 s, 932.074 ms 100.67 K
29940 (99.80%) 40.978 ms 2 s, 458.696 ms 102.32 K
29950 (99.83%) 41.026 ms 2 s, 51.290 ms 102.18 K
29960 (99.87%) 42.086 ms 1 s, 683.453 ms 100.18 K
29970 (99.90%) 40.978 ms 1 s, 229.333 ms 102.25 K
29980 (99.93%) 41.012 ms 820.242 ms 102.19 K
29990 (99.97%) 41.987 ms 419.868 ms 100.42 K
29999 (100.00%)
----------------------------------------------------------------------------------------------------
Viewer running locally at: http://localhost:7007 (listening on 0.0.0.0)
╭─────────────────────────────── 🎉 Training Finished 🎉 ────────────────────────────────╮
│ ╷ │
│ Config File │ outputs/poster/nerfacto/2024-12-15_032229/config.yml │
│ Checkpoint Directory │ outputs/poster/nerfacto/2024-12-15_032229/nerfstudio_models │
│ ╵ │
╰────────────────────────────────────────────────────────────────────────────────────────╯
Use ctrl+c to quit
The web viewer runs on http://0.0.0.0:7007. If you're running nerfstudio locally, it is just a matter of opening the web browser. But if you're on the cloud, that can be tricky. Lightning AI exposes a web port on the VM, but many other cloud providers don't; for those you need another service, for example ngrok.
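Two common ways to reach a remote viewer (general-purpose tunneling, not nerfstudio-specific; user@your-cloud-vm is a placeholder):
# Option 1: SSH port forwarding, then open http://localhost:7007 locally
ssh -L 7007:localhost:7007 user@your-cloud-vm
# Option 2: an ngrok tunnel that prints a public URL forwarding to port 7007
ngrok http 7007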
Use this command to load a specific checkpoint into the viewer:
ns-viewer --load-config outputs/poster/nerfacto/2024-12-15_032229/config.yml
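The timestamped run directory changes every time you train. A small shell helper to pick up the most recent run (my own convenience, not part of nerfstudio):
# Newest run directory first, then append the config file name
CONFIG=$(ls -td outputs/poster/nerfacto/*/ | head -1)config.yml
ns-viewer --load-config "$CONFIG"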
Make sure that you have ffmpeg installed. I installed it with conda, but this could differ depending on your machine setup.
conda install ffmpeg
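To confirm it's on your PATH:
ffmpeg -version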
You can render a video using ns-render camera-path, where you manually pick camera poses that will then be saved to cameras.json. But if you just want a quick video, use ns-render interpolate:
ns-render interpolate \
--load-config outputs/poster/nerfacto/2024-12-15_032229/config.yml \
--output-path renders/output.mp4
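Once it finishes, ffprobe (bundled with ffmpeg) gives a quick look at the resulting file:
ffprobe -hide_banner renders/output.mp4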
Here's a complete list of ns-render options:
usage: ns-render [-h] {camera-path,interpolate,spiral,dataset}
╭─ options ────────────────────────────────────────────────────────────────────────────────────────╮
│ -h, --help show this help message and exit │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ subcommands ────────────────────────────────────────────────────────────────────────────────────╮
│ {camera-path,interpolate,spiral,dataset} │
│ camera-path Render a camera path generated by the viewer or blender add-on. │
│ interpolate Render a trajectory that interpolates between training or eval dataset images. │
│ spiral Render a spiral trajectory (often not great). │
│ dataset Render all images in the dataset. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
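For the camera-path route, the call looks roughly like this. The --camera-path-filename flag and the JSON location under the data directory reflect my setup; run ns-render camera-path --help to verify the exact names:
ns-render camera-path \
  --load-config outputs/poster/nerfacto/2024-12-15_032229/config.yml \
  --camera-path-filename data/nerfstudio/poster/camera_paths/my_path.json \
  --output-path renders/camera_path.mp4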