How to Install and Use EasyVolcap for Enhanced Visual Experiences 🚀

Wednesday, Dec 18, 2024 | 5 minute read


Revolutionizing immersive visual experiences, this innovative open-source project harnesses Python and PyTorch for dynamic event capture. With limitless potential across various fields, it promises seamless multi-angle viewing and simplifies volumetric video processes. 🚀✨

“In today’s rapidly evolving audio-visual technology landscape, enhancing audience engagement and immersion has become a hot topic in the industry.”

1. The Trendsetter: EasyVolcap 🌟

EasyVolcap is an extremely cool open-source project specifically designed to elevate neural volumetric video research! Built on Python and PyTorch, it primarily focuses on dynamic event capture technology. This technology holds limitless potential and can be widely applied across fields such as sports broadcasting 📺, video conferencing, gaming 🎮, and film production 🎬, letting you experience immersive multi-angle viewing like never before! EasyVolcap aims to simplify the processes of capturing, reconstructing, and rendering volumetric videos, making it a major boon for researchers and everyday users alike! ✨

EasyVolcap is not just another open-source library; it’s a next-generation innovation in volumetric video processing, aspiring to deliver unprecedented visual experiences and a convenient development environment to its users. 💡

2. Easy Installation and Usage of EasyVolcap 💻

Installing EasyVolcap

To get started with EasyVolcap, you’ll need to install it on your local machine! The simplest way to install the core dependencies is to use the pip command. Open your terminal and enter the following command:

pip install -v -e .

Here, the -v shows detailed installation information, while -e installs in editable mode, allowing you to develop and modify locally at any time!

If you need to enable a CUDA version of PyTorch, you can add a search link to the installation command like so:

pip install -v -e . -f https://download.pytorch.org/whl/torch_stable.html

This command will install the necessary packages for EasyVolcap and ensure you have the CUDA-enabled version of PyTorch for accelerated computation!

Developers who want to add new features or modify the project should remember to install additional dependencies, which can be done using the following command:

pip install -v -e .
pip install -v -r requirements-devel.txt

The second command installs all the development dependencies from the requirements-devel.txt file, ensuring you have a perfect development environment!

Each time you use git pull to update EasyVolcap, please remember to re-register the command-line entry points and code paths with this command:

pip install -e . --no-build-isolation --no-deps

This command avoids rebuilding the package, so you won’t need to set up a new environment for dependencies again!

Examples of Using EasyVolcap 🍿

In this section, we’ll demonstrate how to utilize EasyVolcap efficiently on a small multi-view video dataset, running several algorithms on it: Instant-NGP+T, 3DGS+T, and ENeRFi (an improved version of ENeRF). You can download the example dataset from this Google Drive link. After downloading, unzip the file, place it in the data/enerf_outdoor directory, and get ready for the challenge!

First, set up the following environment variables:

expname=actor1_4_subseq
data_root=data/enerf_outdoor/actor1_4_subseq

These variables will help you specify the experiment name and data root directory, making subsequent calls more convenient!

Inference with ENeRFi 🖼️

To use the ENeRFi algorithm, first download the provided pre-trained model and rename it to latest.npz.

Next, you can render the ENeRFi model using the following command:

# Render ENeRFi using the pre-trained model
evc-test -c configs/exps/enerfi/enerfi_${expname}.yaml,configs/specs/spiral.yaml,configs/specs/ibr.yaml runner_cfg.visualizer_cfg.save_tag=${expname} exp_name=enerfi_dtu

In this command, -c accepts one or more comma-separated configuration files that are layered together, and ${expname} is expanded from the environment variable set earlier to name the output!
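The layering of comma-separated configs can be pictured as dictionary overlays. Here is a minimal sketch in plain Python, assuming later files override earlier keys; this is a simplification for illustration, not EasyVolcap's actual config loader:

```python
# Simplified sketch of layered config overrides, mimicking the idea behind
# "-c enerfi_xxx.yaml,spiral.yaml,ibr.yaml" (assumption: later configs win).
def merge_configs(*configs):
    merged = {}
    for cfg in configs:
        merged.update(cfg)  # later dicts override duplicate keys
    return merged

base = {"exp_name": "enerfi_dtu", "spiral_path": False}   # hypothetical keys
spiral = {"spiral_path": True}
merged = merge_configs(base, spiral)
```

Command-line overrides like runner_cfg.visualizer_cfg.save_tag=... would then be applied as a final layer on top of the merged files.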

Alternatively, you can render using the graphical user interface by running:

# Render ENeRFi using the GUI
evc-gui -c configs/exps/enerfi/enerfi_${expname}.yaml exp_name=enerfi_dtu val_dataloader_cfg.dataset_cfg.ratio=0.5

Running Instant-NGP+T 🌐

For the Instant-NGP+T algorithm, you need to prepare a configuration file and run the following commands for training and testing:

evc-train -c configs/exps/l3mhet/l3mhet_${expname}.yaml
evc-test -c configs/exps/l3mhet/l3mhet_${expname}.yaml,configs/specs/spiral.yaml

The first command trains the model, while the second command tests it, so be sure to input the correct configuration files!

Running 3DGS+T 🎥

For those interested in the 3DGS+T algorithm, you’ll first need to extract the geometry for initialization, then execute the training and rendering commands:

python scripts/fusion/volume_fusion.py -- -c configs/exps/l3mhet/l3mhet_${expname}.yaml val_dataloader_cfg.dataset_cfg.ratio=0.15
evc-train -c configs/exps/gaussiant/gaussiant_${expname}.yaml
evc-test -c configs/exps/gaussiant/gaussiant_${expname}.yaml,configs/specs/superm.yaml,configs/specs/spiral.yaml

These commands ensure the geometry is initialized before proceeding with model training and testing for optimal performance!

How to Work with EasyVolcap 🛠️

Customizing EasyVolcap 🎨

If you want to use EasyVolcap to build new algorithms, it’s crucial to understand the input and output structure of the networks. The batch variable stores the network input, while the output key should contain the network’s output.

For example, for a simple custom module, your code could look like this:

from easyvolcap.engine import SAMPLERS
from easyvolcap.utils.net_utils import VolumetricVideoModule

@SAMPLERS.register_module()  # Register the new module into SAMPLERS
class CustomVolumetricVideoModule(VolumetricVideoModule):  # Extend the VolumetricVideoModule class
    def forward(self, batch):  # Forward method to process input batch
        batch.output.rgb_map = ...  # Generate the output rgb_map
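The registration pattern above can be sketched with a plain-Python stand-in registry; the classes below are simplified illustrations of how a decorator-based registry works, not EasyVolcap's actual SAMPLERS implementation:

```python
# Minimal stand-in registry, mimicking the decorator-based registration
# pattern used above (simplified; not EasyVolcap's real SAMPLERS object).
class Registry:
    def __init__(self):
        self._modules = {}

    def register_module(self):
        def decorator(cls):
            self._modules[cls.__name__] = cls  # store class under its name
            return cls
        return decorator

    def build(self, name, *args, **kwargs):
        return self._modules[name](*args, **kwargs)  # instantiate by name

SAMPLERS = Registry()

@SAMPLERS.register_module()
class CustomSampler:  # hypothetical module for illustration
    def forward(self, batch):
        return {"rgb_map": "rendered"}  # stand-in for the real output

sampler = SAMPLERS.build("CustomSampler")
```

Registries like this let a config file select modules by string name, which is why the decorator call on the class definition is all that is needed to make a new sampler available.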

Dataset Structure 📂

When preparing a dataset for EasyVolcap, make sure to follow this simple structure:

data/dataset/sequence # data_root & data_root
├── intri.yml # Required
├── extri.yml # Required
└── images # Required
    ├── 000000 # Camera / Frame
    │   ├── 000000.jpg # Image

This structure allows EasyVolcap to read and process your dataset smoothly, and the intri.yml and extri.yml files are essential, as they contain the intrinsic and extrinsic camera parameters!
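Before launching a run, it can be handy to sanity-check a sequence folder against this layout. The helper below is a hypothetical utility, not part of EasyVolcap's API, that reports which required entries are missing:

```python
import os
import tempfile

# Hypothetical helper (not part of EasyVolcap): report which of the required
# entries (intri.yml, extri.yml, images/) are missing from a sequence folder.
def validate_sequence(data_root):
    required = ["intri.yml", "extri.yml", "images"]
    return [name for name in required
            if not os.path.exists(os.path.join(data_root, name))]

# Demo on a throwaway directory: everything is missing at first,
# then the folder is filled in to match the expected layout.
root = tempfile.mkdtemp()
missing_before = validate_sequence(root)
open(os.path.join(root, "intri.yml"), "w").close()
open(os.path.join(root, "extri.yml"), "w").close()
os.makedirs(os.path.join(root, "images", "000000"))
missing_after = validate_sequence(root)
```

A check like this catches misplaced datasets early, before a training command fails deep inside the data loader.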

Importing EasyVolcap 📥

To utilize the tool modules provided by EasyVolcap, you can import them directly into your project:

from easyvolcap.utils.console_utils import *  # Import console utilities
from easyvolcap.runners.volumetric_video_viewer import VolumetricVideoViewer  # Import viewer class

class CustomViewer(VolumetricVideoViewer):  # Custom viewer class extending VolumetricVideoViewer
    ...

By inheriting from the existing VolumetricVideoViewer class, you can flexibly extend functionality and add necessary interaction or display configurations for the best experience!
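The subclassing idea can be sketched with a stand-in base class; the real VolumetricVideoViewer in easyvolcap.runners.volumetric_video_viewer has a much richer interface, so the attribute and method below are purely illustrative:

```python
# Stand-in base class, used only to illustrate the override pattern;
# the attribute and method names here are hypothetical.
class VolumetricVideoViewer:
    window_title = "EasyVolcap Viewer"

    def draw_banner(self):
        return f"== {self.window_title} =="  # base behavior reused by subclasses

class CustomViewer(VolumetricVideoViewer):
    window_title = "My Custom Viewer"  # override just what you need

viewer = CustomViewer()
```

Because the subclass inherits everything it does not override, a custom viewer only has to redefine the handful of hooks it wants to change.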

Starting a New Project with EasyVolcap 🚀

If you plan to embark on a project with EasyVolcap, the first step is to fork the repository and clone it:

git clone https://github.com/zju3dv/EasyVolcap ${project}  # Clone EasyVolcap repository
git remote add upstream https://github.com/zju3dv/EasyVolcap  # Set upstream for updates

This way, you can start development locally while keeping up-to-date with the latest changes in EasyVolcap. Go challenge yourself!

© 2024 - 2025 GitHub Trend
