[ICLR 2025] OmniPhysGS

OmniPhysGS: 3D Constitutive Gaussians for General Physics-based Dynamics Generation

Yuchen Lin, Chenguo Lin, Jianjin Xu, Yadong Mu


(Figure: OmniPhysGS pipeline overview)

This repository contains the official implementation of the paper OmniPhysGS: 3D Constitutive Gaussians for General Physics-based Dynamics Generation, accepted at ICLR 2025. OmniPhysGS is a novel framework for general physics-based 3D dynamic scene synthesis that automatically and flexibly models various materials with domain-expert constitutive models in a physics-guided network. Here is our Project Page.

Feel free to contact me (linyuchen@stu.pku.edu.cn) or open an issue if you have any questions or suggestions.

📢 News

  • 2025-03-26: Uploaded several trained Gaussian models and the corresponding config files to this link.
  • 2025-03-19: A clean version of our PyTorch MPM solver is released here.
  • 2025-02-03: The source code and preprocessed dataset are released.
  • 2025-01-22: OmniPhysGS was accepted to ICLR 2025.

🔧 Installation

Our code uses Gaussian Splatting as an important submodule. If problems occur during installation, please refer to the official repository for more details.

Clone the repository and submodules

git clone --recurse-submodules https://github.com/wgsxm/omniphysgs.git
cd omniphysgs

Change the version of Gaussian Splatting code (optional)

We use a specific version of the Gaussian Splatting code in our project; later versions may cause compatibility issues. Please check the commit hash of the Gaussian Splatting repository in the third_party/gaussian-splatting folder and make sure it matches the following:

cd third_party/gaussian-splatting
git checkout 472689c0dc70417448fb451bf529ae532d32c095
cd ../..
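
To confirm that the checkout succeeded, you can print the submodule's current commit hash (a quick sanity check using standard git commands):

git -C third_party/gaussian-splatting rev-parse HEAD
# should print 472689c0dc70417448fb451bf529ae532d32c095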

Setup the environment

conda create -n omniphysgs python=3.11.9
conda activate omniphysgs
bash settings/setup.sh

These commands should install all the dependencies required to run the code.
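
As a quick sanity check (not part of the official setup), you can verify that PyTorch was installed with working CUDA support:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# with the reference configuration below, this should print something like: 2.0.1+cu118 True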

🌎 Environment

You may need to modify the torch version in settings/setup.sh according to your CUDA version. We provide our environment configuration below for reference and recommend using the same environment to reproduce our results. See settings/requirements.txt for more details.

  • Ubuntu 22.04.1 LTS
  • Python 3.11.9
  • CUDA 11.8
  • torch==2.0.1
  • warp-lang==0.6.1
  • diffusers==0.30.3
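
For example, the CUDA 11.8 build of the pinned torch version can be installed from the official PyTorch wheel index (an illustrative command, assuming a pip-based install; adjust the cuXXX suffix to match your CUDA version, following the PyTorch installation matrix):

pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118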

📊 Dataset

We provide example data in the dataset folder and its corresponding configuration file in the configs folder. The dataset is organized as follows (the same layout as the training output of Gaussian Splatting):

omniphysgs
├── dataset
│   ├── bear
│   │   ├── point_cloud
│   │   │   ├── iteration_30000
│   │   │   │   ├── point_cloud.ply
│   │   ├── cameras.json
│   ├── ... # other scenes
│   │   ├── point_cloud
│   │   │   ├── iteration_*  # the number of iterations depends on the training process
│   │   │   │   ├── point_cloud.ply
│   │   ├── cameras.json

More data will be released soon in this Google Drive link.
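
Before training on a new scene, you can check that it follows the layout above (a generic shell check, assuming the dataset/ root shown in the tree):

find dataset -name point_cloud.ply
find dataset -name cameras.json
# each scene should contribute one cameras.json and at least one point_cloud.ply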

🚀 Usage

Note that, due to the nature of SDS guidance, the loss value does not explicitly reflect the quality of the generated results. We recommend checking the intermediate video results in the output folder instead.

Training

We provide two training config files in the configs folder for the same example scene (a bear). They are almost identical except for the prompt field. To train the model, simply run:

python main.py --config configs/bear_sand.yaml --tag bear_sand

The argument --tag specifies the name of the subfolder in the output directory; in this case, the output will be saved in outputs/bear_sand. If not specified, the tag defaults to a timestamp.

If you want to specify the GPU device, you can add the --gpu argument:

python main.py --config configs/bear_sand.yaml --tag bear_sand --gpu 0
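
Alternatively, the standard CUDA_VISIBLE_DEVICES environment variable should also restrict which device PyTorch sees (a common workaround on shared machines; how it interacts with --gpu is not documented here, so treat this as an assumption):

CUDA_VISIBLE_DEVICES=0 python main.py --config configs/bear_sand.yaml --tag bear_sand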

Inference

After training, you can generate the dynamic scene by running:

python main.py --config configs/bear_sand.yaml --tag bear_sand --test

This will load the trained physics-guided network and generate the dynamic scene. Gradients are disabled during inference to speed up the process. By changing boundary_condition in the config file, you can generate different dynamic scenes with the same learned material.
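
For example, one plausible workflow (a sketch, assuming that reusing the same --tag lets --test pick up the network trained earlier, as described above) is to duplicate the config, edit only boundary_condition, and rerun inference:

# hypothetical workflow: the copied filename is illustrative
cp configs/bear_sand.yaml configs/bear_sand_bc2.yaml
# edit boundary_condition in configs/bear_sand_bc2.yaml, then:
python main.py --config configs/bear_sand_bc2.yaml --tag bear_sand --test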

Results

Using the provided config files, you can train the model and generate results similar to the following videos:

bear_rubber

"A rubber bear bouncing on a surface"

bear_sand

"A sand bear collapsing"

😊 Acknowledgement

We would like to thank the authors of PhysGaussian, PhysDreamer, Physics3D, DreamPhysics, and NCLaw for their great work and for generously providing their source code, which inspired our work and helped us a lot in the implementation.

📚 Citation

If you find our work helpful, please consider citing:

@inproceedings{lin2025omniphysgs,
  title={OmniPhys{GS}: 3D Constitutive Gaussians for General Physics-Based Dynamics Generation},
  author={Yuchen Lin and Chenguo Lin and Jianjin Xu and Yadong Mu},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=9HZtP6I5lv}
}
