Yufan Deng* Yuhao Zhang* Chen Geng Shangzhe Wu† Jiajun Wu†
We present the Anymate Dataset, a large-scale dataset of 230K 3D assets paired with expert-crafted rigging and skinning information, around 70 times larger than existing datasets. Using this dataset, we develop a scalable learning-based auto-rigging framework with three sequential modules for joint, connectivity, and skinning weight prediction. We experiment with various architectures for each module and conduct comprehensive evaluations on our dataset to compare their performance.
Check out our Project Page for more videos and demos!
- 2024.5: 🔥 Paper & Code available!
- Setup environment

  ```bash
  conda env create -f environment.yaml
  conda activate anymate
  pip install -e ./ThirdParty/PointLLM
  ```

- Download weights

  ```bash
  bash Anymate/get_checkpoints.sh
  # If you need the shape encoder from Michelangelo:
  # bash ThirdParty/michelangelo/get_checkpoints.sh
  ```

- Start the UI

  ```bash
  python Anymate_ui.py
  ```

- Download the dataset

  ```bash
  bash Anymate/get_datasets.sh
  ```

- Train the Model (after downloading the dataset)
You can modify the training configuration through the config files at `Anymate/configs`.
```bash
# The --split argument is used for distributed training: the dataset is
# divided into n partitions (one per GPU). By default, it is configured
# for 8-GPU training. To train on N GPUs, split Anymate_train into N
# partitions named Anymate_train_{i}, where i ranges from 0 to N-1.

# Train the joint prediction model
python Train.py --config joints --split

# Train the diffusion-based joint prediction model
python Train.py --config diffusion --split

# Train the connectivity prediction model
python Train.py --config conn --split

# Train the skinning weight prediction model
python Train.py --config skin --split
```
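The partitioning convention described above can be sketched as follows. The `partition` helper is purely illustrative (it is not part of the repository), and the load/save lines are shown as comments since they assume the downloaded `.pt` files:

```python
def partition(assets, n_gpus):
    """Split a list of assets into n_gpus roughly equal, contiguous chunks."""
    size = (len(assets) + n_gpus - 1) // n_gpus  # ceil(len / n_gpus)
    return [assets[i * size:(i + 1) * size] for i in range(n_gpus)]

# Example: merge the provided 8-way split, then repartition for 4 GPUs.
# import torch
# assets = [a for i in range(8) for a in torch.load(f"Anymate_train_{i}.pt")]
# for i, part in enumerate(partition(assets, 4)):
#     torch.save(part, f"Anymate_train_{i}.pt")
```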
- Evaluate the Model (after downloading the weights and the dataset)

```bash
# You can evaluate different models by changing the checkpoint paths in Evaluate.py
python Evaluate.py
```

The dataset can be downloaded with `bash Anymate/get_datasets.sh`. `Anymate_test.pt` is the test set.
`Anymate_train_0.pt` to `Anymate_train_7.pt` are the split training set.
Each file can be loaded with `dataset = torch.load('Anymate_xxx.pt')`. The dataset is a list of assets, with each element being a dictionary containing the following keys:
| key | type | shape | description |
|---|---|---|---|
| name | str | 1 | unique id of the asset |
| pc | float32 | 8196x6 | point cloud sampled from the 3D mesh: [position, normal] for 8196 points |
| vox | bool | 64^3 | voxelized version of the 3D mesh at resolution 64 |
| joints | float32 | 96x3 | joint positions. 96 is the maximum number of joints; the matrix is padded with -3 |
| joints_num | int | 1 | number of joints |
| joints_mask | bool | (<96) | mask for padded joints: 1 for valid joints and 0 for padded joints |
| conns | int8 | 96x(<96) | connectivity matrix: entry (i, j) is 1 if joint i is connected to joint j. 96 is the maximum number of joints; the matrix is padded with zeros |
| bones | float32 | 64x6 | start and end position of bones: [head position, tail position] for each bone. 64 is the maximum number of bones. the matrix is padded with -3 |
| bones_num | int | 1 | number of bones |
| bones_mask | bool | (<64) | mask for padded bones: 1 for valid bones and 0 for padded bones |
| skins | float16 | 8192x(<64) | skinning weights: 8192 points' skinning weights w.r.t. at most 64 bones |
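As a quick sanity check, a loaded asset can be validated against the schema above. The `check_asset` helper below is an illustrative sketch (not part of the codebase), written with NumPy arrays standing in for the stored tensors; the same `.shape`/`.sum()` accesses work on `torch` tensors:

```python
import numpy as np

def check_asset(d):
    """Sanity-check one asset dictionary against the documented schema."""
    assert d["pc"].shape == (8196, 6)        # [position, normal] per sampled point
    assert d["vox"].shape == (64, 64, 64)    # 64^3 occupancy grid
    assert d["joints"].shape == (96, 3)      # padded with -3 beyond joints_num
    assert d["joints_mask"].sum() == d["joints_num"]  # 1 marks a valid joint
    assert d["bones"].shape == (64, 6)       # [head, tail] per bone, padded with -3
    assert d["bones_mask"].sum() == d["bones_num"]
    assert d["skins"].shape[0] == 8192       # per-point weights ...
    assert d["skins"].shape[1] <= 64         # ... over at most 64 bones
    return True

# Usage (after downloading): check_asset(torch.load('Anymate_test.pt')[0])
```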
The following mesh-level keys are also provided:

| key | type | shape | description |
|---|---|---|---|
| mesh_skins | float16 | Vx(<64) | skinning weights for each vertex w.r.t. at most 64 bones |
| mesh_pc | float32 | Vx6 | [position, normal] of each vertex |
| mesh_face | int | Fx3 | vertex indices of each triangular face |
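The mesh-level fields are what standard linear blend skinning consumes: each deformed vertex is a weight-blended sum of per-bone rigid transforms applied to its rest-pose position. A minimal NumPy sketch, where the 4x4 bone transforms are an assumed input (pose transforms are not stored in the dataset):

```python
import numpy as np

def linear_blend_skinning(verts, weights, bone_transforms):
    """Deform rest-pose vertices with per-bone 4x4 transforms.

    verts:           (V, 3) rest-pose positions (e.g. mesh_pc[:, :3])
    weights:         (V, B) skinning weights    (e.g. mesh_skins)
    bone_transforms: (B, 4, 4) rigid transform of each bone
    """
    V = verts.shape[0]
    homo = np.concatenate([verts, np.ones((V, 1))], axis=1)      # (V, 4) homogeneous
    per_bone = np.einsum("bij,vj->vbi", bone_transforms, homo)   # (V, B, 4)
    blended = np.einsum("vb,vbi->vi", weights, per_bone)         # (V, 4) weighted sum
    return blended[:, :3]
```

With identity transforms for every bone (and weights summing to 1 per vertex), the output equals the rest pose, which makes for a convenient correctness check.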
The script that processes the object IDs in Objaverse-XL into the provided PyTorch tensor dataset can be run with:

```bash
python Dataset_process.py
```

Before running the processing script, please download Blender from https://download.blender.org/release/Blender4.0/blender-4.0.0-linux-x64.tar.xz and unzip it to `ThirdParty/blender-4.0.0-linux-x64`.
Third-party codebases include Objaverse-XL, Michelangelo, PointLLM, EG3D, and RigNet.
If you find Anymate useful for your work, please cite:
```bibtex
@inproceedings{deng2025anymate,
  author    = {Yufan Deng and Yuhao Zhang and Chen Geng and Shangzhe Wu and Jiajun Wu},
  title     = {Anymate: A Dataset and Baselines for Learning 3D Object Rigging},
  booktitle = {Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers (SIGGRAPH Conference Papers '25)},
  year      = {2025},
  month     = aug,
  address   = {Vancouver, BC, Canada},
  publisher = {Association for Computing Machinery},
  location  = {Vancouver, BC, Canada},
  note      = {August 10–14, 2025}
}
```