Gwangju Institute of Science and Technology
Gwangju, Korea
Stars
Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
A Paper List for Humanoid Robot Learning.
openvla / openvla
Forked from TRI-ML/prismatic-vlms. OpenVLA: An open-source vision-language-action model for robotic manipulation.
Retargeting from Human Mesh Descriptions (SMPL, SMPL-X, etc.) to Humanoid Poses
A Versatile Teleoperation framework for Robotic Manipulation using Meta Quest3
[Humanoids 2025] Learning from Massive Human Videos for Universal Humanoid Pose Control
H-Net: Hierarchical Network with Dynamic Chunking
GPU-optimized version of the MuJoCo physics simulator, designed for NVIDIA hardware.
🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 Million real robot episodes. Accepted at RSS 2025.
Official Implementation of Real2Gen: Imitation Learning from a Single Human Demonstration with Generative Foundational Models
[ICLR 2025] 6D Object Pose Tracking in Internet Videos for Robotic Manipulation
[CoRL 2024] Im2Flow2Act: Flow as the Cross-domain Manipulation Interface
Reference PyTorch implementation and models for DINOv3
Isaac Lab-based grasp learning test bench
DexWild: Dexterous Human Interactions for In-the-Wild Robot Policies
Repository for "General Flow as Foundation Affordance for Scalable Robot Learning"
Implementation of the SoftAdapt paper (techniques for adaptive loss balancing of multi-tasking neural networks)
F3RM: Feature Fields for Robotic Manipulation. Official repo for the paper "Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation" (CoRL 2023).
Geometric Retargeting: A Principled, Ultrafast Neural Hand Retargeting Algorithm
Simplifying diffusion/flow policies by treating action trajectories as flow trajectories
[CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer
Official repo for GraspGen: A Diffusion-based Framework for 6-DOF Grasping