EmbodiedGen is a generative engine for creating diverse, interactive 3D worlds composed of high-quality 3D assets (mesh & 3DGS) with plausible physics, leveraging generative AI to address the generalization challenges of embodied-intelligence research. It is composed of six key modules:
- 🖼️ Image-to-3D
- 📝 Text-to-3D
- 🎨 Texture Generation
- 🌍 3D Scene Generation
- ⚙️ Articulated Object Generation
- 🏞️ Layout (Interactive 3D Worlds) Generation
git clone https://github.com/HorizonRobotics/EmbodiedGen.git
cd EmbodiedGen
git checkout v0.1.2
git submodule update --init --recursive --progress
conda create -n embodiedgen python=3.10.13 -y # recommended to use a new env.
conda activate embodiedgen
bash install.sh basic
We provide a pre-built Docker image on Docker Hub with a fully configured environment for your convenience. For more details, please refer to the Docker documentation.
Note: Model checkpoints are not included in the image; they are downloaded automatically on the first run. You still need to set up the GPT agent manually.
IMAGE=wangxinjie/embodiedgen:env_v0.1.x
CONTAINER=EmbodiedGen-docker-${USER}
docker pull ${IMAGE}
docker run -itd --shm-size="64g" --gpus all --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --privileged --net=host --name ${CONTAINER} ${IMAGE}
docker exec -it ${CONTAINER} bash
Update the API key in `embodied_gen/utils/gpt_config.yaml`.
You can choose between two backends for the GPT agent:
- `gpt-4o` (recommended) – use this if you have access to Azure OpenAI.
- `qwen2.5-vl` – an alternative with free usage via OpenRouter; apply for a free key and update `api_key` in `embodied_gen/utils/gpt_config.yaml` (50 free requests per day).
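If you script your environment setup, the key can also be injected from the shell. This is a hypothetical helper, not part of EmbodiedGen; it only assumes the `api_key` field mentioned above exists in `gpt_config.yaml`.

```shell
# Hypothetical helper (not part of EmbodiedGen): rewrite the api_key field in place.
# Only the api_key field named above is assumed; the rest of the file is untouched.
set_gpt_api_key() {
  local key=$1 config=${2:-embodied_gen/utils/gpt_config.yaml}
  sed -i.bak "s|^\( *api_key:\).*|\1 \"${key}\"|" "$config"
}
# Usage: set_gpt_api_key "YOUR_OPENROUTER_KEY"
```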
Generate a physically plausible 3D asset with URDF from a single input image, offering high-quality support for digital twin systems. (The HF space is a simplified demonstration; for full functionality, please refer to `img3d-cli`.)
Run the image-to-3D generation service locally. Models are downloaded automatically on the first run; please be patient.
# Run in foreground
python apps/image_to_3d.py
# Or run in the background
CUDA_VISIBLE_DEVICES=0 nohup python apps/image_to_3d.py > /dev/null 2>&1 &
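When the app runs in the background, you may want to wait until it is reachable before opening the browser. A minimal sketch, assuming the default Gradio port 7860 (an assumption; check the console output for the actual port):

```shell
# Sketch: poll a URL until it responds. Port 7860 is Gradio's default, not
# documented above; adjust to match the script's actual bind address.
wait_for_url() {
  local url=$1 tries=${2:-60}
  for _ in $(seq "$tries"); do
    curl -sf "$url" > /dev/null && return 0
    sleep 1
  done
  return 1
}
wait_for_url http://localhost:7860 3 && echo "service is up" || echo "service not reachable yet"
```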
Generate physically plausible 3D assets from image input via the command-line API.
img3d-cli --image_path apps/assets/example_image/sample_04.jpg apps/assets/example_image/sample_19.jpg \
--n_retry 2 --output_root outputs/imageto3d
# See results (.urdf / mesh.obj / mesh.glb / gs.ply) in ${output_root}/sample_xx/result
Create 3D assets from text descriptions covering a wide range of geometries and styles. (The HF space is a simplified demonstration; for full functionality, please refer to `text3d-cli`.)
Deploy the text-to-3D generation service locally.
Text-to-image model based on the Kolors model, supporting Chinese and English prompts. Models are downloaded automatically on the first run; please be patient.
python apps/text_to_3d.py
Text-to-image model based on SD3.5 Medium, English prompts only. Usage requires agreement to the model license (click accept); models are downloaded automatically.
For large-scale 3D asset generation, set `--n_pipe_retry=2` to ensure high end-to-end asset usability through automatic quality checks and retries. For more diverse results, do not set `--seed_img`.
text3d-cli --prompts "small bronze figurine of a lion" "A globe with wooden base" "wooden table with embroidery" \
--n_image_retry 2 --n_asset_retry 2 --n_pipe_retry 1 --seed_img 0 \
--output_root outputs/textto3d
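For larger batches, prompts can be read from a file instead of being passed inline. An illustrative wrapper (the function and `prompts.txt` are assumptions, not part of the CLI):

```shell
# Illustrative wrapper (not part of EmbodiedGen): run text3d-cli once per
# non-empty line of a prompt file, reusing the retry flags described above.
run_text3d_batch() {
  local prompts_file=$1 out_root=$2
  while IFS= read -r prompt; do
    [ -n "$prompt" ] || continue
    text3d-cli --prompts "$prompt" \
      --n_image_retry 2 --n_asset_retry 2 --n_pipe_retry 2 \
      --output_root "$out_root"
  done < "$prompts_file"
}
# Usage: run_text3d_batch prompts.txt outputs/textto3d_batch
```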
Text-to-image model based on the Kolors model.
bash embodied_gen/scripts/textto3d.sh \
--prompts "small bronze figurine of a lion" "A globe with wooden base and latitude and longitude lines" "橙色电动手钻,有磨损细节" \
--output_root outputs/textto3d_k
Note: models with more permissive licenses can be found in `embodied_gen/models/image_comm_model.py`.
Generate visually rich textures for 3D meshes.
Run the texture generation service locally.
Models are downloaded automatically on the first run; see `download_kolors_weights` and `geo_cond_mv`.
python apps/texture_edit.py
Supports Chinese and English prompts.
bash embodied_gen/scripts/texture_gen.sh \
--mesh_path "apps/assets/example_texture/meshes/robot_text.obj" \
--prompt "举着牌子的写实风格机器人,大眼睛,牌子上写着“Hello”的文字" \
--output_root "outputs/texture_gen/robot_text"
bash embodied_gen/scripts/texture_gen.sh \
--mesh_path "apps/assets/example_texture/meshes/horse.obj" \
--prompt "A gray horse head with flying mane and brown eyes" \
--output_root "outputs/texture_gen/gray_horse"
Run `bash install.sh extra` to install additional requirements if you need to use `scene3d-cli`.
Generating a colored mesh and 3DGS takes about 30 minutes per scene.
CUDA_VISIBLE_DEVICES=0 scene3d-cli \
--prompts "Art studio with easel and canvas" \
--output_dir outputs/bg_scenes/ \
--seed 0 \
--gs3d.max_steps 4000 \
--disable_pano_check
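Given the ~30-minute cost per scene, queuing several prompts sequentially on one GPU can be convenient. An illustrative wrapper (not an EmbodiedGen command; the prompts and output directory are placeholders):

```shell
# Illustrative wrapper: generate one scene per prompt, sequentially on GPU 0,
# reusing the flags from the command above.
gen_scenes() {
  local out_dir=$1; shift
  for prompt in "$@"; do
    CUDA_VISIBLE_DEVICES=0 scene3d-cli \
      --prompts "$prompt" \
      --output_dir "$out_dir" \
      --seed 0 \
      --gs3d.max_steps 4000 \
      --disable_pano_check
  done
}
# Usage: gen_scenes outputs/bg_scenes/ "Art studio with easel and canvas" "Cozy cabin interior"
```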
🚧 Coming Soon
🚧 Coming Soon
pip install -e .[dev] && pre-commit install
python -m pytest # All unit tests must pass.
If you use EmbodiedGen in your research or projects, please cite:
@misc{wang2025embodiedgengenerative3dworld,
title={EmbodiedGen: Towards a Generative 3D World Engine for Embodied Intelligence},
author={Xinjie Wang and Liu Liu and Yu Cao and Ruiqi Wu and Wenkang Qin and Dehui Wang and Wei Sui and Zhizhong Su},
year={2025},
eprint={2506.10600},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2506.10600},
}
EmbodiedGen builds upon the following amazing projects and models: 🌟 Trellis | 🌟 Hunyuan-Delight | 🌟 Segment Anything | 🌟 Rembg | 🌟 RMBG-1.4 | 🌟 Stable Diffusion x4 | 🌟 Real-ESRGAN | 🌟 Kolors | 🌟 ChatGLM3 | 🌟 Aesthetic Score | 🌟 Pano2Room | 🌟 Diffusion360 | 🌟 Kaolin | 🌟 diffusers | 🌟 gsplat | 🌟 QWEN-2.5VL | 🌟 GPT4o | 🌟 SD3.5
This project is licensed under the Apache License 2.0. See the LICENSE file for details.