```shell
git clone https://github.com/SerendipitysX/TypeDance.git
cd TypeDance
conda create --name <environment_name> --file requirements.txt
```
In this work, we benefit from several excellent pretrained models: Segment Anything, which lets users point out a specific visual representation in an image, and DIS for background removal. For generation, we use Diffusers with the `runwayml/stable-diffusion-v1-5` model.
To use these models, please follow the steps below:
- Download the background removal model from here.
- Download the Segment Anything model from here.
- Unzip the folder and save the models to `models/`.
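For reference, the generation backbone named above can be loaded through Diffusers roughly as follows. This is a minimal sketch, not TypeDance's actual code; the function name, device choice, and half-precision setting are assumptions, and the imports are deferred so the sketch documents the calls without requiring the libraries until it is used:

```python
def load_generation_pipeline(device: str = "cuda"):
    """Load the Stable Diffusion v1.5 backbone via Diffusers (sketch).

    Weights are downloaded from the Hugging Face Hub on first call.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,  # assumption: half precision for GPU use
    )
    return pipe.to(device)
```

A call like `load_generation_pipeline().__call__(prompt)` then yields generated images; TypeDance itself wraps this pipeline behind its own interface.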
```shell
python TypeDance.py
```
- Go to the `TypeDance/frontend` path.
- Install all the needed packages through npm:
  ```shell
  npm install
  ```
- Compile and hot-reload for development:
  ```shell
  npm run dev
  ```
By specifying the typeface and an object in the image, users can combine the two to generate a harmonious blend. See the figure below for details.
We are glad to hear from you. If you have any questions, please feel free to contact xrakexss@gmail.com or open an issue on this repository.
This project is open-sourced under the GNU Affero General Public License v3.0.
If this work helps your research, or if you use any resources in this repository, please consider citing:
```bibtex
@inproceedings{xiao2024typedance,
  title={TypeDance: Creating Semantic Typographic Logos from Image through Personalized Generation},
  author={Xiao, Shishi and Wang, Liangwei and Ma, Xiaojuan and Zeng, Wei},
  booktitle={Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems},
  year={2024}
}
```