This is the official PyTorch implementation of:
DeepVecFont-v2: Exploiting Transformers to Synthesize Vector Fonts with Higher Quality. CVPR 2023.
conda create -n dvf_v2 python=3.9
conda activate dvf_v2
pip install .
The dataset can be found in OneDrive or Baiduyun (Password: pmr2). Put the data
directory in the root path. This directory contains:
(1) char_set: the character sets used for Chinese and English.
(2) font_ttfs: the TTF/OTF files of the fonts.
(3) font_sfds: the SFD files extracted by FontForge.
(4) vecfont_dataset: the processed files ready for training/testing.
Note: The train/test split in this released dataset is slightly different from the one used in our paper. The original train/test split from our paper can be found in v1_train_font_ids.txt and v1_test_font_ids.txt. These IDs correspond to the TTF/OTF files in data/font_ttfs.
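If you want to reproduce the paper's original split, the ID files can be matched back to the font files with a small helper. This is only a sketch: it assumes the split files contain one font ID per line and that the IDs match the TTF/OTF filenames in data/font_ttfs, which you should verify against your copy of the data.

```python
from pathlib import Path

def load_font_ids(split_file):
    """Read font IDs from a split file; one ID per line is assumed."""
    return [ln.strip() for ln in Path(split_file).read_text().splitlines() if ln.strip()]

# Usage (assuming the split files sit in the repo root and IDs match
# the ttf/otf filenames, whatever their extension):
# train_ids = load_font_ids("v1_train_font_ids.txt")
# train_fonts = [p for fid in train_ids
#                for p in Path("data/font_ttfs").glob(f"{fid}.*")]
```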
Our trained checkpoints (English and Chinese) can be found in OneDrive or Baiduyun (Password: pih5). We provide three checkpoints, from epochs 500, 550, and 600.
If you use our trained checkpoints, you can skip directly to the Testing section.
English Dataset:
CUDA_VISIBLE_DEVICES=0 python -m deepvecfont.train
Chinese Dataset:
CUDA_VISIBLE_DEVICES=0 python -m deepvecfont.train --lang chn
To generate a target glyph from a TTF file - e.g. to predict the glyphs "xyz" given the reference glyphs "ABab" - run:
English:
CUDA_VISIBLE_DEVICES=0 python -m deepvecfont.test_few_shot --name_ckpt {name_ckpt} ExistingFont.ttf "xyz" --ref_chars ABab
(Note that the number of reference characters must match the number used to train the model. By default this is 4, but you can set a different number of reference characters by passing the --ref_chars arg to -m deepvecfont.train; the contents of the string are not significant, only its length.)
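The length constraint above can be sketched as a quick sanity check. This is illustrative, not code from the repository; `n_ref_trained` is a hypothetical name, with the default of 4 taken from the note above:

```python
def check_ref_chars(ref_chars, n_ref_trained=4):
    """The model expects exactly as many reference glyphs as it was
    trained with (4 by default); which characters they are is free."""
    if len(ref_chars) != n_ref_trained:
        raise ValueError(
            f"got {len(ref_chars)} reference chars, model expects {n_ref_trained}")

check_ref_chars("ABab")   # OK with the default of 4
```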
The synthesized candidates are in ./experiments/{exp_name}/results/{font_id}/svgs_single, and the selected results (by IoU) are in ./experiments/{exp_name}/results/{font_id}/svgs_merge.
In the testing phase, we run the model n_samples times to generate multiple candidates; each time, a random noise is injected (see the code).
Currently we use IoU as the metric to pick the candidate, which sometimes fails to find the best result, so you can manually check all the candidates.
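The selection step described above can be sketched as follows. This is an illustrative re-implementation on rasterized boolean glyph masks under the usual IoU definition (intersection over union of filled pixels), not the repository's exact code:

```python
import numpy as np

def iou(mask_a, mask_b):
    """IoU between two boolean glyph rasters of the same shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def pick_best(candidates, target):
    """Return the candidate raster closest to the target by IoU."""
    return max(candidates, key=lambda c: iou(c, target))
```

Because IoU only compares filled areas, two candidates with very different outline quality can score similarly, which is why inspecting svgs_single by hand can beat the automatic pick.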
First, determine your character set by placing a txt file in data/char_set/:
echo "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" > data/char_set/eng.txt
echo "कखगघङ..." > data/char_set/deva.txt
Taking --language 'eng' (English) as an example:
Modify MAX_SEQ_LEN (the maximum sequence length) in svg_utils.py. We set MAX_SEQ_LEN to 50 for English and 70 for Chinese; you can change this number according to your needs.
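As a sketch of why MAX_SEQ_LEN matters: glyphs whose drawing-command sequences exceed it cannot be represented, while shorter ones must be padded to a uniform length for batching. Whether the dataset tool drops or truncates over-long glyphs is an assumption here, and `pad_cmd` stands in for the real padding token defined in svg_utils.py:

```python
def fit_to_max_len(cmds, max_seq_len=50, pad_cmd=None):
    """Pad a glyph's command list to max_seq_len, or reject it if too long.
    pad_cmd is a placeholder for the actual padding token."""
    if len(cmds) > max_seq_len:
        return None  # too long for the chosen MAX_SEQ_LEN
    return cmds + [pad_cmd] * (max_seq_len - len(cmds))
```

A larger MAX_SEQ_LEN keeps more complex glyphs (hence 70 for Chinese) at the cost of longer sequences for the Transformer to process.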
python -m deepvecfont.data_utils.make_dataset --split train --language eng
python -m deepvecfont.data_utils.make_dataset --split test --language eng
python -m deepvecfont.data_utils.augment --split train --language chn
Please note that all the Chinese fonts were collected from Founder, and they CANNOT be used for any commercial purpose without permission from Founder.
If you use this code or find our work helpful, please consider citing it:
@inproceedings{wang2023deepvecfont,
title={DeepVecFont-v2: Exploiting Transformers to Synthesize Vector Fonts with Higher Quality},
author={Wang, Yuqing and Wang, Yizhi and Yu, Longhui and Zhu, Yuesheng and Lian, Zhouhui},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={18320--18328},
year={2023}
}