This is the gesture recognizer used in the TVCG 2023 paper *HotGestures*, implemented in Python 3.8 and PyTorch 1.7.0.
Two separate models are used, one for each hand. To train a recognizer:

- Download the HotGestures dataset here.
- Change the data paths in `train_two_hands.py` to the local paths on your computer.
- Change the `model_fold` paths in `train_two_hands.py` and `train.py` to a local directory.
- Run `train_two_hands.py`.

The best models should be saved in `model_fold`.
Run `test_two_hands.py`. The script uses the unsegmented data from `/online_seq` to generate predictions with the specified models. Errors are reported as Levenshtein distances.
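For reference, the Levenshtein distance between a predicted gesture sequence and the ground truth is the minimum number of insertions, deletions, and substitutions needed to turn one into the other. The sketch below is illustrative only (it is not the repository's code, and the gesture labels are made up):

```python
def levenshtein(pred, truth):
    """Edit distance between two label sequences via dynamic programming."""
    m, n = len(pred), len(truth)
    # dp[j] holds the distance between the current prefix of pred and truth[:j]
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion from pred
                        dp[j - 1] + 1,  # insertion into pred
                        prev + cost)    # substitution (free on a match)
            prev = cur
    return dp[n]

# Hypothetical example: one spurious "copy" detection costs one deletion.
pred = ["cut", "copy", "paste", "undo"]
truth = ["cut", "paste", "undo"]
print(levenshtein(pred, truth))  # -> 1
```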
If you find this work useful, please cite us:
```bibtex
@ARTICLE{10269004,
  author={Song, Zhaomou and Dudley, John J. and Kristensson, Per Ola},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  title={HotGestures: Complementing Command Selection and Use with Delimiter-Free Gesture-Based Shortcuts in Virtual Reality},
  year={2023},
  volume={29},
  number={11},
  pages={4600-4610},
  doi={10.1109/TVCG.2023.3320257}}
```