Code for the paper 'LMTformer: Facial Depression Recognition with Lightweight Multi-Scale Transformer from Videos'
You can directly execute the `main.py` script with your own dataset.
To proceed:

- Change `load` and `Path` on lines 30 and 31 of `main.py` (see the sketch after this list). `load` is the `csv_load` folder in the root directory; `Path` is the location of your AVEC dataset.
- Change the `device` on line 39 of `main.py` to your own device.
- If you only want to test our model, we have also provided the parameters of our trained network. You can use
  ```python
  model.load_state_dict(torch.load('best.pt', map_location='cuda:0'))
  ```
  to load our parameters for AVEC2013.
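For orientation, here is a minimal sketch of what those settings might look like. The names `load`, `Path`, and `device` come from the steps above; the example paths and the CPU fallback are assumptions to adapt to your setup:

```python
import torch

# Assumed shape of the settings near lines 30-31 and 39 of main.py.
load = './csv_load'               # the csv_load folder in the repository root
Path = '/path/to/AVEC2013'        # replace with the location of your AVEC dataset
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# To test with the released AVEC2013 weights instead of training, load them
# into the network built in main.py (referred to as `model` there):
state_dict = torch.load('best.pt', map_location='cuda:0')
# model.load_state_dict(state_dict)
# model.eval()
```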
Before running, ensure the videos are preprocessed to extract the required images.
Kindly note that, due to authorization constraints, we are unable to share the AVEC datasets here. You will therefore need to extract, crop, align, and preprocess the facial data yourself; a sketch of one possible preprocessing pass is given below.
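As a starting point, here is a minimal frame-extraction and face-cropping pass using OpenCV. The sampling rate, crop size, and Haar-cascade detector are illustrative assumptions; the preprocessing used in the paper (including the alignment step) may differ:

```python
import os
import cv2

def extract_faces(video_path, out_dir, every_n_frames=5, size=224):
    """Sample frames from a video, crop the largest detected face, save as JPEGs.

    Illustrative sketch only: the sampling rate, output size, and Haar-cascade
    detector are assumptions, and no landmark-based alignment is performed.
    """
    os.makedirs(out_dir, exist_ok=True)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                # Keep the largest detection and resize to a fixed input size.
                x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
                crop = cv2.resize(frame[y:y + h, x:x + w], (size, size))
                cv2.imwrite(os.path.join(out_dir, f'{saved:06d}.jpg'), crop)
                saved += 1
        idx += 1
    cap.release()
```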