This repository contains the code and dataset for the paper "Entropy-Based Guidance and Predictive Modelling of Pedestrians’ Visual Attention in Urban Environment".
The paper has been published in Building Simulation: https://doi.org/10.1007/s12273-024-1165-y
If you have any comments or queries regarding our data, code, or paper, please feel free to contact me via email: xieqixu@mail.tsinghua.edu.cn
Visualization of the 120°-front-view model (top: ground truth, bottom: prediction)
Visualization of the 360° model (top: ground truth, bottom: prediction)
A CNN-RNN architecture is used for both models.
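The sketch below only illustrates what such a CNN-RNN pipeline can look like in PyTorch; the layer sizes, the choice of a GRU, and the class name `CNNRNNSaliencySketch` are assumptions made for demonstration and are not taken from the paper or from the training scripts.

```python
import torch
import torch.nn as nn


class CNNRNNSaliencySketch(nn.Module):
    """Illustrative CNN-RNN: a per-frame CNN encoder, a GRU over time,
    and a decoder that upsamples back to a per-frame attention map."""

    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        # Per-frame CNN encoder (downsamples the input by a factor of 8)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, hidden_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # GRU over the sequence of globally pooled frame features
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Decoder that turns the temporally modulated features into a map
        self.decoder = nn.Sequential(
            nn.Conv2d(hidden_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) video clip, with H and W divisible by 8
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))  # (B*T, D, H/8, W/8)
        pooled = feats.mean(dim=(2, 3)).reshape(b, t, -1)     # (B, T, D)
        temporal, _ = self.rnn(pooled)                        # (B, T, D)
        # Broadcast the per-frame temporal state over the spatial grid
        gated = feats * temporal.reshape(b * t, -1, 1, 1)
        maps = self.decoder(gated)                            # (B*T, 1, H, W)
        return torch.sigmoid(maps).reshape(b, t, 1, h, w)


# Shape check with a dummy clip: 2 clips of 8 frames at 128x256
model = CNNRNNSaliencySketch()
print(model(torch.randn(2, 8, 3, 128, 256)).shape)  # torch.Size([2, 8, 1, 128, 256])
```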
The architecture of the 120°-front-view model:

The architecture of the 360° model:

Requirements:
python>=3.8.16
torch>=2.0.1
opencv-python>=4.7.0.72
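As a quick sanity check that the environment satisfies these requirements, the following snippet (an optional convenience, not part of the repository) prints the installed versions:

```python
# Print the versions of the dependencies listed above
import sys

import cv2
import torch

print("python:", sys.version.split()[0])    # expected >= 3.8.16
print("torch:", torch.__version__)          # expected >= 2.0.1
print("opencv-python:", cv2.__version__)    # expected >= 4.7.0.72
print("CUDA available:", torch.cuda.is_available())
```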
- Dataset of 120°-front-view [Download]
- Dataset of 360° [Download]
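The internal layout of the downloaded archives is not described in this README; as a generic starting point, video frames can be read with the opencv-python dependency as below (the file name is a placeholder, not an actual file from the dataset):

```python
import cv2

# Placeholder path; replace with a video file from the downloaded dataset
video_path = "example_walk.mp4"

cap = cv2.VideoCapture(video_path)
frames = []
while True:
    ok, frame = cap.read()  # frame is a BGR array of shape (H, W, 3)
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cap.release()

print(f"Read {len(frames)} frames from {video_path}")
```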
To train the 120°-front-view model, run: main_train.py
To train the 360° model, run: main_train_360.py