TLR-M

In this research, we present the problem of queuing time aware next POI recommendation and demonstrate why it is non-trivial to both recommend a next POI and simultaneously predict its queuing time. To solve this problem, we propose a multi-task, multi-head attention transformer model called TLR-M. The model recommends next POIs to the target users and simultaneously predicts the queuing time to access those POIs. By utilizing multi-head attention, the TLR-M model can efficiently integrate long-range dependencies between any two POI visits and evaluate their contribution to selecting the next POI and predicting its queuing time.
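
The following is a minimal sketch of this idea in tf.keras, not the code in this repository: a shared transformer encoder with multi-head self-attention feeds two task heads, one classifying the next POI and one regressing the queuing time. All names and hyperparameters (`num_pois`, `seq_len`, `d_model`, `num_heads`) are illustrative assumptions.

```python
import tensorflow as tf

# Illustrative hyperparameters (assumptions, not the paper's settings).
num_pois, seq_len, d_model, num_heads = 1000, 20, 64, 4

# Input: a fixed-length sequence of POI visit IDs for one user.
poi_ids = tf.keras.Input(shape=(seq_len,), dtype=tf.int32, name="poi_sequence")

# Embed the POI visits (positional encodings omitted for brevity).
x = tf.keras.layers.Embedding(num_pois, d_model)(poi_ids)

# Shared transformer block: multi-head self-attention lets any two
# visits in the sequence attend to each other directly, which is how
# long-range dependencies between POI visits can be captured.
attn = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model)(x, x)
x = tf.keras.layers.LayerNormalization()(x + attn)
ffn = tf.keras.layers.Dense(d_model, activation="relu")(x)
x = tf.keras.layers.LayerNormalization()(x + ffn)

# Pool the sequence into a single representation shared by both heads.
shared = tf.keras.layers.GlobalAveragePooling1D()(x)

# Task head 1: next-POI recommendation (classification over all POIs).
next_poi = tf.keras.layers.Dense(num_pois, activation="softmax", name="next_poi")(shared)

# Task head 2: queuing time prediction (regression).
queue_time = tf.keras.layers.Dense(1, name="queue_time")(shared)

model = tf.keras.Model(inputs=poi_ids, outputs=[next_poi, queue_time])
```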

If you use this code in your research work, please cite the following paper.

Sajal Halder, Kwan Hui Lim, Jeffrey Chan, and Xiuzhen Zhang. Transformer-based multi-task learning for queuing time aware next POI recommendation. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 510–523. Springer, 2021. DOI: https://doi.org/10.1007/978-3-030-75765-6_41

Implementation Details

The TLR-M model is implemented in the Python programming language using a transformer-based attention mechanism, built with TensorFlow and Keras. A sketch of the joint training setup follows.
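
Because both tasks are learned jointly, a tf.keras model like the sketch above can be compiled with one loss per head and trained in a single fit call. The optimizer, losses, and loss weights below are illustrative assumptions, not the paper's tuned configuration.

```python
# Joint multi-task training: one loss per head, combined by weighted sum.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss={
        "next_poi": "sparse_categorical_crossentropy",  # recommendation head
        "queue_time": "mse",                            # queuing-time head
    },
    # Loss weights are assumptions; in practice they would be tuned.
    loss_weights={"next_poi": 1.0, "queue_time": 0.5},
)

# Hypothetical training data: `poi_seqs` is an int array of shape
# (num_sequences, seq_len); `next_labels` holds each sequence's next
# POI ID and `queue_labels` its observed queuing time.
# model.fit(poi_seqs,
#           {"next_poi": next_labels, "queue_time": queue_labels},
#           epochs=10, batch_size=32)
```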

Required Packages:

tensorflow: 2.4.1

pandas: 1.2.2

Here we have included only one dataset (Melbourne). If you are interested in additional datasets, email sajal.halder@student.rmit.edu.au or sajal.csedu01@gmail.com.
