Regarding the model: The first encoder is used to calculate the relation-aware embedding of the head entity h, while the input of the second encoder BERT_t only contains the textual description of the entity t · Issue #38 · intfloat/SimKGC
Hello, I would like to know whether the model uses two encoders to obtain the relation-aware embedding of the head entity and the embedding of the tail entity, and then normalizes them before computing the score. Could this model be implemented with a single encoder?
Thank you for your reply. I would also like to know: is the approach proposed in your paper, adding an inverse triple for each triple, applied to all triples before negative sampling? Does this have any impact on the subsequent negative sampling?
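For context, here is a minimal sketch (not the repository's actual code) of what adding an inverse triple per training triple, prior to any negative sampling, could look like; the `'inverse '` textual prefix for the reversed relation is an assumption for illustration:

```python
# Sketch: for every triple (h, r, t), also add (t, r^-1, h) so that both
# link directions become tail-entity prediction. This is done once over
# the whole training set, before negatives are drawn.
def add_inverse_triples(triples):
    augmented = []
    for h, r, t in triples:
        augmented.append((h, r, t))
        # 'inverse ' prefix is a hypothetical textual encoding of r^-1
        augmented.append((t, 'inverse ' + r, h))
    return augmented

triples = [('Q76', 'place of birth', 'Q18094')]
print(add_inverse_triples(triples))
# [('Q76', 'place of birth', 'Q18094'),
#  ('Q18094', 'inverse place of birth', 'Q76')]
```

Under this reading, the augmentation simply doubles the pool of training pairs; the negative-sampling procedure itself is unchanged and just runs over the enlarged set.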
You can also implement it with a shared encoder, but I have not tested the results.
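For anyone wondering what such a shared-encoder variant might look like, here is a minimal, untested sketch; the class name, [CLS] pooling, and checkpoint name are assumptions for illustration, not the repository's actual code:

```python
import torch.nn as nn
from transformers import AutoModel

class SharedEncoderModel(nn.Module):
    """Hypothetical single-encoder variant: one BERT encodes both the
    (head text + relation) sequence and the tail text, with tied weights,
    unlike the released bi-encoder (hr_bert + tail_bert)."""

    def __init__(self, pretrained_model='bert-base-uncased'):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained_model)

    def encode(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]            # [CLS] pooling
        return nn.functional.normalize(cls, dim=-1)  # L2-normalize

    def forward(self, hr_ids, hr_mask, tail_ids, tail_mask):
        hr_vec = self.encode(hr_ids, hr_mask)    # relation-aware head embedding
        t_vec = self.encode(tail_ids, tail_mask) # tail entity embedding
        return hr_vec, t_vec  # cosine score = hr_vec @ t_vec.T
```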
This part of the code initializes two BERT model instances, hr_bert and tail_bert, in the custom BERT model: hr_bert is loaded from the pretrained model, while tail_bert is a deep copy of hr_bert. But the paper says that the first encoder is used to calculate the relation-aware embedding of the head entity h, and the second encoder is used to calculate the L2-normalized embedding e_t of the tail entity t. So what is the purpose of this copy?
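For reference, a minimal sketch of the initialization pattern described above (paraphrased, not copied verbatim from the repository). The deep copy gives the second encoder the same starting weights as the first but its own independent parameters, so the two encoders diverge during fine-tuning into the paper's BERT_hr and BERT_t:

```python
from copy import deepcopy
from transformers import AutoModel

# First encoder loaded from the pretrained checkpoint (name is illustrative)
hr_bert = AutoModel.from_pretrained('bert-base-uncased')
# Second encoder: same initial weights, but separate parameter tensors
tail_bert = deepcopy(hr_bert)

p1 = next(hr_bert.parameters())
p2 = next(tail_bert.parameters())
assert p1 is not p2       # distinct tensors, updated independently
assert (p1 == p2).all()   # identical values at initialization
```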