Regarding the model: The first encoder is used to calculate the relation-aware embedding of the head entity h, while the input of the second encoder BERT_t only contains the textual description of entity t. #38

Open
whistle9 opened this issue Feb 17, 2024 · 3 comments

Comments

@whistle9

Hello, I would like to confirm whether the model uses two encoders, one to obtain the relation-aware embedding of the head entity and one to obtain the tail entity embedding, with L2 normalization applied before the final score is computed. Could this model be implemented with a single encoder?

@intfloat
Owner

Yes, we use two separate encoders as shown in https://github.com/intfloat/SimKGC/blob/97cc43e488f19ca5b0f6fbf60ffefd2ee56c0693/models.py#L43-L44

You can also implement it with one shared encoder, but I did not test its results.
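For reference, here is a minimal sketch of the two options, assuming the Hugging Face `transformers` `AutoModel` API; the `BiEncoder` class name and the `share_encoder` flag are illustrative, not the repository's exact code:

```python
from copy import deepcopy

import torch.nn as nn
from transformers import AutoModel


class BiEncoder(nn.Module):
    """Sketch of the two-encoder setup (names are illustrative)."""

    def __init__(self, pretrained_model: str, share_encoder: bool = False):
        super().__init__()
        # Encoder for the concatenated head-entity + relation text.
        self.hr_bert = AutoModel.from_pretrained(pretrained_model)
        if share_encoder:
            # Single shared encoder: both inputs go through the same weights.
            self.tail_bert = self.hr_bert
        else:
            # Separate tail encoder: same initial weights, independent parameters.
            self.tail_bert = deepcopy(self.hr_bert)
```

With `share_encoder=True` both inputs reuse one set of weights; with the deep copy the two encoders start from the same checkpoint but are fine-tuned independently.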

@whistle9
Author

Thank you for your reply. I also have a question about the inverse triples proposed in your paper: is an inverse triple added for every triple before negative sampling takes place? Does this have any impact on the subsequent negative sampling?
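For concreteness, this is a hypothetical sketch of the setup the question assumes: an inverse triple is generated for every triple first, and negative sampling then runs over the augmented set. The function name and the "inverse " relation prefix are illustrative and not taken from the repository:

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)


def add_inverse_triples(triples: List[Triple]) -> List[Triple]:
    """Return the original triples plus one inverse triple per original."""
    augmented: List[Triple] = []
    for h, r, t in triples:
        augmented.append((h, r, t))
        augmented.append((t, "inverse " + r, h))  # textual inverse relation
    return augmented


# One triple becomes two before any negatives are drawn.
print(add_inverse_triples([("Paris", "capital of", "France")]))
```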

@whistle9
Author

Yes, we use two separate encoders, as shown in https://github.com/intfloat/SimKGC/blob/97cc43e488f19ca5b0f6fbf60ffefd2ee56c0693/models.py#L43-L44

You can also implement it with one shared encoder, but I did not test its results.
This part of the code initializes two BERT model instances, hr_bert and tail_bert, in the custom model class. hr_bert is loaded from the pre-trained model, while tail_bert is a deep copy of hr_bert. But the paper says that the first encoder is used to compute the relation-aware embedding of the head entity h, and the second encoder is used to compute the L2-normalized embedding e_t of the tail entity t. So what is the purpose of this copy?
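As a small illustration of the deep-copy semantics being asked about (using a toy `nn.Linear` as a stand-in for the BERT encoder, so this is not the repository's code): the copy starts with identical weights but owns separate parameters, so the two encoders can diverge during fine-tuning.

```python
from copy import deepcopy

import torch
import torch.nn as nn

hr_encoder = nn.Linear(4, 4)         # stand-in for hr_bert
tail_encoder = deepcopy(hr_encoder)  # stand-in for tail_bert

# Identical initial weights ...
print(torch.equal(hr_encoder.weight, tail_encoder.weight))  # True

# ... but distinct parameter tensors: updating one does not affect the other.
with torch.no_grad():
    hr_encoder.weight.add_(1.0)
print(torch.equal(hr_encoder.weight, tail_encoder.weight))  # False
```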
