How to get the word representations · Issue #2 · doc-doc/NExT-OE · GitHub

How to get the word representations #2


Open
datar001 opened this issue Jun 18, 2021 · 3 comments

Comments

@datar001

Hi, I have a simple question. How do we get glove_embed.npy and vocab.pkl for a new dataset? To get glove_embed.npy, do we need to train new word vectors on the vocabulary we build ourselves? Also, if possible, could you release the NLP pre-processing code? Thanks very much.

@doc-doc (Owner) commented Jun 18, 2021

Hi, please refer to build_vocab.py and word2vec.py.
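
At a high level, vocab.pkl is a word-to-index mapping built from the dataset's question/answer text, and glove_embed.npy is an embedding matrix obtained by looking up pre-trained GloVe vectors for that vocabulary, so typically no new word vectors need to be trained. Below is a rough sketch of that idea; the annotation format, file names, and GloVe release used here are assumptions, not the repo's actual scripts.

```python
# Rough sketch of the idea behind build_vocab.py / word2vec.py (not the actual
# repo code): build a word-to-index vocabulary from the new dataset's QA text,
# then fill an embedding matrix by looking up pre-trained GloVe vectors.
import pickle
from collections import Counter

import numpy as np

def build_vocab(texts, min_count=1):
    """Map each word appearing at least min_count times to an index; reserve
    indices 0/1 for padding and unknown words."""
    counter = Counter()
    for t in texts:
        counter.update(t.lower().split())
    vocab = {'<pad>': 0, '<unk>': 1}
    for word, count in counter.items():
        if count >= min_count:
            vocab[word] = len(vocab)
    return vocab

def build_glove_matrix(vocab, glove_path, dim=300):
    """Look up pre-trained GloVe vectors for the vocabulary; words missing from
    GloVe keep a small random initialization (no training involved)."""
    embed = np.random.uniform(-0.1, 0.1, (len(vocab), dim)).astype(np.float32)
    with open(glove_path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            if parts[0] in vocab and len(parts) == dim + 1:
                embed[vocab[parts[0]]] = np.asarray(parts[1:], dtype=np.float32)
    return embed

# Example usage: 'questions' would be read from the new dataset's annotation
# file (assumed here), and 'glove.840B.300d.txt' is a standard GloVe release.
questions = ['what is the man doing', 'why does the baby cry']
vocab = build_vocab(questions)
glove_embed = build_glove_matrix(vocab, 'glove.840B.300d.txt')
with open('vocab.pkl', 'wb') as f:
    pickle.dump(vocab, f)
np.save('glove_embed.npy', glove_embed)
```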

@datar001 (Author)

Wow, maybe I was just blind... Also, if dim=-2 is used instead of -1 at Line 45 in networks/VQAModel/HGA.py, performance improves slightly. This seems to be an implementation mistake in HGA: there is only one value in the last dimension, so with softmax over dim=-1 the weights are all 1.
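
To make the issue concrete, here is a toy PyTorch snippet (shapes are illustrative, not the actual HGA.py code): when the attention logits have shape (batch, seq_len, 1), softmax over the last dimension normalizes each singleton and every weight becomes exactly 1, so the weighted pooling degenerates into sum pooling, whereas softmax over dim=-2 normalizes across the sequence positions.

```python
# Toy example of the softmax-dimension issue, not the repo's code.
import torch
import torch.nn.functional as F

batch, seq_len, hidden = 2, 5, 8
features = torch.randn(batch, seq_len, hidden)
logits = torch.randn(batch, seq_len, 1)        # one scalar score per position

w_wrong = F.softmax(logits, dim=-1)            # all ones: each row has a single element
w_right = F.softmax(logits, dim=-2)            # normalizes across the seq_len positions

print(w_wrong.squeeze(-1)[0])                  # tensor of ones
print(w_right.squeeze(-1)[0].sum())            # sums to 1 over the sequence

pooled_wrong = (w_wrong * features).sum(dim=1) # equivalent to features.sum(dim=1)
pooled_right = (w_right * features).sum(dim=1) # proper attention-weighted pooling
```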

@doc-doc (Owner) commented Jun 18, 2021

Thanks for pointing it out. The code follows the official repo, and yes, the attention pooling effectively degenerates into sum pooling here. I will fix it soon.
