Hello,
I sincerely appreciate your valuable research and the effort you put into the paper *Evaluating and Inducing Personality in Pre-trained Language Models*. I found it incredibly insightful.
I am currently working on reproducing the results using the code and data you generously provided. While doing so, I came across a few questions regarding the results on the MPI dataset (as shown in Table 2 of the paper).
Are the results reported in Table 2 from a single trial, or are they the average of multiple trials?
In the latest version of the Transformers library, setting `temperature = 0` causes the error `Temperature needs to be > 0`. Could you share an alternative solution or the `generate_config` settings you used in the experiment?
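For reference, here is a minimal sketch of the workaround I am considering, assuming the intent of `temperature = 0` is deterministic decoding: with `do_sample=False`, `generate` uses greedy decoding and no temperature is required at all. The checkpoint and generation arguments below are my own guesses, not necessarily the paper's settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; the paper's exact settings may differ.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")

prompt = "Statement: I am the life of the party.\nAnswer:"  # placeholder MPI-style item
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=False picks the argmax token at every step (greedy decoding),
# so no temperature is involved and the "> 0" check is never triggered.
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```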
During the reproduction process, I used GPT-Neo 2.7B with a low temperature setting of 0.001, and I found that the model generated biased responses. For instance, when counting the responses, I observed the following distribution: `{'A': 9, 'B': 0, 'C': 109, 'D': 2, 'E': 0, 'UNK': 0}`. Did you encounter a similar issue during your experiments? If possible, could you also share the response counts from your experiment?
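For context, the counts above were tallied along these lines; this is a simplified sketch, and `extract_choice` is a hypothetical stand-in for my actual answer-parsing logic:

```python
from collections import Counter

def extract_choice(text: str) -> str:
    # Hypothetical parser: map a generation to one of the five MPI
    # options, or 'UNK' when no option letter can be recovered.
    for option in ("A", "B", "C", "D", "E"):
        if f"({option})" in text or text.strip().startswith(option):
            return option
    return "UNK"

# Example outputs standing in for the generations over the 120 MPI items.
generations = [
    "(C) Neither accurate nor inaccurate.",
    "(A) Very accurate.",
    "I am not sure.",
]
counts = Counter(extract_choice(g) for g in generations)
print({k: counts.get(k, 0) for k in ("A", "B", "C", "D", "E", "UNK")})
```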
Could you share the version of the Transformers library you used in your experiments?
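For completeness, the version on my side was checked with the following, printed from the environment where I ran the reproduction:

```python
import transformers

print(transformers.__version__)
```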
I have great respect for your work and understand that you have a busy schedule, but I would appreciate any feedback you could offer when time permits.
Thank you again for your time and consideration. I look forward to hearing from you.