Love your approach and your description.
Would there be any benefit in using the streaming API to cut down on generating unnecessary tokens? One could cancel the generation as soon as an invalid token appears.
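The early-cancel idea can be sketched independently of any particular API: given a stream of tokens and a validator that says whether a partial output can still be a valid prefix, stop consuming (and thus stop paying for) tokens at the first violation. This is a minimal illustration with a hypothetical `digits_only` validator, not actual OpenAI client code:

```python
import re

def stream_until_invalid(tokens, is_valid_prefix):
    """Consume tokens one by one, cancelling as soon as the
    accumulated text can no longer be a valid prefix.
    Returns the text accepted up to that point."""
    text = ""
    for tok in tokens:
        if not is_valid_prefix(text + tok):
            break  # cancel here: no further tokens are generated/paid for
        text += tok
    return text

# Hypothetical validator: we only want an integer, so any
# non-digit content invalidates the prefix.
def digits_only(s):
    return re.fullmatch(r"\d*", s) is not None

print(stream_until_invalid(["4", "2", " apples"], digits_only))  # → 42
```

With the real streaming API, `tokens` would be the deltas from the streamed response, and breaking out of the loop would correspond to closing the connection.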
I wonder how much the ratio of "wrong" tokens GPT produces improves with the new Function Calling API. In my short experience it still hallucinates quite a lot.
I also think OpenAI should give us an interface to modify the probabilities during the sampling process. I wonder whether WASM would be the best choice, or if there is a simpler alternative that could be passed as text in the JSON request.
Maybe some form of universal schema definition / parser syntax definition file.
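For what it's worth, the API already exposes a static `logit_bias` parameter, but not per-step control driven by a schema. What a dynamic version could look like, sketched locally over a toy token→logit mapping (all names here are made up for illustration; this is not an existing API):

```python
import math

def mask_and_sample(logits, allowed):
    """Suppress disallowed tokens by setting their logits to -inf,
    renormalize with a softmax, and pick the most probable token.

    `logits`  maps token -> raw score from the model.
    `allowed` is the (non-empty) set of tokens the schema/grammar
              permits at the current position.
    """
    neg_inf = float("-inf")
    masked = {t: (s if t in allowed else neg_inf) for t, s in logits.items()}
    # Softmax over the surviving tokens (greedy argmax would pick the
    # same winner; the probabilities are shown for clarity).
    z = sum(math.exp(s) for s in masked.values() if s != neg_inf)
    probs = {t: (math.exp(s) / z if s != neg_inf else 0.0)
             for t, s in masked.items()}
    return max(probs, key=probs.get)

# Suppose the grammar says the next token must open a JSON string or object:
step_logits = {'"': 2.0, "hello": 3.0, "{": 1.0}
print(mask_and_sample(step_logits, {'"', "{"}))  # → "
```

A schema or grammar file, as suggested above, would then just be a compact way to compute the `allowed` set at each step.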