A hacked-together end-to-end test would be nice to confirm functionality. I don't think anything fancy is needed yet.
The quality and content of the input and output are arbitrary. The process is deterministic (as far as I can see), so asserting that the activations, weights, and outputs are the same given the same input is enough to confirm there are no breaking changes.
The model used needs to be small. For quick tests (simply checking that the process runs and produces the same result), any transformer supported by TransformerLens should do. I suggest GPT-2 or Qwen-1.5B.
Qwen 1.5B (not to be confused with Qwen1.5) was used in the Refusal Demo, so it might also let us check refusal consistency with at least one model.
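
A minimal sketch of what such a test could look like, assuming pytest-style test discovery and the public TransformerLens API; the prompt, the golden-file path (`tests/golden_gpt2.pt`), and the specific hook points checked here are illustrative choices, not anything prescribed by this issue:

```python
# Hacked-together end-to-end regression test sketch. First run records a
# baseline snapshot; later runs fail if any weight, activation, or output
# drifts. Paths, prompt, and hook names below are placeholder assumptions.
from pathlib import Path

import torch
from transformer_lens import HookedTransformer

GOLDEN = Path("tests/golden_gpt2.pt")  # hypothetical golden-file location
PROMPT = "The quick brown fox"         # content is arbitrary, per the issue


def _snapshot():
    torch.manual_seed(0)  # belt-and-braces; the forward pass itself should be deterministic
    model = HookedTransformer.from_pretrained("gpt2")
    model.eval()
    with torch.no_grad():
        logits, cache = model.run_with_cache(PROMPT)
    return {
        "logits": logits.cpu(),
        # One weight tensor and one activation tensor already catch most
        # breaking changes; extend this dict to cover more of the model.
        "embed_W": model.W_E.cpu(),
        "resid_post_0": cache["blocks.0.hook_resid_post"].cpu(),
    }


def test_end_to_end_deterministic():
    current = _snapshot()
    if not GOLDEN.exists():
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        torch.save(current, GOLDEN)
        return  # first run only records the baseline
    baseline = torch.load(GOLDEN)
    for name, tensor in current.items():
        # rtol=atol=0 demands exact equality, which determinism should give us
        torch.testing.assert_close(tensor, baseline[name], rtol=0, atol=0,
                                   msg=f"{name} changed")
```

Swapping `"gpt2"` for the Qwen model would exercise the same path, and a refusal-consistency check could be layered on top by snapshotting generations for a refusal prompt the same way.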