feat(ai): langchain cached and reasoning tokens #258
Conversation
PR Summary
Enhanced LangChain callback token tracking by adding comprehensive monitoring for reasoning and cache operations in the PostHog Python SDK.
- Added cache token tracking in `posthog/ai/langchain/callbacks.py` to monitor both read and write operations across different LLM providers
- Implemented reasoning token tracking with support for models like o1-mini, with structured handling through a new `ModelUsage` dataclass
- Added detailed test cases in `test_callbacks.py` covering OpenAI and Anthropic token tracking scenarios
- Updated minimum version requirements for LangChain dependencies in `pyproject.toml` to support the new token tracking features
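The `ModelUsage` dataclass mentioned above is defined in the PR itself; as a rough sketch of the idea, a container for per-response token counts might look something like this (field names here are illustrative, not necessarily the PR's):

```python
from dataclasses import dataclass


@dataclass
class ModelUsage:
    """Token counts extracted from a single LLM response (illustrative sketch)."""

    input_tokens: int = 0
    output_tokens: int = 0
    cache_read_tokens: int = 0   # prompt tokens served from the provider's cache
    cache_write_tokens: int = 0  # prompt tokens written to the cache
    reasoning_tokens: int = 0    # hidden reasoning tokens (e.g. from o1-mini)


# Example: a response from a reasoning model with no cache activity.
usage = ModelUsage(input_tokens=120, output_tokens=30, reasoning_tokens=256)
print(usage.reasoning_tokens)  # 256
```

Grouping the counts in one dataclass keeps the callback's parsing logic provider-agnostic: each provider's raw usage dict is normalized into this shape before being attached to the captured event.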
3 files reviewed, no comments
Greptile
@@ -683,13 +704,39 @@ def _parse_usage_model(
parsed_usage[type_key] = final_count
Nitpick (not related to the change): could `captured_count` (in `captured_count = usage[model_key]`) be None in any case? Should we add a null check here to prevent potential exceptions?
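The defensive pattern the reviewer is asking about could be sketched like this (a hypothetical helper, not the PR's code; the key name is made up for illustration): provider usage dicts sometimes carry None instead of an int for a token field, so the lookup is guarded.

```python
def extract_count(usage: dict, model_key: str) -> int:
    """Read a token count from a provider usage dict, tolerating None/missing."""
    captured_count = usage.get(model_key)
    # Guard against both a missing key and an explicit None value.
    return captured_count if isinstance(captured_count, int) else 0


print(extract_count({"cache_read_input_tokens": None}, "cache_read_input_tokens"))  # 0
print(extract_count({"cache_read_input_tokens": 42}, "cache_read_input_tokens"))    # 42
```

Falling back to 0 keeps downstream arithmetic (summing counts across generations) safe without scattering None checks through the callback.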
It can be None, but the code won't crash even if it is. Or did I miss a spot?
Nope, nvm, overthinking :)
Force-pushed from 3baccfe to 2a6f4f8
Problem
The LangChain callback doesn't count reasoning and cache write/read tokens.
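For context, recent langchain-core versions attach a `usage_metadata` dict to AI messages in which cache and reasoning counts live in nested `*_token_details` dicts. The shape below is an assumption based on langchain-core's `UsageMetadata`; exact field names may vary across providers and versions:

```python
# Illustrative usage_metadata as emitted by a provider integration (assumed shape).
usage_metadata = {
    "input_tokens": 300,
    "output_tokens": 150,
    "total_tokens": 450,
    "input_token_details": {"cache_read": 200},  # prompt tokens served from cache
    "output_token_details": {"reasoning": 90},   # hidden reasoning tokens
}

# A callback that only reads input_tokens/output_tokens misses these nested
# details; counting them requires descending into the *_token_details dicts.
cache_read = (usage_metadata.get("input_token_details") or {}).get("cache_read", 0)
reasoning = (usage_metadata.get("output_token_details") or {}).get("reasoning", 0)
print(cache_read, reasoning)  # 200 90
```

This is why the callback previously reported zero for cache and reasoning tokens even when the provider supplied them: the top-level counts were tracked, but the nested detail fields were never read.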
Changes