feat(ai): langchain cached and reasoning tokens by skoob13 · Pull Request #258 · PostHog/posthog-python

feat(ai): langchain cached and reasoning tokens #258


Merged · skoob13 merged 5 commits into master from feat/reasoning-cached-tokens on Jun 13, 2025

Conversation

skoob13 (Contributor) commented on Jun 12, 2025

Problem

The LangChain callback doesn't count reasoning and cache write/read tokens.

Changes

  • Add cache write and read tokens (an illustrative parsing sketch follows this list).
  • Add reasoning tokens.
  • Add unit tests.
  • Update langchain versions in dev.
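
As a rough illustration of what the first two items involve (not the PR's actual implementation), provider usage payloads expose these counts under different keys. Here is a hedged sketch, assuming OpenAI-style and Anthropic-style usage dicts and a hypothetical helper name:

```python
# Hypothetical sketch: pull cache and reasoning token counts out of a raw
# provider usage dict. The key names follow the public OpenAI and Anthropic
# APIs; the helper itself is illustrative, not the callback's real code.
from typing import Any, Dict


def extract_extra_token_counts(usage: Dict[str, Any]) -> Dict[str, int]:
    counts = {"cache_read": 0, "cache_write": 0, "reasoning": 0}

    # OpenAI-style usage: nested *_tokens_details objects.
    prompt_details = usage.get("prompt_tokens_details") or {}
    completion_details = usage.get("completion_tokens_details") or {}
    counts["cache_read"] += prompt_details.get("cached_tokens") or 0
    counts["reasoning"] += completion_details.get("reasoning_tokens") or 0

    # Anthropic-style usage: flat cache read/creation fields.
    counts["cache_read"] += usage.get("cache_read_input_tokens") or 0
    counts["cache_write"] += usage.get("cache_creation_input_tokens") or 0

    return counts
```

For an OpenAI-style payload such as {"prompt_tokens_details": {"cached_tokens": 128}}, this would report 128 cache-read tokens and zero for the other categories.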

skoob13 requested review from a team and k11kirky on June 12, 2025 10:15
greptile-apps (bot) left a comment


PR Summary

Enhanced LangChain callback token tracking by adding comprehensive monitoring for reasoning and cache operations in the PostHog Python SDK.

  • Added cache token tracking in posthog/ai/langchain/callbacks.py to monitor both read and write operations across different LLM providers
  • Implemented reasoning token tracking with support for models like o1-mini and structured handling through a new ModelUsage dataclass (a rough sketch of its possible shape follows below)
  • Added detailed test cases in test_callbacks.py covering OpenAI and Anthropic token tracking scenarios
  • Updated minimum version requirements for LangChain dependencies in pyproject.toml to support new token tracking features

3 files reviewed, no comments
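
The summary mentions a new ModelUsage dataclass; its real definition lives in posthog/ai/langchain/callbacks.py, but a minimal guess at its shape, with field names assumed rather than taken from the SDK, might look like:

```python
# Hypothetical shape of a ModelUsage dataclass; the field names are
# assumptions, not the SDK's actual definition.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelUsage:
    input_tokens: Optional[int] = None
    output_tokens: Optional[int] = None
    cache_read_input_tokens: Optional[int] = None
    cache_creation_input_tokens: Optional[int] = None
    reasoning_tokens: Optional[int] = None
```

One reason such a sketch defaults the optional counts to None rather than 0 is to distinguish "provider did not report this" from "provider reported zero".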

@@ -683,13 +704,39 @@ def _parse_usage_model(

parsed_usage[type_key] = final_count

A reviewer commented:

Nitpick (not related to the change): Could captured_count (in captured_count = usage[model_key]) be None in any case? Should we add a null check here to prevent potential exceptions?

skoob13 (the PR author) replied:

It can be None. The code won't crash even though it's None, or did I miss the spot?

The reviewer replied:

Nope, nvm, overthinking :)
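
For readers curious how the accumulation can tolerate a None count, here is a hedged, self-contained illustration (the variable names echo the snippet quoted above; the actual callback logic may differ):

```python
# Illustrative only, not the PR's code: a None-tolerant accumulation in the
# spirit of `parsed_usage[type_key] = final_count`.
usage = {"cache_read_input_tokens": None}  # hypothetical provider payload
parsed_usage = {}

model_key = type_key = "cache_read_input_tokens"
captured_count = usage.get(model_key)  # may be None
final_count = (parsed_usage.get(type_key) or 0) + (captured_count or 0)
parsed_usage[type_key] = final_count  # stays 0 instead of raising a TypeError
```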

skoob13 requested a review from Twixes on June 12, 2025 10:47
skoob13 force-pushed the feat/reasoning-cached-tokens branch from 3baccfe to 2a6f4f8 on June 13, 2025 09:06
skoob13 merged commit 52df246 into master on Jun 13, 2025 (7 checks passed)
skoob13 deleted the feat/reasoning-cached-tokens branch on June 13, 2025 13:02