
feat(node): Add Anthropic AI integration #17348


Open · wants to merge 4 commits into develop from the anthropic-integration branch

Conversation

RulaKhaled (Member)

This PR adds official support for instrumenting Anthropic AI SDK calls in Node.js with Sentry tracing, following the OpenTelemetry semantic conventions for Generative AI.

We instrument the following Anthropic AI SDK methods:

  • messages.create
  • messages.countTokens
  • models.get
  • models.retrieve
  • completions.create

Supported in:

  • Node.js (ESM and CJS)
  • Cloudflare Workers and Vercel Edge Functions exports will be added in a follow-up.

The anthropicAIIntegration() accepts the following options:

// The integration respects your sendDefaultPii client option
interface AnthropicAiOptions {
  recordInputs?: boolean;   // Whether to record prompt messages
  recordOutputs?: boolean;  // Whether to record response text
}
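For context, wiring the integration into an app might look like the sketch below. This is a config fragment, not taken from the PR: the integration name matches the PR title, but the exact registration shape and the option values shown are assumptions.

```typescript
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: '__YOUR_DSN__',
  tracesSampleRate: 1.0,
  integrations: [
    // Per the PR description, recordInputs/recordOutputs fall back to the
    // client's sendDefaultPii setting when not set; overriding is optional.
    Sentry.anthropicAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
```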

@RulaKhaled RulaKhaled force-pushed the anthropic-integration branch from 11ee820 to f88aef1 Compare August 7, 2025 11:58
github-actions bot (Contributor) commented Aug 7, 2025

size-limit report 📦

Path Size % Change Change
@sentry/browser 24.16 kB - -
@sentry/browser - with treeshaking flags 22.73 kB - -
@sentry/browser (incl. Tracing) 39.87 kB - -
@sentry/browser (incl. Tracing, Replay) 77.99 kB - -
@sentry/browser (incl. Tracing, Replay) - with treeshaking flags 67.83 kB - -
@sentry/browser (incl. Tracing, Replay with Canvas) 82.69 kB - -
@sentry/browser (incl. Tracing, Replay, Feedback) 94.79 kB - -
@sentry/browser (incl. Feedback) 40.83 kB - -
@sentry/browser (incl. sendFeedback) 28.81 kB - -
@sentry/browser (incl. FeedbackAsync) 33.7 kB - -
@sentry/react 25.88 kB - -
@sentry/react (incl. Tracing) 41.86 kB - -
@sentry/vue 28.64 kB - -
@sentry/vue (incl. Tracing) 41.69 kB - -
@sentry/svelte 24.18 kB - -
CDN Bundle 25.66 kB - -
CDN Bundle (incl. Tracing) 39.76 kB - -
CDN Bundle (incl. Tracing, Replay) 75.82 kB - -
CDN Bundle (incl. Tracing, Replay, Feedback) 81.27 kB - -
CDN Bundle - uncompressed 74.96 kB - -
CDN Bundle (incl. Tracing) - uncompressed 117.59 kB - -
CDN Bundle (incl. Tracing, Replay) - uncompressed 231.88 kB - -
CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed 244.48 kB - -
@sentry/nextjs (client) 43.89 kB - -
@sentry/sveltekit (client) 40.32 kB - -
@sentry/node-core 47.99 kB -0.01% -1 B 🔽
⛔️ @sentry/node (max: 148 kB) 148.16 kB +0.78% +1.13 kB 🔺
@sentry/node - without tracing 92.24 kB - -
@sentry/aws-serverless 103.67 kB +0.01% +1 B 🔺


@RulaKhaled RulaKhaled force-pushed the anthropic-integration branch from 59dda8c to 7feef93 Compare August 7, 2025 12:09
@RulaKhaled RulaKhaled marked this pull request as ready for review August 11, 2025 07:36
cursor[bot]

This comment was marked as outdated.

@RulaKhaled RulaKhaled force-pushed the anthropic-integration branch from 77ee440 to fdb0059 Compare August 11, 2025 13:31
  options?: AnthropicAiOptions,
): (...args: T) => Promise<R> {
  return async function instrumentedMethod(...args: T): Promise<R> {
    const finalOptions = options || getOptionsFromIntegration();
Member
I would rename this a bit as it's not entirely clear from the function name what this function does. It's basically just checking if the options allow the inputs/outputs to be recorded.

Suggested change
const finalOptions = options || getOptionsFromIntegration();
const finalOptions = options || getRecordingOptionsFromIntegration();

And could it be that this is the same function?
https://github.com/getsentry/sentry-javascript/pull/17348/files#diff-c9e6a17a3ac925c55873d82ff00c973c66d21433aec89430112dc3f10ff849cfR27

Member Author
I like the new name! And yeah, there's only one in core and one in node; not worth exporting since they're not an exact match.

response.usage.cache_creation_input_tokens,
response.usage.cache_read_input_tokens,
);
}

Bug: Incorrect Token Count Due to Parameter Misuse

In addResponseAttributes, setTokenUsageAttributes receives response.usage.cache_read_input_tokens for its cachedOutputTokens parameter. Since cache_read_input_tokens are actually input tokens, this leads to an incorrect total token count. The cachedOutputTokens parameter name is also misleading.

Additional Locations (1)
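A minimal sketch of the accounting the bot is asking for: cache-read and cache-creation tokens are input-side tokens, so they belong in the input total rather than an output bucket. The helper name and the assumption that `input_tokens` excludes the cache fields are hypothetical; the attribute keys are the real OTel GenAI convention names.

```typescript
// Hypothetical shape of Anthropic's response.usage object.
interface Usage {
  input_tokens?: number;
  output_tokens?: number;
  cache_creation_input_tokens?: number;
  cache_read_input_tokens?: number;
}

// Sketch: fold both cache fields into the *input* total. Whether
// input_tokens already includes them is an assumption made here.
function tokenUsageAttributes(usage: Usage): Record<string, number> {
  const cachedInput =
    (usage.cache_creation_input_tokens ?? 0) + (usage.cache_read_input_tokens ?? 0);
  const input = (usage.input_tokens ?? 0) + cachedInput;
  const output = usage.output_tokens ?? 0;
  return {
    'gen_ai.usage.input_tokens': input,
    'gen_ai.usage.output_tokens': output,
    'gen_ai.usage.total_tokens': input + output,
  };
}
```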

// Models.countTokens
if ('input_tokens' in response) {
  span.setAttributes({ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: JSON.stringify(response.input_tokens) });
}

Bug: Token Count Attribute Misuse

The GEN_AI_RESPONSE_TEXT_ATTRIBUTE is incorrectly set with input_tokens for Models.countTokens responses. This attribute is meant for actual response text, not numerical token counts, which are already captured by setTokenUsageAttributes.

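A hedged sketch of what the fix could look like: record the count under the token-usage attribute rather than the response-text attribute. The attribute key is the real OTel GenAI semantic convention name; the helper function is hypothetical.

```typescript
// OTel GenAI semantic convention key for input token usage.
const GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE = 'gen_ai.usage.input_tokens';

// Sketch: messages.countTokens returns { input_tokens: number };
// the count is numeric usage data, not response text.
function countTokensAttributes(response: { input_tokens?: number }): Record<string, number> {
  const attrs: Record<string, number> = {};
  if (typeof response.input_tokens === 'number') {
    attrs[GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE] = response.input_tokens;
  }
  return attrs;
}
```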
