feat(node): Add Anthropic AI integration #17348
Conversation
```ts
  options?: AnthropicAiOptions,
): (...args: T) => Promise<R> {
  return async function instrumentedMethod(...args: T): Promise<R> {
    const finalOptions = options || getOptionsFromIntegration();
```
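For context, the fallback pattern above can be sketched in a self-contained way. The names `RecordingOptions` and `defaultRecordingOptions` are hypothetical stand-ins for the PR's `AnthropicAiOptions` and `getOptionsFromIntegration()`:

```typescript
// Minimal sketch, assuming options are simple recording flags.
interface RecordingOptions {
  recordInputs?: boolean;
  recordOutputs?: boolean;
}

// Stand-in for the integration-level lookup; the real one reads the Sentry client.
function defaultRecordingOptions(): RecordingOptions {
  return { recordInputs: false, recordOutputs: false };
}

function instrumentMethod<T extends unknown[], R>(
  original: (...args: T) => Promise<R>,
  options?: RecordingOptions,
): (...args: T) => Promise<R> {
  return async function instrumentedMethod(...args: T): Promise<R> {
    // Explicit per-call options take precedence over integration defaults.
    const finalOptions = options ?? defaultRecordingOptions();
    if (finalOptions.recordInputs) {
      // ...here the real integration would attach serialized args to the span
    }
    return original(...args);
  };
}
```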
I would rename this a bit as it's not entirely clear from the function name what this function does. It's basically just checking if the options allow the inputs/outputs to be recorded.
```diff
- const finalOptions = options || getOptionsFromIntegration();
+ const finalOptions = options || getRecordingOptionsFromIntegration();
```
And could it be that this is the same function?
https://github.com/getsentry/sentry-javascript/pull/17348/files#diff-c9e6a17a3ac925c55873d82ff00c973c66d21433aec89430112dc3f10ff849cfR27
I like the new name! And yeah, there's only one in core and one in node; they're not worth exporting since they're not an exact match.
```ts
    response.usage.cache_creation_input_tokens,
    response.usage.cache_read_input_tokens,
  );
}
```
**Bug: Incorrect Token Count Due to Parameter Misuse**

In `addResponseAttributes`, `setTokenUsageAttributes` receives `response.usage.cache_read_input_tokens` for its `cachedOutputTokens` parameter. Since `cache_read_input_tokens` are actually input tokens, this leads to an incorrect total token count. The `cachedOutputTokens` parameter name is also misleading.

Additional Locations (1)
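For illustration, a self-contained sketch of the corrected accounting. The field names follow Anthropic's usage object, and it assumes `input_tokens` excludes the cache buckets (an assumption worth verifying against the SDK types):

```typescript
// Hypothetical helper, not code from this PR.
interface Usage {
  input_tokens: number;
  output_tokens: number;
  cache_creation_input_tokens?: number;
  cache_read_input_tokens?: number;
}

function totalTokens(usage: Usage): number {
  // cache_read_input_tokens and cache_creation_input_tokens are input-side
  // tokens: they count toward input, never toward output.
  const cachedInput =
    (usage.cache_creation_input_tokens ?? 0) + (usage.cache_read_input_tokens ?? 0);
  return usage.input_tokens + cachedInput + usage.output_tokens;
}
```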
```ts
// Models.countTokens
if ('input_tokens' in response) {
  span.setAttributes({ [GEN_AI_RESPONSE_TEXT_ATTRIBUTE]: JSON.stringify(response.input_tokens) });
}
```
This PR adds official support for instrumenting Anthropic AI SDK calls in Node with Sentry tracing, following OpenTelemetry semantic conventions for Generative AI.
We instrument the following Anthropic AI SDK methods:
Supported in:
The `anthropicAIIntegration()` integration accepts the following options:
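A hedged usage sketch; the option names `recordInputs`/`recordOutputs` are assumptions based on Sentry's other AI integrations and should be checked against the final option types in this PR:

```typescript
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: '__DSN__',
  tracesSampleRate: 1.0,
  integrations: [
    // Option names below are illustrative assumptions, not confirmed by this PR text.
    Sentry.anthropicAIIntegration({
      recordInputs: true,  // attach prompts to spans
      recordOutputs: true, // attach completions to spans
    }),
  ],
});
```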