LiteLLM incorrect embeddings classification #4908

@Zach10za

Description

How do you use Sentry?

Sentry Saas (sentry.io)

Version

2.41.0

Steps to Reproduce

  1. Add the LiteLLMIntegration to your sentry_sdk.init() integrations list (a minimal init sketch follows these steps).
  2. Run an embedding using LiteLLM (v1.77.5 in my case):
from litellm import embedding

response = embedding(
    model='openai/text-embedding-3-small',
    input='Some text to test embeddings',
)
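
For reference, a minimal init along these lines reproduces it. This is a sketch only: the import path for LiteLLMIntegration is my assumption based on sentry-sdk's usual integration layout, and the DSN is a placeholder.

import sentry_sdk
# Import path is an assumption; adjust if LiteLLMIntegration lives elsewhere.
from sentry_sdk.integrations.litellm import LiteLLMIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,
    integrations=[LiteLLMIntegration()],
)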

Expected Result

I expect the operation name to be "embedding".

Looking at the current implementation, it appears you check for a "messages" key in the keyword arguments passed to the input callback. As my screenshot below shows, "messages" is also present in embedding calls, so it cannot distinguish the two call types.

The input callback kwargs also appear to include a "call_type" key whose value is either "completion" or "embedding"; that would be a more appropriate value to check.
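
A minimal sketch of the suggested check, assuming the callback receives kwargs roughly as described above. The function name and the returned operation strings are illustrative, not the actual sentry-sdk code:

# Hedged sketch only: the function name, return values, and kwargs layout are
# assumptions for illustration, not the real sentry-sdk implementation.
def classify_litellm_call(kwargs):
    # "call_type" is reported to be either "completion" or "embedding",
    # whereas "messages" is present for both call types, so it is the
    # more reliable signal for picking the span operation.
    if kwargs.get("call_type") == "embedding":
        return "embedding"
    return "chat"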

Actual Result

In the span details in the Sentry dashboard, the call is classified as chat when it should be embedding.

[Screenshot: Sentry span details showing the embedding call classified as "chat"]
