Skip to content

Conversation

@maorfr
Copy link
Collaborator

@maorfr maorfr commented Aug 10, 2025

part of https://issues.redhat.com/browse/MGMT-21414

follow up on #106

related to lightspeed-core/lightspeed-stack#370

a workaround for BerriAI/litellm#13447 (workaround for llamastack/llama-stack#3072)

Summary by CodeRabbit

  • Chores
    • Updated the installation process to use a specific version of the litellm[proxy] Python package for improved stability.

@openshift-ci openshift-ci bot requested review from carbonin and keitwb August 10, 2025 10:33
@openshift-ci
Copy link

openshift-ci bot commented Aug 10, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: maorfr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai
Copy link

coderabbitai bot commented Aug 10, 2025

Walkthrough

The version of the litellm[proxy] Python package installed via Dockerfiles was changed from the latest available to a pinned version 1.75.3. No other modifications were made to the Dockerfile commands or package lists.

Changes

Cohort / File(s) Change Summary
Pin litellm[proxy] version in Dockerfiles
Containerfile.add_llama_to_lightspeed, Containerfile.assisted-chat
The installation of litellm[proxy] was updated to pin the version to 1.75.3 instead of using the latest available.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Possibly related PRs

  • Add litellm proxy extra #103: Also modifies the Dockerfile to adjust the installation of the litellm package, closely related to the current change by refining the installation command.

Suggested labels

lgtm

Suggested reviewers

  • maorfr
  • keitwb

Note

🔌 MCP (Model Context Protocol) integration is now available in Early Access!

Pro users can now connect to remote MCP servers under the Integrations page to get reviews and chat conversations that understand additional development context.

✨ Finishing Touches
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share
🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

‼️ IMPORTANT
Auto-reply has been disabled for this repository in the CodeRabbit settings. The CodeRabbit bot will not respond to your replies unless it is explicitly tagged.

  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@maorfr maorfr force-pushed the pin-litellm-1.75.3 branch from aad8152 to a5fd3b2 Compare August 10, 2025 10:34
@maorfr maorfr changed the title ping litellm to 1.75.3 pin litellm to 1.75.3 Aug 10, 2025
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🧹 Nitpick comments (5)
Containerfile.add_llama_to_lightspeed (3)

16-16: Quote extras and avoid cache to prevent shell globbing and reduce image size.

Square brackets can be interpreted by the shell; quoting avoids surprises. Also add --no-cache-dir.

Apply:

-RUN python3.12 -m pip install litellm[proxy]==1.75.3
+RUN python3.12 -m pip install --no-cache-dir 'litellm[proxy]==1.75.3'

16-16: Single source of truth for the version across Dockerfiles.

Define an ARG and reuse it here and in Containerfile.assisted-chat to avoid drift.

Add near the top of the Dockerfile:

ARG LITELLM_VERSION=1.75.3

Then change the install line to:

RUN python3.12 -m pip install --no-cache-dir "litellm[proxy]==${LITELLM_VERSION}"

16-16: Document the rationale (link the upstream issue) for future maintainers.

Add a brief comment next to the pin so folks know when/why to unpin.

Example:

# Workaround: pin litellm due to llama-stack issue #3072
RUN python3.12 -m pip install --no-cache-dir 'litellm[proxy]==1.75.3'
Containerfile.assisted-chat (2)

7-7: Quote extras and avoid pip cache; consider centralizing the version.

Avoid shell globbing with brackets and keep images lean. Reuse a shared ARG to keep both Dockerfiles aligned.

Apply:

-RUN python3 -m pip install pyyaml pyaml litellm[proxy]==1.75.3 sqlalchemy mcp psycopg2-binary
+RUN python3 -m pip install --no-cache-dir pyyaml pyaml 'litellm[proxy]==1.75.3' sqlalchemy mcp psycopg2-binary

Optionally:

ARG LITELLM_VERSION=1.75.3
RUN python3 -m pip install --no-cache-dir pyyaml pyaml "litellm[proxy]==${LITELLM_VERSION}" sqlalchemy mcp psycopg2-binary

7-7: Add context comment for the pin (upstream issue link).

A short note helps future upgrades.

Example:

# Workaround: pin litellm due to llama-stack issue #3072
RUN python3 -m pip install --no-cache-dir pyyaml pyaml 'litellm[proxy]==1.75.3' sqlalchemy mcp psycopg2-binary
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4b04274 and a5fd3b2.

📒 Files selected for processing (2)
  • Containerfile.add_llama_to_lightspeed (1 hunks)
  • Containerfile.assisted-chat (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Red Hat Konflux / assisted-chat-saas-main-on-pull-request
🔇 Additional comments (3)
Containerfile.add_llama_to_lightspeed (1)

16-16: Good pin to litellm[proxy]==1.75.3 aligned with the PR’s workaround intent.

Keep it pinned until the upstream issue is resolved.

Containerfile.assisted-chat (2)

7-7: Pin looks good and consistent with the other Dockerfile.

This matches the intended workaround.


7-7: Dependency resolution sanity check complete

I’ve verified that in Containerfile.assisted-chat:

  • The base image is pinned by digest.
  • Line 7 installs all Python packages in one pip install invocation (pyyaml, pyaml, litellm[proxy]==1.75.3, sqlalchemy, mcp, psycopg2-binary), so litellm’s transitive dependencies are resolved alongside your top-level installs.
  • There are no other pip install calls in this layer that could introduce version skew.
  • litellm[proxy]==1.75.3 is consistently pinned across both Containerfiles.

No resolver conflicts detected—ready to merge.

RUN python3.12 -m pip install pyyaml pyaml

RUN python3.12 -m pip install litellm[proxy]
RUN python3.12 -m pip install litellm[proxy]==1.75.3
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Verification agent

🧩 Analysis chain

Verify compatibility with Python 3.12 and ensure no other unpinned installs conflict.

The base uses python3.12 here; confirm litellm[proxy]==1.75.3 supports it and that transitive deps resolve cleanly with other global installs (sqlalchemy, psycopg2-binary, mcp).

Run:


🏁 Script executed:

#!/bin/bash
# Check for other litellm references and versions to ensure consistency
rg -n "litellm(\[[^]]*\])?"

# Optional: sanity-check Python version references in Dockerfiles
rg -n "python3(\.12)? -m pip install.*litellm"

# If you want to confirm upstream support via docs:
# (Provide results/links back in the PR)

Length of output: 514


🌐 Web query:

Does litellm version 1.75.3 officially support Python 3.12 and include the [proxy] extra?

💡 Result:

As of August 10, 2025, the latest available version of LiteLLM is 1.74.0, which officially supports Python 3.12. (docs.litellm.ai) The [proxy] extra is also supported in this version. To install LiteLLM with the [proxy] extra, you can use the following command:

pip install 'litellm[proxy]'

This command installs LiteLLM along with the additional dependencies required for the proxy functionality. (github.com)

Please note that the version 1.75.3 you mentioned is not listed among the available releases. It's possible that this version has not been released yet or is not publicly available. For the most accurate and up-to-date information, I recommend checking the official LiteLLM GitHub repository or the PyPI page.

Citations:


Pin LiteLLM to an existing release

The pinned litellm[proxy]==1.75.3 isn’t available on PyPI or in the official release notes. The latest public release is 1.74.0, which supports Python 3.12 and includes the [proxy] extra. Update both Dockerfiles accordingly:

  • Containerfile.assisted-chat (line 7):
- RUN python3 -m pip install pyyaml pyaml litellm[proxy]==1.75.3 sqlalchemy mcp psycopg2-binary
+ RUN python3 -m pip install pyyaml pyaml litellm[proxy]==1.74.0 sqlalchemy mcp psycopg2-binary
  • Containerfile.add_llama_to_lightspeed (line 16):
- RUN python3.12 -m pip install litellm[proxy]==1.75.3
+ RUN python3.12 -m pip install litellm[proxy]==1.74.0

Also review and pin versions for your other high-level installs (pyyaml, pyaml, sqlalchemy, mcp, psycopg2-binary) to ensure reproducible, conflict-free builds.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
RUN python3.12 -m pip install litellm[proxy]==1.75.3
RUN python3.12 -m pip install litellm[proxy]==1.74.0
🤖 Prompt for AI Agents
In Containerfile.add_llama_to_lightspeed at line 16, the LiteLLM package is
pinned to version 1.75.3, which is not available on PyPI. Change the version to
1.74.0, the latest public release supporting Python 3.12 and the [proxy] extra.
Additionally, review and pin specific versions for other high-level dependencies
like pyyaml, pyaml, sqlalchemy, mcp, and psycopg2-binary to ensure reproducible
and conflict-free builds.

@eranco74
Copy link
Collaborator

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Aug 10, 2025
@maorfr maorfr merged commit 3dab15b into rh-ecosystem-edge:main Aug 10, 2025
5 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants