pin litellm to 1.75.3 #107
Conversation
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: maorfr
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Walkthrough
The litellm[proxy] dependency is pinned to version 1.75.3 in both Containerfile.add_llama_to_lightspeed and Containerfile.assisted-chat.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
Force-pushed from aad8152 to a5fd3b2.
Actionable comments posted: 1
🧹 Nitpick comments (5)
Containerfile.add_llama_to_lightspeed (3)
16-16: Quote extras and avoid cache to prevent shell globbing and reduce image size.
Square brackets can be interpreted by the shell; quoting avoids surprises. Also add --no-cache-dir.
Apply:
-RUN python3.12 -m pip install litellm[proxy]==1.75.3
+RUN python3.12 -m pip install --no-cache-dir 'litellm[proxy]==1.75.3'
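For illustration, a minimal shell sketch of why the unquoted extra can misbehave; it assumes an interactive zsh session (where an unmatched glob is a hard error by default), whereas the /bin/sh used by RUN only rewrites the argument if a file actually matches the [proxy] pattern:

```sh
# zsh treats [proxy] as a character-class glob; with no matching file the command aborts:
#   zsh: no matches found: litellm[proxy]==1.75.3
pip install litellm[proxy]==1.75.3

# Quoting passes the requirement to pip verbatim in any shell:
pip install 'litellm[proxy]==1.75.3'
```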
16-16: Single source of truth for the version across Dockerfiles.
Define an ARG and reuse it here and in Containerfile.assisted-chat to avoid drift.
Add near the top of the Dockerfile:
ARG LITELLM_VERSION=1.75.3
Then change the install line to:
RUN python3.12 -m pip install --no-cache-dir "litellm[proxy]==${LITELLM_VERSION}"
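If the shared ARG is adopted, the pin can also be overridden at build time without editing the file; a small sketch, assuming podman is used to build (docker build accepts the same flags) and an illustrative image tag:

```sh
# Build with the default pinned version declared via ARG in the Containerfile
podman build -f Containerfile.add_llama_to_lightspeed -t lightspeed-llama:test .

# One-off build against a different litellm release, e.g. to test an unpin
podman build --build-arg LITELLM_VERSION=<candidate-version> \
  -f Containerfile.add_llama_to_lightspeed -t lightspeed-llama:test .
```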
16-16: Document the rationale (link the upstream issue) for future maintainers.
Add a brief comment next to the pin so folks know when/why to unpin.
Example:
# Workaround: pin litellm due to llama-stack issue #3072
RUN python3.12 -m pip install --no-cache-dir 'litellm[proxy]==1.75.3'
Containerfile.assisted-chat (2)
7-7: Quote extras and avoid pip cache; consider centralizing the version.
Avoid shell globbing with brackets and keep images lean. Reuse a shared ARG to keep both Dockerfiles aligned.
Apply:
-RUN python3 -m pip install pyyaml pyaml litellm[proxy]==1.75.3 sqlalchemy mcp psycopg2-binary
+RUN python3 -m pip install --no-cache-dir pyyaml pyaml 'litellm[proxy]==1.75.3' sqlalchemy mcp psycopg2-binary
Optionally:
ARG LITELLM_VERSION=1.75.3
RUN python3 -m pip install --no-cache-dir pyyaml pyaml "litellm[proxy]==${LITELLM_VERSION}" sqlalchemy mcp psycopg2-binary
7-7: Add context comment for the pin (upstream issue link).
A short note helps future upgrades.
Example:
# Workaround: pin litellm due to llama-stack issue #3072
RUN python3 -m pip install --no-cache-dir pyyaml pyaml 'litellm[proxy]==1.75.3' sqlalchemy mcp psycopg2-binary
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
Containerfile.add_llama_to_lightspeed (1 hunks)
Containerfile.assisted-chat (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Red Hat Konflux / assisted-chat-saas-main-on-pull-request
🔇 Additional comments (3)
Containerfile.add_llama_to_lightspeed (1)
16-16: Good pin to litellm[proxy]==1.75.3 aligned with the PR’s workaround intent.
Keep it pinned until the upstream issue is resolved.
Containerfile.assisted-chat (2)
7-7: Pin looks good and consistent with the other Dockerfile.
This matches the intended workaround.
7-7: Dependency resolution sanity check complete
I’ve verified that in Containerfile.assisted-chat:
- The base image is pinned by digest.
- Line 7 installs all Python packages in one pip install invocation (pyyaml, pyaml, litellm[proxy]==1.75.3, sqlalchemy, mcp, psycopg2-binary), so litellm’s transitive dependencies are resolved alongside your top-level installs.
- There are no other pip install calls in this layer that could introduce version skew.
- litellm[proxy]==1.75.3 is consistently pinned across both Containerfiles.
No resolver conflicts detected; ready to merge.
 RUN python3.12 -m pip install pyyaml pyaml
-RUN python3.12 -m pip install litellm[proxy]
+RUN python3.12 -m pip install litellm[proxy]==1.75.3
💡 Verification agent
🧩 Analysis chain
Verify compatibility with Python 3.12 and ensure no other unpinned installs conflict.
The base uses python3.12 here; confirm litellm[proxy]==1.75.3 supports it and that transitive deps resolve cleanly with other global installs (sqlalchemy, psycopg2-binary, mcp).
Run:
🏁 Script executed:
#!/bin/bash
# Check for other litellm references and versions to ensure consistency
rg -n "litellm(\[[^]]*\])?"
# Optional: sanity-check Python version references in Dockerfiles
rg -n "python3(\.12)? -m pip install.*litellm"
# If you want to confirm upstream support via docs:
# (Provide results/links back in the PR)
Length of output: 514
🌐 Web query:
Does litellm version 1.75.3 officially support Python 3.12 and include the [proxy] extra?
💡 Result:
As of August 10, 2025, the latest available version of LiteLLM is 1.74.0, which officially supports Python 3.12. (docs.litellm.ai) The [proxy] extra is also supported in this version. To install LiteLLM with the [proxy] extra, you can use the following command:
pip install 'litellm[proxy]'
This command installs LiteLLM along with the additional dependencies required for the proxy functionality. (github.com)
Please note that the version 1.75.3 you mentioned is not listed among the available releases. It's possible that this version has not been released yet or is not publicly available. For the most accurate and up-to-date information, I recommend checking the official LiteLLM GitHub repository or the PyPI page.
Citations:
- 1: https://docs.litellm.ai/release_notes?utm_source=openai
- 2: https://github.com/BerriAI/litellm?utm_source=openai
Pin LiteLLM to an existing release
The pinned litellm[proxy]==1.75.3 isn’t available on PyPI or in the official release notes. The latest public release is 1.74.0, which supports Python 3.12 and includes the [proxy] extra. Update both Dockerfiles accordingly:
- Containerfile.assisted-chat (line 7):
- RUN python3 -m pip install pyyaml pyaml litellm[proxy]==1.75.3 sqlalchemy mcp psycopg2-binary
+ RUN python3 -m pip install pyyaml pyaml litellm[proxy]==1.74.0 sqlalchemy mcp psycopg2-binary
- Containerfile.add_llama_to_lightspeed (line 16):
- RUN python3.12 -m pip install litellm[proxy]==1.75.3
+ RUN python3.12 -m pip install litellm[proxy]==1.74.0
Also review and pin versions for your other high-level installs (pyyaml, pyaml, sqlalchemy, mcp, psycopg2-binary) to ensure reproducible, conflict-free builds.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- RUN python3.12 -m pip install litellm[proxy]==1.75.3
+ RUN python3.12 -m pip install litellm[proxy]==1.74.0
🤖 Prompt for AI Agents
In Containerfile.add_llama_to_lightspeed at line 16, the LiteLLM package is
pinned to version 1.75.3, which is not available on PyPI. Change the version to
1.74.0, the latest public release supporting Python 3.12 and the [proxy] extra.
Additionally, review and pin specific versions for other high-level dependencies
like pyyaml, pyaml, sqlalchemy, mcp, and psycopg2-binary to ensure reproducible
and conflict-free builds.
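One hedged way to act on that last point without hand-picking versions is to resolve the top-level requirements once with pip-tools and install from the generated lock file; a sketch, assuming a requirements.in is added to the repo and pip-tools is acceptable in the build (neither is part of this PR):

```sh
# requirements.in: top-level packages only; only the deliberate workaround is pinned
cat > requirements.in <<'EOF'
pyyaml
pyaml
litellm[proxy]==1.75.3
sqlalchemy
mcp
psycopg2-binary
EOF

# Resolve the full dependency tree into a reproducible lock file
python3 -m pip install pip-tools
pip-compile requirements.in -o requirements.txt

# The Containerfile would then install from the lock file, e.g.:
#   COPY requirements.txt /tmp/requirements.txt
#   RUN python3 -m pip install --no-cache-dir -r /tmp/requirements.txt
```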
/lgtm
part of https://issues.redhat.com/browse/MGMT-21414
follow up on #106
related to lightspeed-core/lightspeed-stack#370
a workaround for BerriAI/litellm#13447 (workaround for llamastack/llama-stack#3072)