Add litellm proxy extra #103
Conversation
Walkthrough: The Dockerfiles were updated to install the `litellm[proxy]` extra.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
Without the extra, llama-stack runs into the following error:

```
Missing dependency No module named 'backoff'. Run `pip install 'litellm[proxy]'`
```

Not sure what changed, but now it's needed.
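As a quick sanity check, the fix can be verified by importing the modules the proxy extra pulls in. A minimal sketch, assuming the image built from this Containerfile is tagged `lightspeed-llama:latest` (a hypothetical tag, adjust to your build):

```
# Hypothetical smoke test: 'backoff' is installed by the litellm[proxy] extra,
# so this import succeeds only after the extra is added to the image.
podman run --rm lightspeed-llama:latest \
  python3.12 -c "import backoff, litellm; print('litellm[proxy] deps OK')"
```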
[APPROVALNOTIFIER] This PR is APPROVED.

This pull-request has been approved by: carbonin, omertuc.

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Actionable comments posted: 0
🧹 Nitpick comments (1)
Containerfile.add_llama_to_lightspeed (1)
16-16: Quote the extras and avoid caching to reduce image size. To prevent shell globbing on `[]` and keep layers lean, consider:

```
-RUN python3.12 -m pip install litellm[proxy]
+RUN python3.12 -m pip install --no-cache-dir "litellm[proxy]"
```

Optionally pin a known-good version for reproducible builds (e.g., via a build arg or constraints file).
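A minimal sketch of the version-pinning option mentioned above, assuming a build arg in the Containerfile; the version shown is a placeholder, not one validated against this repo:

```
# Hypothetical: pin litellm via a build arg so rebuilds are reproducible.
# Override at build time: podman build --build-arg LITELLM_VERSION=<known-good> ...
ARG LITELLM_VERSION=1.44.0
RUN python3.12 -m pip install --no-cache-dir "litellm[proxy]==${LITELLM_VERSION}"
```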
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
Containerfile.add_llama_to_lightspeed (1 hunks)
🔇 Additional comments (1)
Containerfile.add_llama_to_lightspeed (1)
16-16: LGTM: adding the proxy extra fixes the missing 'backoff' runtime error. Change matches the PR goal and should resolve llama-stack's import issue.
Merged commit bd661b0 into rh-ecosystem-edge:main.