Add integration with Nebius AI Studio #2454

base: main
Conversation
@markbackman @mattieruth Dear team, could you please review? This MR only adds new functionality (compatibility with the Nebius AI Studio provider) to enable broader usage of the framework. Thank you!
Looks good! Thanks for contributing. Just a few comments.

Also, a few other to-dos:

- Update `env.example` with `NEBIUS_API_KEY` (see the sketch after this list)
- Update the README with a link to the docs section
- Add a docs entry following other LLM examples. Docs repo: https://github.com/pipecat-ai/docs
- Lint the code. Install the pre-commit hook with `uv run pre-commit install` from the root of the repo. You can run `./scripts/fix-ruff.sh` to clean up.
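As a rough illustration of the first to-do, the example bot would read the key from the environment. The module path, class name, and model id below are assumptions for illustration only, not copied from this PR:

```python
import os

# Assumed module path and class name; the actual names come from this PR's service file.
from pipecat.services.nebius.llm import NebiusLLMService

llm = NebiusLLMService(
    api_key=os.getenv("NEBIUS_API_KEY"),  # set in env.example / your .env
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # hypothetical model id
)
```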
```python
# SPDX-License-Identifier: BSD 2-Clause License
#

import sys
```
Keep this file but remove all contents. The deprecation notice you see in others applies to older services only.
Can you remove the contents of this file? This should be blank.
This file should be blank.
Codecov Report: ❌ patch coverage check failed.
Mark, thank you for the extensive review! I'll fix everything once I get back from vacation.
@markbackman Hey again! Could you please approve the MR?
Sorry for the delay. There are still a few to-dos before this is ready.
@markbackman thank you for the comments; I've fixed your suggestions. Could you please review?
Hi @markbackman, kind ping here :)
(Diff context: the PR modifies the README header badge line, dropping the DeepWiki badge.)
You probably need to rebase. The only change to the README should be to add Nebius to the LLM services list.
(Diff context: new service file, 51 lines added.)
Service implementation looks good!
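For readers unfamiliar with how Pipecat wraps OpenAI-compatible providers: such services typically subclass OpenAILLMService and point it at the provider's base URL. The sketch below is an illustration under assumptions, not the PR's actual file; the class name, import path, endpoint, and default model are all placeholders.

```python
# Illustrative sketch only: class name, import path, endpoint, and default model
# are assumptions, not copied from this PR's implementation.
from pipecat.services.openai.llm import OpenAILLMService  # assumed module path


class NebiusLLMService(OpenAILLMService):
    """LLM service for Nebius AI Studio, which exposes an OpenAI-compatible API."""

    def __init__(
        self,
        *,
        api_key: str,
        base_url: str = "https://api.studio.nebius.ai/v1/",  # assumed endpoint
        model: str = "meta-llama/Meta-Llama-3.1-70B-Instruct",  # assumed default
        **kwargs,
    ):
        super().__init__(api_key=api_key, base_url=base_url, model=model, **kwargs)
```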
```python
        audio_in_enabled=True,
        audio_out_enabled=True,
        vad_analyzer=SileroVADAnalyzer(),
    ),
```
`transport_params` should be:

```python
transport_params = {
    "daily": lambda: DailyParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
        vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
        turn_analyzer=LocalSmartTurnAnalyzerV3(params=SmartTurnParams()),
    ),
    "twilio": lambda: FastAPIWebsocketParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
        vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
        turn_analyzer=LocalSmartTurnAnalyzerV3(params=SmartTurnParams()),
    ),
    "webrtc": lambda: TransportParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
        vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
        turn_analyzer=LocalSmartTurnAnalyzerV3(params=SmartTurnParams()),
    ),
}
```

That will require new imports:

```python
from pipecat.audio.turn.smart_turn.base_smart_turn import SmartTurnParams
from pipecat.audio.turn.smart_turn.local_smart_turn_v3 import LocalSmartTurnAnalyzerV3
from pipecat.audio.vad.vad_analyzer import VADParams
```
```python
    },
]

context = OpenAILLMContext(messages, tools)
```
Update to match the new universal context pattern:

```diff
- context = OpenAILLMContext(messages, tools)
+ context = LLMContext(messages, tools)
```
```python
]

context = OpenAILLMContext(messages, tools)
context_aggregator = llm.create_context_aggregator(context)
```
And:

```diff
- context_aggregator = llm.create_context_aggregator(context)
+ context_aggregator = LLMContextAggregatorPair(context)
```
```python
async def on_client_connected(transport, client):
    logger.info(f"Client connected")
    # Kick off the conversation.
    await task.queue_frames([context_aggregator.user().get_context_frame()])
```
The new pattern for initiating the conversation is:

```diff
- await task.queue_frames([context_aggregator.user().get_context_frame()])
+ await task.queue_frames([LLMRunFrame()])
```
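Taken together, these three suggestions change the example roughly as sketched below, assuming `messages`, `tools`, `transport`, `task`, and `logger` are defined as in the example bot. The import paths for `LLMContext`, `LLMContextAggregatorPair`, and `LLMRunFrame` are assumptions based on the universal context pattern; check the current Pipecat source for the exact modules.

```python
# Sketch of the universal context pattern; import paths are assumptions.
from pipecat.frames.frames import LLMRunFrame  # assumed path
from pipecat.processors.aggregators.llm_context import LLMContext  # assumed path
from pipecat.processors.aggregators.llm_response_universal import (  # assumed path
    LLMContextAggregatorPair,
)

# One provider-agnostic context object plus a paired user/assistant aggregator.
context = LLMContext(messages, tools)
context_aggregator = LLMContextAggregatorPair(context)


@transport.event_handler("on_client_connected")
async def on_client_connected(transport, client):
    logger.info("Client connected")
    # Kick off the conversation: LLMRunFrame triggers one LLM turn with the
    # current context instead of pushing the context frame directly.
    await task.queue_frames([LLMRunFrame()])
```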
On track to be merged. Just fix up those last items and add a changelog. Then this should be good to go!
Friendly ping to review these comments when you get a moment.
Dear team,
Thank you for an amazing package. This MR adds Nebius AI Studio to the list of available providers.