feat: impl Responses API in oai-adapters #8417
Conversation
- Add GPT-5 Codex to llm-info package with 500k context and 150k max tokens
- Add model definition to models.ts for UI configuration
- Include GPT-5 Codex in OpenAI provider packages list
- Model supports chat and edit roles with tool_use capability
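A minimal sketch of what the model entry described above might look like. The field names (`contextLength`, `maxCompletionTokens`, `roles`, `capabilities`) are assumptions for illustration, not the actual schema of the llm-info package:

```typescript
// Hypothetical shape; llm-info's real type may differ.
interface ModelInfo {
  model: string;
  displayName: string;
  contextLength: number;        // context window in tokens
  maxCompletionTokens: number;  // max output tokens
  roles: string[];              // e.g. "chat", "edit"
  capabilities: string[];       // e.g. "tool_use"
}

// GPT-5 Codex entry matching the numbers in the PR description.
const gpt5Codex: ModelInfo = {
  model: "gpt-5-codex",
  displayName: "GPT-5 Codex",
  contextLength: 500_000,
  maxCompletionTokens: 150_000,
  roles: ["chat", "edit"],
  capabilities: ["tool_use"],
};
```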
✅ Review Complete — Code Review Summary
No issues found across 8 files
Looks good; I had several nitpicks, but none are blocking. It would be nice to eliminate the >20 `as any` type casts.
Reviewed changes from recent commits (found 1 issue).
1 issue found across 3 files
Prompt for AI agents (all 1 issue)
Understand the root cause of the following issue and fix it.
<file name="packages/openai-adapters/src/apis/openaiResponses.ts">
<violation number="1" location="packages/openai-adapters/src/apis/openaiResponses.ts:191">
Please pass the original ResponseOutputText part into createOutputTextPart so that annotations/logprobs survive the round-trip back into Responses payloads.</violation>
</file>
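A sketch of what the suggested fix could look like. The function name `createOutputTextPart` comes from the review comment, but its actual signature, and the exact shape of `ResponseOutputText`, are assumptions here; the point is simply to thread the original part through so `annotations`/`logprobs` survive the round-trip:

```typescript
// Hypothetical shape; the real type lives in the OpenAI SDK.
interface ResponseOutputText {
  type: "output_text";
  text: string;
  annotations: unknown[];
  logprobs?: unknown[];
}

// Accept the original part (when available) so fields beyond `text`
// are preserved instead of being reset to defaults.
function createOutputTextPart(
  text: string,
  original?: ResponseOutputText,
): ResponseOutputText {
  return {
    type: "output_text",
    text,
    annotations: original?.annotations ?? [],
    ...(original?.logprobs ? { logprobs: original.logprobs } : {}),
  };
}
```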
🎉 This PR is included in version 1.31.0 🎉 The release is available on: — Your semantic-release bot 📦🚀
🎉 This PR is included in version 1.4.0 🎉 The release is available on: — Your semantic-release bot 📦🚀
🎉 This PR is included in version 1.28.0 🎉 The release is available on: — Your semantic-release bot 📦🚀
Overview
This PR ports the logic from #7891 into openai-adapters. It converts Responses objects back into chat completion objects for streaming, implements the streaming vs. non-streaming ternary logic defined in `packages/openai-adapters/src/apis/base.ts`, is non-stateful, and generally tries to adhere to the logic from #7891.
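The core of the conversion described above can be sketched roughly as follows. The event and field names are assumptions based on the public Responses API streaming events and the Chat Completions chunk format, not the adapter's actual implementation:

```typescript
// Hypothetical subset of a Responses streaming event.
interface ResponsesTextDelta {
  type: "response.output_text.delta";
  delta: string;
}

// Hypothetical subset of a chat.completion.chunk.
interface ChatCompletionChunk {
  object: "chat.completion.chunk";
  choices: { index: number; delta: { content?: string } }[];
}

// Map one Responses text-delta event onto a chat completion chunk,
// so downstream consumers see the familiar streaming shape.
function toChatChunk(event: ResponsesTextDelta): ChatCompletionChunk {
  return {
    object: "chat.completion.chunk",
    choices: [{ index: 0, delta: { content: event.delta } }],
  };
}
```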
Testing
Create a `gpt5-codex-openai.yaml` somewhere, e.g. `continue/root`, then run `npm run start -- --config /path/to/continue//gpt5-codex-openai.yaml` and perform general smoke testing.