
Issue with output_type and Custom LLM Providers in openai-agents-python #1390

@hlahany

Description


Describe the bug

When using the output_type parameter of the Agent class with a custom LLM provider, the model returns its JSON wrapped in Markdown code fences (```json ... ```), so validation of the returned JSON fails. Pydantic raises the following error when trying to parse the result:

for TypeAdapter(MapperOutput); 1 validation error for MapperOutput
  Invalid JSON: expected value at line 1 column 1 [type=json_invalid, input_value='```json\n{\n    "mapping...erage classes."\n}\n```', input_type=str]
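The failure can be reproduced in isolation with Pydantic's TypeAdapter, independent of the SDK (a minimal sketch; the fenced sample string here is an assumption standing in for the model's actual output):

```python
from pydantic import BaseModel, Field, TypeAdapter, ValidationError

class MapperOutput(BaseModel):
    mapping_table: str = Field(..., description="The mapping table")
    additional_notes: str = Field(..., description="Any additional notes for the mapping.")

# Model output wrapped in Markdown code fences, as the custom provider returns it.
fenced = '```json\n{"mapping_table": "A -> B", "additional_notes": "coverage classes."}\n```'

try:
    TypeAdapter(MapperOutput).validate_json(fenced)
except ValidationError as e:
    # The leading ``` is not valid JSON, so parsing fails at line 1 column 1.
    print(e)
```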

Debug information

  • Agents SDK version: v0.2.3
  • Python version: v3.12.0

Repro steps

from pydantic import BaseModel, Field
from agents import Agent, OpenAIChatCompletionsModel

class MapperOutput(BaseModel):
    mapping_table: str = Field(..., description="The mapping table")
    additional_notes: str = Field(..., description="Any additional notes for the mapping.")

# mapper_agent_instructions, model, and client are defined elsewhere.
mapper_agent = Agent(
    name="mapper_agent",
    instructions=mapper_agent_instructions,
    output_type=MapperOutput,
    model=OpenAIChatCompletionsModel(model=model, openai_client=client),
)

Expected behavior

The output_type mechanism should be robust to code block markers and successfully parse the JSON, or should ensure that the model output is stripped of such markers before attempting to parse.
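Until the SDK handles this, a possible workaround is to strip the fences before parsing. A minimal sketch (the helper name strip_code_fences is hypothetical, not part of the SDK):

```python
import json
import re

def strip_code_fences(text: str) -> str:
    """Remove a leading ```json (or bare ```) fence and a trailing ``` fence, if present."""
    match = re.match(r"^\s*```(?:json)?\s*\n(.*)\n\s*```\s*$", text, re.DOTALL)
    return match.group(1) if match else text

raw = '```json\n{"mapping_table": "A -> B", "additional_notes": "coverage classes."}\n```'
data = json.loads(strip_code_fences(raw))
print(data["mapping_table"])  # A -> B
```

Unfenced output passes through unchanged, so the helper is safe to apply unconditionally before validation.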
