Releases: ExtensityAI/symbolicai
v0.18.3
v0.18.2
SymbolicAI v0.18.2 – Release Notes
- Groq Engine: Fixed `reasoning_effort` parameter handling so the parameter is excluded only for models that are neither Qwen nor OpenAI.
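A minimal sketch of what that gating might look like; the helper name and the model-prefix checks below are illustrative assumptions, not the engine's actual code:

```python
# Keep `reasoning_effort` only for model families assumed to accept it and drop
# it from the request payload otherwise (the prefixes here are placeholders).
def prepare_groq_payload(model: str, **kwargs) -> dict:
    payload = dict(kwargs, model=model)
    if not model.startswith(("qwen", "openai/", "gpt-")):
        payload.pop("reasoning_effort", None)  # silently drop the unsupported parameter
    return payload

print(prepare_groq_payload("qwen-qwq-32b", reasoning_effort="default"))             # parameter kept
print(prepare_groq_payload("llama-3.3-70b-versatile", reasoning_effort="default"))  # parameter dropped
```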
Full Changelog: v0.18.1...v0.18.2
v0.18.1
SymbolicAI v0.18.1 – Release Notes
- Fixed: Improved Groq JSON Object Mode handling; prevents tool conflicts with response_format.
- Fixed: UserWarning output now escapes HTML-style tags in model text to avoid console parsing errors.
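A rough sketch of the idea behind the escaping fix; whether the library uses `html.escape` or its own replacement is not specified here, and the warning text is made up:

```python
# Escape HTML-style tags in model text before embedding it in a UserWarning,
# so downstream console rendering does not try to interpret them as markup.
import html
import warnings

model_text = "model emitted <think>...</think> tags"
warnings.warn(html.escape(model_text), UserWarning)
print(html.escape(model_text))  # -> model emitted &lt;think&gt;...&lt;/think&gt; tags
```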
Full Changelog: v0.18.0...v0.18.1
v0.18.0
SymbolicAI v0.18.0 – Release Notes
- Expanded Groq backend support:
  - Enabled `DynamicEngine`.
  - Docs: caveat on the Groq JSON mode/tool choice bug (see the sketch below).
  - Token metadata: support in `MetadataTracker` and `RuntimeInfo`.
- Misc:
  - Minor style/escaping fixes in console printing.
  - Internal refactors and warning clarity.
  - Added placeholder comments for `rich` CLI migration.
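A minimal sketch of the documented caveat, using the `groq` Python SDK directly; the model name and prompt are placeholders, and the point is simply not to combine `tools`/`tool_choice` with JSON Object Mode in the same request:

```python
# Request JSON Object Mode without tools; the client reads GROQ_API_KEY from the environment.
from groq import Groq

client = Groq()
completion = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": 'Reply with a JSON object like {"answer": "..."}.'}],
    response_format={"type": "json_object"},
    # tools=[...]  # intentionally omitted: mixing tools with json_object mode is the known conflict
)
print(completion.choices[0].message.content)
```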
Full Changelog: v0.17.7...v0.18.0
v0.17.7
SymbolicAI v0.17.7 – Release Notes
- Improved OpenAI search result citation handling:
  - Citations now use sequential integer IDs (not strings/hashes).
  - Output formatting is cleaner:
    - Removes leftover markdown link artifacts and empty parentheses.
  - Citation markers are inserted after the cited text as `[n] (title)\n` (see the sketch below).
  - Citation start/end spans now point to marker locations.
  - Citations provide normalized URLs (without `utm_` parameters).
- Added robust citation extraction from OpenAI response segments.
- Updated test suite for stricter citation and formatting checks.
- Minor API and test code simplifications.
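A self-contained sketch of the marker insertion and span bookkeeping described above; the helper and the citation structure are hypothetical, not the engine's implementation:

```python
# Insert sequential integer markers of the form "[n] (title)\n" after the cited
# text and record spans that point at the markers themselves.
def insert_citation_markers(text, citations):
    out, cursor, placed = [], 0, []
    for n, cit in enumerate(sorted(citations, key=lambda c: c["end"]), start=1):
        out.append(text[cursor:cit["end"]])
        marker = f"[{n}] ({cit['title']})\n"
        start = sum(len(part) for part in out)
        out.append(marker)
        placed.append({"id": n, "start": start, "end": start + len(marker), "url": cit["url"]})
        cursor = cit["end"]
    out.append(text[cursor:])
    return "".join(out), placed

text = "SymbolicAI combines LLMs with classical programming."
annotated, spans = insert_citation_markers(
    text, [{"end": len(text), "title": "Example Source", "url": "https://example.com/article"}]
)
print(annotated)  # original text followed by "[1] (Example Source)"
print(spans)      # span start/end point at the inserted marker, not the cited text
```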
Full Changelog: v0.17.6...v0.17.7
v0.17.6
SymbolicAI v0.17.6 – Release Notes
- Improved citation handling in OpenAI Search Engine:
  - Rewrote citation extraction and link replacement logic for robustness.
  - URLs in results are now normalized (tracking params removed; see the sketch below).
  - Reduced citation duplicates; citation IDs assigned sequentially.
  - Citation display format enhanced for clarity.
- Switched to official OpenAI Python SDK for API calls.
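A minimal sketch of the normalization step, assuming it amounts to stripping `utm_*` query parameters; the helper name is illustrative:

```python
# Remove utm_* tracking parameters from a URL while keeping the rest of the query.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def strip_tracking_params(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.lower().startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking_params("https://example.com/post?id=7&utm_source=news&utm_medium=web"))
# -> https://example.com/post?id=7
```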
Full Changelog: v0.17.5...v0.17.6
v0.17.5
SymbolicAI v0.17.5 – Release Notes
- `instruct_llm`'s `[[Definitions]]` section now lists all model fields, including root model fields.
- Fields without a `description` show a generic guidance note.
- Field-level `examples` and `example` (from `Field`) are rendered as bullet lists, even if no description is provided (see the sketch after this list).
- Preserves and displays provided field descriptions and examples in nested/complex schemas.
- Improves robustness for nested dict, list, tuple, and union types.
- Tests added to verify schema, definitions, and example rendering behaviors.
- Patch: no breaking changes.
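A small sketch of how field metadata feeds that rendering; a plain pydantic `BaseModel` is used to keep it self-contained (`LLMDataModel` is the library's pydantic-based base class), and the model itself is made up:

```python
# Fields declared with Field(description=..., examples=[...]) get their metadata
# rendered in [[Definitions]]; a field without a description falls back to a generic note.
from pydantic import BaseModel, Field

class Ticket(BaseModel):
    title: str = Field(description="Short summary of the issue",
                       examples=["Login fails on Safari"])
    priority: int = Field(examples=[1, 2, 3])  # no description: generic guidance note

# The rendered section would list both fields, roughly:
#   - title: Short summary of the issue
#       - example: "Login fails on Safari"
#   - priority: <generic guidance note>
#       - examples: 1, 2, 3
```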
Full Changelog: v0.17.4...v0.17.5
v0.17.4
SymbolicAI v0.17.4 – Release Notes
- Added support for OpenAI GPT-5 (`gpt-5`, `gpt-5-mini`, `gpt-5-nano`, and `gpt-5-chat-latest`) throughout backend and mixin logic.
- Added support for Anthropic Claude 4.1 models in ID detection logic (ID detection is sketched after this list).
- Improved handling of the `size` parameter in `engine_gpt_image`.
- Refactored and simplified `LLMDataModel`:
  - Fixed and enhanced schema formatting, especially for `allOf`, nested refs, and const fields.
  - Improved example generation, now preferring non-null values for optionals in instruct examples.
  - Removed dead code and deprecated methods.
- Updated tests to reflect changes; removed or updated outdated unit tests.
- Minor fixes and expanded model/vision handling.
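A hypothetical illustration of prefix-based model-ID detection of the kind the mixin logic performs; the function and the returned labels are made up:

```python
# Route a model identifier to a provider family by its prefix.
def detect_family(model_id: str) -> str:
    if model_id.startswith("gpt-5"):
        return "openai/gpt-5"
    if model_id.startswith(("claude-opus-4-1", "claude-4-1")):
        return "anthropic/claude-4.1"
    return "other"

for m in ("gpt-5-mini", "gpt-5-chat-latest", "claude-opus-4-1", "llama-3.3-70b"):
    print(m, "->", detect_family(m))
```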
Full Changelog: v0.17.3...v0.17.4
v0.17.3
SymbolicAI v0.17.3 – Release Notes
Bug Fixes
- Fixed recursive depth handling - Added proper recursion depth limiting (max 50) to prevent stack overflow in deeply nested structures
- Fixed circular reference detection - Improved detection in dictionaries and nested models with proper visited tracking (both fixes are sketched after this list)
- Fixed JSON key type coercion - Removed manual string-to-int key conversion, now relies on Pydantic's built-in type coercion (strict=False)
- Fixed example generation format - Changed from Python to JSON format in examples for better LLM compatibility
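An illustrative sketch of the two safeguards (depth cap and visited-set tracking); the names and the limit of 50 mirror the notes above, but the code is not the library's implementation:

```python
# Walk a nested structure with a hard recursion limit and a circular-reference guard.
MAX_DEPTH = 50

def render(value, depth=0, visited=None):
    visited = visited if visited is not None else set()
    if depth > MAX_DEPTH:
        return "<max depth reached>"
    if isinstance(value, dict):
        if id(value) in visited:           # already seen: stop instead of recursing forever
            return "<circular reference>"
        visited.add(id(value))
        return {k: render(v, depth + 1, visited) for k, v in value.items()}
    return value

node = {"name": "root"}
node["self"] = node                        # self-referential structure
print(render(node))                        # {'name': 'root', 'self': '<circular reference>'}
```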
Changes
- Removed `ValidationHandlingPrimitives` and `ConstraintHandlingPrimitives` classes (moved validation to Strategy)
- Updated `strategy.py` to use Pydantic's type coercion for JSON validation (see the example below)
- Improved `_resolve_allof_type` to handle edge cases better
- Enhanced const field descriptions in schema generation
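A hedged example of the coercion-based validation, assuming Pydantic v2: JSON object keys arrive as strings, and lax validation (`strict=False`) converts them to the annotated key type without manual handling. The model is illustrative:

```python
from pydantic import BaseModel

class Scores(BaseModel):
    by_rank: dict[int, str]

# Keys arrive as strings from JSON; lax mode coerces them to int.
parsed = Scores.model_validate({"by_rank": {"1": "gold", "2": "silver"}}, strict=False)
print(parsed.by_rank)  # {1: 'gold', 2: 'silver'}
```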
Testing
- Added comprehensive test coverage for circular references and depth limits
- Added negative path tests for validation edge cases
- Updated tests to use JSON format instead of Python format for examples
- Made test assertions more flexible to formatting changes
Full Changelog: v0.17.1...v0.17.3
v0.17.1
SymbolicAI v0.17.1 – Release Notes
🚀 New Features & Enhancements
Enhanced LLMDataModel Capabilities
- Advanced Type Support: Expanded support for complex Python types (sketched briefly after this list), including:
  - Enum types with proper value serialization
  - Dictionary keys with non-string types (int, float, bool, tuple, frozenset)
  - Set and frozenset collections
  - Tuple types with specific element types
  - Deeply nested union types
- Improved Schema Generation:
  - More accurate and readable schema representations for complex types
  - Better handling of `allOf`, `anyOf`, and `oneOf` JSON schema constructs
  - Enhanced enum type descriptions in schemas
  - Clearer definitions for array, set, and tuple types
- Python-First Example Generation:
  - Examples now use Python syntax instead of JSON for better readability
  - Improved example generation for union types with multiple variants
  - Smart handling of dictionary keys based on their type annotations
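An illustrative model exercising several of the newly supported types from the Advanced Type Support items above; the class is made up and uses plain Pydantic to stay self-contained:

```python
from enum import Enum
from pydantic import BaseModel

class Color(Enum):
    RED = "red"
    BLUE = "blue"

class Palette(BaseModel):
    primary: Color              # enum serialized by value when dumped in JSON mode
    weights: dict[int, float]   # non-string dictionary keys
    tags: frozenset[str]        # frozenset collection
    size: tuple[int, int]       # tuple with specific element types

p = Palette(primary=Color.RED, weights={1: 0.7, 2: 0.3}, tags={"warm"}, size=(800, 600))
print(p.model_dump(mode="json"))  # enum dumped as "red", collections as JSON-friendly lists
```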
Anthropic Claude Integration
- New Model Support: Added support for Claude Opus 4.1 (`claude-opus-4-1`)
- Enhanced Streaming: Improved handling of raw streaming responses from the Anthropic API
🔧 Refactoring & Improvements
LLMDataModel Architecture Overhaul
- Cleaner Method Organization: Refactored into smaller, focused static methods for better maintainability
- Circular Reference Protection: Added safeguards against infinite loops in recursive models
- Performance Optimizations:
  - Added `@lru_cache` decorators for expensive operations
  - Improved visited set tracking to prevent redundant processing
Code Quality Improvements
- Type Validation: Added `@model_validator` for const field validation
- JSON Serialization: Special handling for integer dictionary keys (JSON limitation workaround; see the example below)
- Format Enhancements: Better string representation for complex nested structures
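A hedged example of the JSON limitation and the round-trip workaround, assuming Pydantic v2: integer keys are stringified on dump (JSON object keys must be strings) and coerced back to int on validation. The model is illustrative:

```python
from pydantic import BaseModel

class Lookup(BaseModel):
    table: dict[int, str]

m = Lookup(table={1: "one", 2: "two"})
dumped = m.model_dump_json()              # keys become strings: {"table": {"1": "one", "2": "two"}}
restored = Lookup.model_validate_json(dumped)
print(restored.table)                     # {1: 'one', 2: 'two'} - keys are ints again
```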
🐛 Bug Fixes
Critical Fixes
- Output Processing: Fixed issue where the `raw_output` flag was not properly respected in result limiting
- Dictionary Key Handling: Resolved JSON serialization issues with non-string dictionary keys
- Circular References: Fixed infinite recursion in deeply nested or self-referential models
Edge Case Handling
- Empty Collections: Proper formatting of empty lists and dictionaries
- None Values: Consistent handling of None values in string representations
- Special Characters: Improved handling of quotes, newlines, and unicode in field values
🧪 Test Coverage
Comprehensive Test Suite Addition
- New Test Files: Added 4 dedicated test files with 600+ lines of test coverage:
  - `test_llmdatamodel_basic.py`: Core functionality tests
  - `test_llmdatamodel_advanced.py`: Complex scenarios and edge cases
  - `test_llmdatamodel_comprehensive.py`: Full feature coverage
  - `test_llmdatamodel_integration.py`: Real-world use case simulations
Test Categories
- Type System Tests: Union types, optional fields, literals, enums
- Validation Tests: Custom validators, constraints, const fields
- Performance Tests: Large models, deep recursion, caching
- Integration Tests: API responses, GraphQL, database configs, ML model configs
- Edge Cases: Circular references, special characters, empty values
📝 Developer Experience
Improved Developer Workflow
- Better Error Messages: More descriptive validation errors with remedy suggestions
- Consistent Behavior: Unified handling of different field types across all operations
- Documentation: Enhanced docstrings and type hints throughout the codebase
Backward Compatibility
- All changes maintain backward compatibility with existing code
- Version bump to 0.17.1 indicates minor feature additions without breaking changes
🎯 Impact Summary
This release significantly enhances the `LLMDataModel` framework's capability to handle complex, real-world data structures while maintaining clean, maintainable code. The extensive test coverage ensures reliability, and the performance optimizations make it suitable for production use cases. Developers will benefit from better type support, clearer examples, and more robust error handling.
Full Changelog: v0.16.3...v0.17.1