Releases: ExtensityAI/symbolicai
v0.17.3
SymbolicAI v0.17.3 – Release Notes
Bug Fixes
- Fixed recursion depth handling - Added a recursion depth limit (max 50) to prevent stack overflow in deeply nested structures
- Fixed circular reference detection - Improved detection in dictionaries and nested models with proper visited-set tracking (see the sketch below)
- Fixed JSON key type coercion - Removed manual string-to-int key conversion; now relies on Pydantic's built-in type coercion (`strict=False`)
- Fixed example generation format - Switched examples from Python to JSON format for better LLM compatibility
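A minimal sketch of the two guards mentioned above (depth cap plus visited-set cycle detection); `render` and `MAX_DEPTH` are illustrative names, not the library's internals:

```python
MAX_DEPTH = 50  # mirrors the limit mentioned above

def render(value, visited=None, depth=0):
    """Walk a nested structure, guarding against cycles and runaway recursion."""
    if depth > MAX_DEPTH:
        return "<max depth reached>"
    if visited is None:
        visited = set()
    if isinstance(value, (dict, list, set, tuple)):
        obj_id = id(value)
        if obj_id in visited:          # circular reference already on this path
            return "<circular reference>"
        visited = visited | {obj_id}   # copy so sibling branches are unaffected
    if isinstance(value, dict):
        return {k: render(v, visited, depth + 1) for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [render(v, visited, depth + 1) for v in value]
    return value

# Example: a self-referential dict no longer blows the stack.
d = {"name": "root"}
d["self"] = d
print(render(d))   # {'name': 'root', 'self': '<circular reference>'}
```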
Changes
- Removed `ValidationHandlingPrimitives` and `ConstraintHandlingPrimitives` classes (moved validation to Strategy)
- Updated `strategy.py` to use Pydantic's type coercion for JSON validation (see the coercion sketch below)
- Improved `_resolve_allof_type` to handle edge cases better
- Enhanced const field descriptions in schema generation
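A small Pydantic sketch of the coercion path now relied on instead of manual key conversion (model and field names are illustrative):

```python
from pydantic import BaseModel, ConfigDict

class Inventory(BaseModel):
    # strict=False (Pydantic's lax mode) coerces compatible JSON values,
    # e.g. the string keys produced by JSON back into int keys.
    model_config = ConfigDict(strict=False)
    counts: dict[int, int]

# JSON only allows string keys, so a round-tripped payload looks like this:
payload = {"counts": {"1": 10, "2": 5}}
print(Inventory.model_validate(payload).counts)   # {1: 10, 2: 5}
```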
Testing
- Added comprehensive test coverage for circular references and depth limits
- Added negative path tests for validation edge cases
- Updated tests to use JSON format instead of Python format for examples
- Made test assertions more flexible to formatting changes
Full Changelog: v0.17.1...v0.17.3
v0.17.1
SymbolicAI v0.17.1 – Release Notes
🚀 New Features & Enhancements
Enhanced LLMDataModel Capabilities
- Advanced Type Support: Expanded support for complex Python types (illustrated in the sketch after this list), including:
  - Enum types with proper value serialization
  - Dictionary keys with non-string types (int, float, bool, tuple, frozenset)
  - Set and frozenset collections
  - Tuple types with specific element types
  - Deeply nested union types
- Improved Schema Generation:
  - More accurate and readable schema representations for complex types
  - Better handling of `allOf`, `anyOf`, and `oneOf` JSON schema constructs
  - Enhanced enum type descriptions in schemas
  - Clearer definitions for array, set, and tuple types
- Python-First Example Generation:
  - Examples now use Python syntax instead of JSON for better readability
  - Improved example generation for union types with multiple variants
  - Smart handling of dictionary keys based on their type annotations
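The kinds of annotations the expanded type support targets, shown on a plain Pydantic model for illustration (LLMDataModel subclasses behave analogously):

```python
from enum import Enum
from typing import Optional, Union
from pydantic import BaseModel

class Color(Enum):
    RED = "red"
    BLUE = "blue"

class Palette(BaseModel):
    primary: Color                          # enum field
    usage_by_year: dict[int, float]         # non-string dictionary keys
    tags: frozenset[str]                    # set/frozenset collections
    position: tuple[int, int, str]          # tuple with specific element types
    extra: Optional[Union[int, list[str]]]  # nested union/optional type

p = Palette(
    primary=Color.RED,
    usage_by_year={2024: 0.7},
    tags=frozenset({"warm"}),
    position=(1, 2, "center"),
    extra=["a", "b"],
)
# Enums dump by value in JSON mode; sets and tuples dump as lists.
print(p.model_dump(mode="json"))
```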
Anthropic Claude Integration
- New Model Support: Added support for Claude Opus 4.1 (`claude-opus-4-1`)
- Enhanced Streaming: Improved handling of raw streaming responses from the Anthropic API
🔧 Refactoring & Improvements
LLMDataModel Architecture Overhaul
- Cleaner Method Organization: Refactored into smaller, focused static methods for better maintainability
- Circular Reference Protection: Added safeguards against infinite loops in recursive models
- Performance Optimizations:
  - Added `@lru_cache` decorators for expensive operations
  - Improved visited-set tracking to prevent redundant processing
Code Quality Improvements
- Type Validation: Added `@model_validator` for const field validation (sketched below)
- JSON Serialization: Special handling for integer dictionary keys (JSON limitation workaround)
- Format Enhancements: Better string representation for complex nested structures
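A hedged sketch of what a const-field check via `@model_validator` can look like; the actual validator in the library may differ:

```python
from pydantic import BaseModel, model_validator

class Event(BaseModel):
    # const-style field: the schema pins this to exactly one allowed value
    kind: str = "user_signup"
    user: str

    @model_validator(mode="after")
    def check_const_fields(self):
        if self.kind != "user_signup":
            raise ValueError("'kind' is const and must equal 'user_signup'")
        return self

Event(user="ada")                    # ok
# Event(user="ada", kind="other")    # would raise a ValidationError
```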
🐛 Bug Fixes
Critical Fixes
- Output Processing: Fixed an issue where the `raw_output` flag was not properly respected in result limiting
- Dictionary Key Handling: Resolved JSON serialization issues with non-string dictionary keys
- Circular References: Fixed infinite recursion in deeply nested or self-referential models
Edge Case Handling
- Empty Collections: Proper formatting of empty lists and dictionaries
- None Values: Consistent handling of None values in string representations
- Special Characters: Improved handling of quotes, newlines, and unicode in field values
🧪 Test Coverage
Comprehensive Test Suite Addition
- New Test Files: Added 4 dedicated test files with 600+ lines of test coverage:
  - `test_llmdatamodel_basic.py`: Core functionality tests
  - `test_llmdatamodel_advanced.py`: Complex scenarios and edge cases
  - `test_llmdatamodel_comprehensive.py`: Full feature coverage
  - `test_llmdatamodel_integration.py`: Real-world use case simulations
Test Categories
- Type System Tests: Union types, optional fields, literals, enums
- Validation Tests: Custom validators, constraints, const fields
- Performance Tests: Large models, deep recursion, caching
- Integration Tests: API responses, GraphQL, database configs, ML model configs
- Edge Cases: Circular references, special characters, empty values
📝 Developer Experience
Improved Developer Workflow
- Better Error Messages: More descriptive validation errors with remedy suggestions
- Consistent Behavior: Unified handling of different field types across all operations
- Documentation: Enhanced docstrings and type hints throughout the codebase
Backward Compatibility
- All changes maintain backward compatibility with existing code
- Version bump to 0.17.1 indicates minor feature additions without breaking changes
🎯 Impact Summary
This release significantly enhances the `LLMDataModel` framework's capability to handle complex, real-world data structures while maintaining clean, maintainable code. The extensive test coverage ensures reliability, and the performance optimizations make it suitable for production use cases. Developers will benefit from better type support, clearer examples, and more robust error handling.
Full Changelog: v0.16.3...v0.17.1
v0.16.3
SymbolicAI v0.16.3 – Release Notes
- Fixed example generation in `instruct_llm` for user-defined models and non-value unions.
- Added test coverage for union and optional-union cases.
Full Changelog: v0.16.2...v0.16.3
v0.16.2
SymbolicAI 0.16.2 – Release Notes
Core / LLMDataModel
• Schema simplification now recognises
– dict/mapping fields via additionalProperties
– primitive alternatives inside anyOf/oneOf/union blocks
• Definitions list includes non-object union variants.
• Union handling:
– Heuristic chooses the simplest subtype for examples.
– instruct_llm emits one example per union alternative and wraps them in [[Example N]] blocks.
• build_dynamic_llm_datamodel: removed cache; clearer field description.
Strategy
• contract:
– post_remedy default switched to False.
– Robust dynamic type inference: separate input/output paths, improved error messages.
– act() signature validation added.
Full Changelog: v0.16.1...v0.16.2
v0.16.1
SymbolicAI v0.16.1 – Release Notes
🆕 Added Developer Value
Dynamic Type Annotation in Contracts
- Native Python Types in Contracts:
  The `@contract` decorator and contract system now fully support native Python types (e.g., `str`, `int`, `list[int]`, `dict[str, int]`, `Optional[...]`, `Union[...]`) as inputs and outputs for `pre`, `act`, `post`, and `forward` methods (see the sketch after this list).
- Automatic Model Wrapping:
  These types are automatically wrapped/unwrapped into internal `LLMDataModel` classes with a single `value` field for validation, reducing verbosity and enabling more concise contracts.
- Hybrid Contracts:
  Seamless mixing of `LLMDataModel` and native Python/typing types is now possible (e.g., `LLMDataModel` input and primitive list output).
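A hedged sketch of the new native-type support. The import path and method surface follow the contracts documentation of this release; treat them as assumptions and adjust to your installed version:

```python
from symai import Expression
from symai.strategy import contract   # assumed import path for the @contract decorator

@contract()
class TitleCleaner(Expression):
    # Native Python types in the signatures: no explicit LLMDataModel subclass needed.
    # Inputs/outputs are auto-wrapped into internal models with a single `value` field.
    @property
    def prompt(self) -> str:
        return "Normalize the given article title into plain title case."

    def pre(self, title: str) -> bool:
        return len(title.strip()) > 0        # cheap input validation

    def post(self, result: str) -> bool:
        return result == result.strip()      # cheap output validation

    def forward(self, title: str, **kwargs) -> str:
        # The contract machinery validates `title` and the returned str for us;
        # `contract_result` holds the validated output of the contracted call.
        return self.contract_result
```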
Graceful Failure and Type Checking Enhancements
- Graceful Degradation (`graceful=True`):
  Graceful mode suppresses exceptions and, if enabled, skips the final output type check. This allows flexible fallback behaviors in `forward` and is particularly useful for advanced error-handling scenarios.
- Improved Exception Handling Flow:
  All contract step errors are now consistently captured in `self.contract_exception`. Developers may choose to propagate these or handle them as needed in their `forward` logic (see the sketch below).
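A sketch of the fallback pattern this enables, assuming `graceful` is passed to the decorator as shown; everything else here is illustrative:

```python
from symai import Expression
from symai.strategy import contract   # assumed import path

@contract(graceful=True)               # suppress contract exceptions, skip the final type check
class RobustSummarizer(Expression):
    @property
    def prompt(self) -> str:
        return "Summarize the input in one sentence."

    def forward(self, text: str, **kwargs) -> str:
        if self.contract_exception is not None:
            # A pre/act/post step failed; fall back instead of raising.
            return f"[fallback] could not summarize: {self.contract_exception}"
        return self.contract_result
```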
Documentation and Visualization
- Comprehensive Contract Flow Diagram:
  A PNG flowchart visualizing contract execution steps has been added to the docs (`assets/images/contract_flow.png` and `docs/source/FEATURES/contracts.md`) to aid developer understanding.
- Expanded Documentation:
  Updated `contracts.md` with clear examples, an explanation of contract result/exception/fallback handling, and dynamic type annotation details.
Improved Local Engine Usability
- llama.cpp Compatibility Note:
Included tested commit and build setup references for users in the local engine documentation, increasing reproducibility.
🔧 Refactoring & Internal Improvements
Codebase Refactoring
- Centralized Dynamic Model Creation:
  Introduced `build_dynamic_llm_datamodel`, which uses Pydantic's `create_model` with LRU caching for efficient, reliable dynamic class creation (pattern illustrated below).
- Signature and Dynamic Model Checks:
  Refactored contract internals (`strategy.py`) to automatically infer and enforce input/output type models, significantly reducing boilerplate for simple contracts.
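The underlying pattern, shown generically (illustrative, not the library's exact implementation): `create_model` builds a single-`value`-field model per requested type, memoized with `lru_cache`:

```python
from functools import lru_cache
from pydantic import BaseModel, create_model

@lru_cache(maxsize=None)
def build_value_model(tp) -> type[BaseModel]:
    """Return (and cache) a model wrapping `tp` in a single `value` field."""
    return create_model(f"Value_{getattr(tp, '__name__', str(tp))}", value=(tp, ...))

IntList = build_value_model(list[int])
print(IntList(value=[1, 2, 3]).value)           # [1, 2, 3]
print(build_value_model(list[int]) is IntList)  # True, served from the cache
```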
Retry Strategy Tuning
- Smarter Retry Defaults:
  Lowered default retry delays (faster backoff, less jitter) for validation/remedy functions, resulting in speedier remediation attempts.
- Improved Pause/Backoff Logic:
  Retry backoff and jitter are now better parameterized per attempt (see the backoff sketch below).
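For reference, the general shape of a per-attempt exponential backoff with jitter (illustrative constants; the tuned defaults live in the remedy/retry helpers):

```python
import random
import time

def retry(fn, attempts=3, base_delay=0.5, jitter=0.1):
    """Call fn, waiting base_delay * 2**i plus a little jitter between attempts."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i) + random.uniform(0, jitter))
```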
Robust Logging and Exception Surface
- Consistent use of `logger.exception` across contract paths for full stack traces and easier debugging.
🐛 Bug Fixes
- Type Coercion Oddities:
  Fixed subtle type and identity propagation/validation bugs for contracts mixing LLMDataModels and primitives.
- Output Model Handling:
  Final output unwrapping and type error propagation now behave correctly in all graceful and non-graceful scenarios.
- Constructor/Server Behavior:
  Dual writes for SYMAI server configuration ensure consistency regardless of startup location.
- Splash Screen Formatting:
  UI cleanup in the interactive menu for a more polished user experience.
🧪 Test Coverage
- Extensive Unit Testing for Dynamic Types:
  Added robust tests for all dynamic annotation scenarios:
  - `str`, `int`, `list`, `dict`, `Optional`, `Union`, and nested types
  - Hybrid contracts mixing LLMDataModels and primitives
  - Graceful vs. strict output checking
  - Remedy behavior and error propagation
- Multiple Test Cases Improved:
  Expanded and cleaned up contract tests for state, error paths, edge cases, and fallback handling.
- Visualization Flow Invariant:
  The flowchart confirms the correctness of the new, more generic contract machinery.
🚀 Upgrade Guide
- For Developers of Contract-Decorated Classes:
  - You may now use standard Python/typing signatures directly anywhere in your contract methods!
  - Older explicit LLMDataModel-based code remains fully compatible.
- For Library Extenders:
  - Use `build_dynamic_llm_datamodel()` to support arbitrary Python/typing types in new symbolic contract-based features.
Full Changelog: v0.15.1...v0.16.1
v0.15.1
Release Notes – v0.15.1
- Fix: Contract pre- and post-validation now correctly propagate input/output when no validation methods are implemented.
- Fix: Return values restored for `_validate_input` and `_validate_output` to ensure data flow.
- Test: Added coverage for skip pre/post scenarios and post-validation without remedy.
Full Changelog: v0.15.0...v0.15.1
v0.15.0
SymbolicAI v0.15.0 – Release Notes
This release is almost entirely focused on developer-experience: easier installation, simpler APIs, tighter error handling, and a fresh batch of unit-tests.
Migration effort should be low (see “Breaking changes” at the end).
Highlights / Added value for developers
• Brand-new lightweight Web-Scraping stack
– New engine `naive_webscraping` (requests + trafilatura) and matching interface (usage sketched below)
– Does not need a browser, works in headless environments and CI
– Pre-seeds common consent/age cookies and follows meta-refresh redirects
– `symai.core.scrape` decorator supersedes `fetch`
– Extensive doc updates + tutorial (`docs/ENGINES/webscraping_engine.md`)
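Typical usage of the new interface; the import path below follows the docs of this release and should be treated as an assumption if it has since moved:

```python
from symai.interfaces import Interface   # assumed import path

scraper = Interface("naive_webscraping")
page = scraper("https://example.com")     # requests + trafilatura, no browser needed
print(str(page)[:200])                    # extracted text content
```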
• First-class local Vector DB
– Dedicated interface `naive_vectordb` (wrapping the in-memory VectorDB); usage sketched below
– Supports `add`, `search`, `config(save|load|purge)` out of the box
– Embedding model auto-switches: falls back to SentenceTransformers if no remote key is present
– New default index path: `~/.symai/localdb/<index>.pkl`
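A rough usage sketch; the keyword names below are assumptions for illustration, so check the interface docs for the exact signatures of `add`, `search`, and `config(save|load|purge)`:

```python
from symai.interfaces import Interface   # assumed import path

db = Interface("naive_vectordb")
# Parameter names are illustrative assumptions, not the documented signature.
db(["The meeting is on Friday.", "Lunch at noon."], operation="add")
hits = db("When is the meeting?", operation="search", top_k=1)
print(hits)                              # plain strings, not zipped tuples
db("save", operation="config")           # persists under ~/.symai/localdb/<index>.pkl
```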
• Chatbot quality bump
– `SymbiaChat` now uses the new interfaces map (`cfg_to_interface`)
– [CRAWLER] tag replaced by [SCRAPER], [RETRIEVAL] by [FILE]
– Automatic long-term memory persistence through the new vector-db interface
– Verbose/debug mode now prints classification tags and scratch-pads via loguru
• Whisper improvements
– Device auto-fallback with helpful warnings
– `get_bins()` helper to chunk transcripts into N-minute windows
– More explicit error messages for unsupported models / missing install
• Infrastructure & packaging
– Project build now compatible with `uv` (`pyproject.toml [tool.uv]`, README snippets)
– Core dependency list trimmed (removed `accelerate`), optional extras split into fine-grained groups (`hf`, `webscraping`, `services`, …)
– Dev group (`dependency-groups.dev`) with `pytest` and `isort`
– `.symai` folder is always created under the active debug dir (fixes path mix-ups)
Refactoring & code clean-up
• Crawler deprecation
– Removed the selenium driver, driver helpers, and old interface/engine (`engine_selenium.py`, `extended.crawler`, docs)
– All decorators, prompts, docs renamed from crawler to scraper
– All decorators, prompts, docs renamed from crawler to scraper
• VectorDB internals
– Large readability pass; duplicated code removed, warnings switched to CustomUserWarning
– `VectorDBIndexEngine` refactored for clearer control flow, explicit error paths, deep-copied config, consistent naming
• Chat subsystem
– Switched to the `loguru` logger
– Memory window size, top-k and index name configurable via constructor
– Redundant functions & unused variables removed
• Setup wizard slimmed down – now only shows an intro and creates an empty config; the previous CLI questions were outdated and are gone.
Bug fixes
✓ Fixed `.symai/config` discovery when running in `SYMAI_DEBUG=1` mode
✓ `WhisperEngine` no longer crashes if `whisper` is missing – clear ImportError with pip hint
✓ `VectorDB.purge()` now clears in-memory state and removes stale files
✓ `semassert()` returns a boolean and no longer raises by default (tests rely on it)
✓ Various doc links, typos and outdated references (speech_engine → speech_to_text_engine, …)
Test coverage
`tests/engines/webscraping/…` – crawls an online page in all output formats and extracts text from a PDF
`tests/engines/index/test_naive_vectordb_engine.py` – add/search/save/load/purge happy path
`tests/engines/speech-to-text/test_whisper_engine.py` – transcription + language detection paths
`tests/engines/symbolic/test_wolframalpha_engine.py` – basic factual and math queries
→ Total lines under test up significantly; CI runtime down because Selenium was dropped.
Breaking changes
- `Interface("selenium")`, `core.fetch()` and `[CRAWLER]` prompt tags were removed.
  → Use `Interface("naive_webscraping")` and `[SCRAPER]` instead.
- `Interface("vectordb")` is replaced by `Interface("naive_vectordb")`, and operations now expect/return plain strings instead of zipped tuples.
- Optional extra names changed (`selenium` → `webscraping`; new granular groups). Update your `pip install symbolicai[…]` commands.
- Version bumped to 0.15.0 – pin accordingly in your environment files.
Enjoy the cleaner, lighter stack! Report issues or PRs on GitHub.
Full Changelog: v0.14.0...v0.15.0
v0.14.0
SymbolicAI v0.14.0 – Release Notes
High-level Summary
• New cloud provider support: Groq Cloud (Qwen-3, Llama-3.3, etc.)
• Cleaner, more consistent error messages across all LLM engines
• Significantly shorter “Getting Started” instructions – configuration is now managed via `symai.config.json` instead of a long list of environment variables
• Miscellaneous bug fixes, test improvements, and documentation pruning
────────────────────────────────────────────────────────
- New Features
• Groq Engine (`symai.backend.engines.neurosymbolic.engine_groq`)
– Full neuro-symbolic wrapper for Groq Cloud’s OpenAI-compatible API
– Supports reasoning models that expose `<think>` trace tags
– Activate with: `"NEUROSYMBOLIC_ENGINE_MODEL": "groq:qwen/qwen3-32b"`
– Token counting, vision, and parallel tool calls are explicitly marked “Not Implemented” with clear warnings
• Thinking Trace Support Expansion
– Added support for Groq reasoning models alongside Claude, Gemini, and DeepSeek
- Bug Fixes & Reliability
• All LLM engines now emit a uniform error message, `Error during generation. Caused by: <original-exception>` (replaces provider-specific messages such as “Anthropic API request failed…”)
• Search Engines (OpenAI & Perplexity): wrapped API calls in try/except to avoid unhandled crashes
- Developer / DX Improvements
• Installation & Configuration Simplification
– Removed long “export XYZ_API_KEY” snippets from README and docs
– Single command `symconfig` bootstraps the config file and prints helpful hints
– All optional dependencies (ffmpeg, selenium, etc.) now have a single warning block instead of per-OS commands
• Testing & CI
– Added `semassert()` helper for “weak assertions” that warn instead of fail when the model is flaky (see the sketch below)
– New marker `@pytest.mark.skipif(NEUROSYMBOLIC.startswith('groq'), …)` to skip tests that are not yet supported (token counting, vision)
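The idea behind the weak-assertion helper, sketched generically; the real `semassert()` lives in the test utilities and may differ:

```python
import warnings

def semassert(condition: bool, message: str = "model output did not match expectation") -> bool:
    """Weak assertion: warn instead of failing when a flaky model misbehaves."""
    if not condition:
        warnings.warn(message)
    return bool(condition)

# Usage in a test: record the outcome without aborting the whole suite.
ok = semassert("paris" in "The capital of France is Paris.".lower())
assert isinstance(ok, bool)
```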
- Documentation & House-keeping
• Removed the “Summary” section at the end of the neuro-symbolic engine doc (it was redundant)
• Pruned the large “Pattern Matching & Intelligence Operations” block from the primitives doc; moved remaining examples into relevant sections
• Updated code examples to use `print(res)` instead of `print(res.value)` for consistency with new return types
Full Changelog: v0.13.2...v0.14.0
v0.13.2
Bug Fixes
- Validation Retry Logic:
  The retry logic for validation in `strategy.py` has been improved. Previously, semantic validation errors could cause incorrect behavior, potentially skipping the final allowed attempt. The code now ensures that the maximum number of retries is respected (with `N+1` total tries), and if semantic validation fails on the last allowed attempt, a clear error is propagated and logged.
- README Example Correction:
  Fixed a bug in the `validate_some_field` function in the README example (`valid_sizes` → `valid_opts`) to correctly match the list variable name.
Improvements
- Documentation Notes:
  - Added a note in the README: the project name credits Allen Newell and Herbert Simon.
  - Added clearer information regarding enabling/disabling user warnings (`SYMAI_WARNINGS=0`) and community data collection features.
  - Cleaned up configuration documentation for clarity.
Full Changelog: v0.13.1...v0.13.2
v0.13.1
This is a patch release focused on refining the documentation to make SymbolicAI easier to use and extend.
✨ New & Improved
- Clearer `Symbol` Concepts: The README and documentation have been significantly improved to better explain the core concept of Syntactic (literal) vs. Semantic (neuro-symbolic) modes for `Symbol` objects. This should make it easier for new users to get started.
- Updated Custom Engine Guide: The guide for creating a custom engine has been updated to use the more modern `GPTXChatEngine`, reflecting current best practices.
🔧 Fixes & Changes
- Updated Search Engine Model: The default model for the Perplexity search engine (`SEARCH_ENGINE_MODEL`) has been updated from `llama-3.1-sonar-small-128k-online` to `sonar`.
📖 Documentation
- The main `README.md` has been rewritten to be more concise and welcoming, with better examples for `Symbol` and `Contract`.
- The large table of primitives in the README has been condensed, linking to the full documentation for a cleaner first impression.
Full Changelog: v0.13.0...v0.13.1