This is a plugin for LLM that adds support for the Cerebras inference API.
Install this plugin in the same environment as LLM.
pip install llm-cerebras
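Once installed, the plugin should appear in the output of LLM's plugin listing:
llm plugins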
You'll need to provide an API key for Cerebras.
llm keys set cerebras
The plugin automatically fetches the latest available models from the Cerebras API and caches them for 24 hours.
llm models list | grep cerebras
# CerebrasModel: cerebras-llama3.1-8b
# CerebrasModel: cerebras-llama3.3-70b
# CerebrasModel: cerebras-llama-4-scout-17b-16e-instruct
# CerebrasModel: cerebras-qwen-3-32b
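With the key in place, any of these model IDs can be used directly with the -m option, for example:
llm -m cerebras-llama3.3-70b 'Write a one-line haiku about fast inference'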
To get the latest models from the Cerebras API and update the cache:
llm cerebras refresh
This fetches the current list of available models and saves it to the cache. Since the cache already expires after 24 hours, a manual refresh is only needed when you want to pick up newly released models right away.
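A refresh can be combined with the listing above to confirm any new entries appear:
llm cerebras refresh && llm models list | grep cerebras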
The llm-cerebras plugin supports schemas for structured output. You can use either compact schema syntax or full JSON Schema:
# Using compact schema syntax
llm -m cerebras-llama3.3-70b 'invent a dog' --schema 'name, age int, breed'
# Using multi-item schema for lists
llm -m cerebras-llama3.3-70b 'invent three dogs' --schema-multi 'name, age int, breed'
# Using full JSON Schema
llm -m cerebras-llama3.3-70b 'invent a dog' --schema '{
  "type": "object",
  "properties": {
    "name": {"type": "string"},
    "age": {"type": "integer"},
    "breed": {"type": "string"}
  },
  "required": ["name", "age", "breed"]
}'
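Schema responses are printed as JSON, so they can be piped straight into other command-line tools. For example, assuming jq is installed:
llm -m cerebras-llama3.3-70b 'invent a dog' --schema 'name, age int, breed' | jq .name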
You can add descriptions to your schema fields to guide the model:
llm -m cerebras-llama3.3-70b 'invent a famous scientist' --schema '
name: the full name including any titles
field: their primary field of study
year_born int: year of birth
year_died int: year of death, can be null if still alive
achievements: a list of their major achievements
'
You can save schemas as templates for reuse:
# Create a template
llm -m cerebras-llama3.3-70b --schema 'title, director, year int, genre' --save movie_template
# Use the template
llm -t movie_template 'suggest a sci-fi movie from the 1980s'
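Saved schema templates are regular LLM templates, so the usual template commands work with them:
# List saved templates
llm templates list
# Inspect a saved template
llm templates show movie_template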
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-cerebras
python -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the unit tests:
pytest tests/test_cerebras.py tests/test_schema_support.py
To run integration tests (requires a valid API key):
pytest tests/test_integration.py
To run automated user workflow tests:
pytest tests/test_automated_user.py
You can run specific test types using markers:
pytest -m "integration" # Run only integration tests
pytest -m "user" # Run only user workflow tests
License: Apache 2.0