small ruff #170
Conversation
Walkthrough

Adds and refactors repository templates, CI/workflow definitions, Task runner taskfiles, linting/formatting configs (Ruff, pre-commit), repository scaffolding docs (CONTRIBUTING, CODE_OF_CONDUCT), Makefile/Taskfile targets, and a small documentation demo change across template files and workflows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Maintainer
    participant GitHub as GH
    participant TagJob as Tag
    participant BuildJob as Build
    participant ReleaseJob as Release
    participant PyPIJob as PyPI
    participant ArtifactStorage as Artifacts
    Maintainer->>GH: push tag (or trigger release)
    GH->>Tag: run tag job (create & push git tag)
    Tag-->>GH: tag created
    GH->>BuildJob: run build (needs: tag)
    BuildJob->>ArtifactStorage: upload dist/
    ArtifactStorage-->>BuildJob: upload OK
    BuildJob-->>GH: build complete
    GH->>ReleaseJob: run release (needs: tag, build)
    ReleaseJob->>ArtifactStorage: download dist/
    ReleaseJob->>GH: create GitHub Release with dist/**
    GH->>ReleaseJob: release published
    GH->>PyPIJob: run pypi (needs: build, tag)
    PyPIJob->>ArtifactStorage: download dist/
    PyPIJob->>PyPI: publish packages (if artifacts present)
    PyPI-->>PyPIJob: publish result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~35 minutes
Actionable comments posted: 18
🧹 Nitpick comments (42)
template/.github/CONTRIBUTING.md (8)
Lines 20-23: Tighten phrasing and fix capitalization ("GitHub"). Minor clarity/grammar improvements and correct brand capitalization.

```diff
-You'll need to build the project locally in order to start editing code.
-To install from source, clone the Github
-repository, navigate to its root, and run the following command:
+You'll need to build the project locally to start editing code.
+To install from source, clone the GitHub
+repository, navigate to its root, and run:
```
Lines 30-33: Use "GitHub" capitalization consistently. Brand capitalization nit.

```diff
-For those new to contributing, check out Github's
+For those new to contributing, check out GitHub's
```
Lines 41-44: Streamline CI failure guidance. More direct guidance and grammar fix.

```diff
-Python versions and different platforms. If these tests start failing,
-please
-fix your code and send another commit, which will re-trigger the tests.
+Python versions and platforms. If any test fails,
+please amend your changes and push another commit; CI will re-run automatically.
```
Lines 45-48: Clarify the feature proposal process and fix capitalization. Reduce verbiage and correct "GitHub."

```diff
-If you'd like to add a new feature, please do propose your
-change on a Github issue, to make sure
-that your priorities align with ours.
+If you'd like to add a new feature, please open a GitHub issue first
+to ensure that priorities align.
```
Lines 60-66: Name "Ruff" consistently and clarify what `make fmt` does. Minor clarity: "Ruff" capitalization and the purpose of the target.

```diff
-We use ruff to enforce our Python coding style.
+We use Ruff to enforce our Python coding style.
...
-to make sure that your changes abide by our style conventions.
+to format and lint code according to our style conventions.
```

Also applies to: 68-71
Lines 74-79: Polish the unit test placement guidance. Smoother phrasing.

```diff
-When adding tests, try to find a file in which your tests should belong;
-if you're testing a new feature, you might want to create a new test file.
+When adding tests, try to place them in an existing, relevant test module.
+For a new feature, create a new test module.
```
Lines 85-90: Add a comma for readability and shorten the wording. Minor style improvement.

```diff
-To run all unit tests run the following command:
+To run all unit tests, run:
```
Lines 92-94: Strengthen the language ("ensure" vs "make sure"). Concise, assertive wording.

```diff
-Please make sure that your change doesn't cause any
-of the unit tests to fail.
+Please ensure your changes do not cause any tests to fail.
```

template/.github/taskfiles/cleanup.yml (1)

Line 9: Broaden cleanup to common caches (ruff, mypy, coverage HTML). Optional: remove additional generated artifacts frequently present in Python projects.

```diff
-      - rm -rf dist build *.egg-info .coverage .pytest_cache
+      - rm -rf dist build *.egg-info .coverage .pytest_cache .mypy_cache .ruff_cache htmlcov .coverage.*
```

template/.pre-commit-config.yaml (2)
Lines 15-16: Confirm intentional use of Ruff's `--unsafe-fixes`. `--unsafe-fixes` can apply changes that alter semantics. If the goal is to run safe fixes on commit and only enforce checks in CI, consider splitting hooks by stage (commit vs push) or dropping `--unsafe-fixes`. If you want a staged approach, I can draft a commit-only "fix" hook and a push-only "check" hook for Ruff so CI never mutates code. Do you want me to propose that configuration?

Line 24: The MD013 disable is reasonable; consider documenting the rationale. Optional: add a short comment on why MD013 (line length) is disabled (e.g., readability of long URLs). Helps future maintainers.
template/book/marimo/demo.py (2)
Line 24: Avoid re-importing marimo; define the alias once. You import `marimo` at module scope and `marimo as mo` again within `app.setup`. Import once and reuse the alias for consistency and to aid static analysis.

```diff
-    import marimo as mo
+    # mo is already imported at module scope
```

Additionally, update the top-level import to establish the alias and use it for app creation (outside the selected lines): replace `import marimo` at line 17 with `import marimo as mo`, and `app = marimo.App()` at line 20 with `app = mo.App()`.

Line 19: Align `__generated_with` with the dependency pin. Minor nit: `__generated_with = "0.13.11-dev14"` doesn't match the header dependency `marimo==0.13.15`. Consider updating for consistency or removing the variable if it's not used.
. Consider updating for consistency or removing the variable if it’s not used.template/.github/renovate.json (1)
Lines 17-20: Timezone and schedule look reasonable. "Asia/Dubai" is a valid tz. If you expect contributors across multiple regions, consider aligning the schedule with working hours in that timezone or using the default. Otherwise, this is fine.
template/ruff.toml (2)
Lines 78-79: Fix the inaccurate comment for N803. N803 refers to argument names (not "variable names"). Adjust the comment to avoid confusion.

```diff
-"book/marimo/*.py" = ["N803", "S101"] # Allow non-lowercase variable names (N803)
-                                      # and assert statements in marimo files
+"book/marimo/*.py" = ["N803", "S101"] # Allow non-lowercase argument names (N803)
+                                      # and assert statements in marimo files
```

Lines 9-10: Jinja brace exclude pattern readability. Your pattern works, but it's hard to read. Consider using simpler, symmetric patterns for literal braces to reduce cognitive load.

```diff
-exclude = ["**/[{][{]*", "**/*[}][}]*"]
+exclude = ["**/[{][{]*", "**/[}][}]**", "**/*[{][{]*", "**/*[}][}]*"]
```

Alternatively, keep as-is if you prefer minimalism; behavior is correct.
template/.github/workflows/deptry.yml (3)
Lines 21-27: Simplify pyproject detection to avoid a separate job. You can drop the first job and use hashFiles to gate the second job, reducing workflow complexity and runtime.

```diff
-jobs:
-  check_pyproject:
-    name: Check pyproject.toml presence
-    runs-on: ubuntu-latest
-    outputs:
-      pyproject_exists: ${{ steps.check_file.outputs.exists }}
-    steps:
-      - uses: actions/checkout@v5
-
-      - name: Check for pyproject.toml
-        id: check_file
-        run: |
-          if [ -f "pyproject.toml" ]; then
-            echo "exists=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "exists=false" >> "$GITHUB_OUTPUT"
-          fi
-
-  deptry_analysis:
+jobs:
+  deptry_analysis:
     name: Run deptry if pyproject.toml exists
-    needs: check_pyproject
     runs-on: ubuntu-latest
-    if: needs.check_pyproject.outputs.pyproject_exists == 'true'
+    if: ${{ hashFiles('pyproject.toml') != '' }}
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v4
```

Also applies to: 39-44

Lines 10-16: Run on pull_request for all branches (not only main). Currently PRs are limited to main. If you also want to lint dependency issues on feature branches targeting other branches (common in monorepos or release branches), broaden the trigger.

```diff
 on:
   push:
   pull_request:
-    # Only run on pull requests targeting the main branch
-    branches: [ main ]
+    # Run on pull requests to any branch (adjust as needed)
+    branches:
+      - '**'
```

Line 21: Add concurrency to cancel superseded runs. This keeps CI queues clean when pushing frequently to the same branch.

```diff
 jobs:
+  concurrency:
+    group: deptry-${{ github.workflow }}-${{ github.ref }}
+    cancel-in-progress: true
```

template/.github/workflows/ci.yml (3)
Lines 65-68: Guard tests when no test suite is present and improve discovery. Calling pytest on tests/ will fail if the repo doesn't have that folder. Let pytest discover tests automatically, and skip gracefully if none are present.

```diff
-      - name: "Run the tests"
-        run: |
-          uv pip install pytest
-          uv run pytest tests
+      - name: "Run the tests"
+        if: ${{ hashFiles('**/test_*.py', '**/*_test.py', 'tests/**') != '' }}
+        run: |
+          uv pip install pytest
+          uv run pytest -q
```

Lines 22-27: Enable concurrency to cancel in-flight runs on the same ref. Helps keep CI responsive during rapid pushes.

```diff
 jobs:
+  concurrency:
+    group: ci-${{ github.workflow }}-${{ github.ref }}
+    cancel-in-progress: true
   test:
```

Lines 53-60: Consider using uv sync to install pytest when a pyproject is present. You already run uv sync for projects with a pyproject, and pytest is often a dev dependency there, so installing pytest via uv pip may be redundant. Optional simplification:

```diff
-      - name: "Build the virtual environment for ${{ github.repository }}"
+      - name: "Build the virtual environment for ${{ github.repository }}"
         if: hashFiles('pyproject.toml') != ''
         run: uv sync --all-extras
-
-      - name: "Run the tests"
+      - name: "Run the tests (with pyproject)"
+        if: ${{ hashFiles('pyproject.toml') != '' && hashFiles('**/test_*.py', '**/*_test.py', 'tests/**') != '' }}
+        run: |
+          uv run pytest -q
+      - name: "Run the tests (no pyproject)"
+        if: ${{ hashFiles('pyproject.toml') == '' && hashFiles('**/test_*.py', '**/*_test.py', 'tests/**') != '' }}
         run: |
           uv pip install pytest
-          uv run pytest tests
+          uv run pytest -q
```

template/.github/taskfiles/quality.yml (1)
Lines 10-16: Color variables may be unset; harmless but noisy. If the color vars (BLUE/YELLOW/GREEN/RESET) aren't exported by an included taskfile, the messages print without colors. If you want consistent colors, set safe defaults.

```yaml
tasks:
  # Safe defaults for color variables (no color if not defined)
  # shell: sh expands to empty when undefined, but define explicitly if desired:
  # BLUE="${BLUE:-}"; YELLOW="${YELLOW:-}"; GREEN="${GREEN:-}"; RESET="${RESET:-}"
  # You can set these once in build.yml to avoid repetition.
```

Alternatively, keep as-is; behavior is functionally fine.

Also applies to: 17-23, 24-35
template/.github/workflows/book.yml (2)
Lines 24-35: Consider adding concurrency to avoid overlapping book builds. Reduces wasted minutes when pushing multiple times in quick succession.

```diff
 jobs:
   book:
     runs-on: "ubuntu-latest"
     environment:
       name: github-pages  # 👈 this is the critical missing piece
+    concurrency:
+      group: pages-${{ github.ref }}
+      cancel-in-progress: true
     permissions:
```

Lines 49-64: Guard docs tasks when Taskfiles/docs aren't present. As this is a template, some consumers might not include docs tasks yet. Optional guard to fail gracefully:

```diff
-      - name: "Make the docs"
+      - name: "Make the docs"
+        if: ${{ hashFiles('.github/taskfiles/docs.yml') != '' }}
         run: |
           task docs:docs
-      - name: "Make the tests"
+      - name: "Make the tests"
+        if: ${{ hashFiles('.github/taskfiles/docs.yml') != '' }}
         run: |
           task docs:test
-      - name: "Make the notebooks"
+      - name: "Make the notebooks"
+        if: ${{ hashFiles('.github/taskfiles/docs.yml') != '' }}
         run: |
           task docs:marimushka
-      - name: "Make the book"
+      - name: "Make the book"
+        if: ${{ hashFiles('.github/taskfiles/docs.yml') != '' }}
         run: |
           task docs:book
```

template/.github/workflows/sync.yml (2)
Lines 22-24: Fix the misleading permission comment. The comment says "Needed to create releases," but this workflow pushes branches and opens PRs. Adjust it to reflect that contents: write is needed for pushing commits/branches.

```diff
 permissions:
   contents: write  # Needed to create releases
+  # NOTE: contents:write is required to push changes to sync/update-configs
+  # and allow the PR action to update branches.
```

Lines 38-50: PAT usage: ensure least privilege and consider GITHUB_TOKEN where possible. Overriding the token is fine if you need to bypass branch protection or act cross-repo, but prefer least privilege:

- Use a fine-grained PAT scoped to the single target repo with only contents:write and pull_requests:write.
- If you don't need a PAT, the default GITHUB_TOKEN with proper permissions usually suffices (add pull-requests: write if not using a PAT).

Optional hardening:

```diff
 permissions:
-  contents: write  # Needed to create releases
+  contents: write
+  pull-requests: write
```

And pin Actions to immutable SHAs to reduce supply-chain risk:

```diff
-uses: actions/checkout@v5
+uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac  # v5
-uses: peter-evans/create-pull-request@v7
+uses: peter-evans/create-pull-request@5e914681df9a14ffc974215724a1b2c8b91352b3  # v7
```

template/.github/workflows/marimo.yml (4)
Lines 7-9: Broaden triggers to PRs and manual runs. Currently this only runs on push. It's useful to validate notebooks on PRs and via manual dispatch.

```diff
 on:
-  push
+  push:
+  pull_request:
+  workflow_dispatch:
```

Lines 43-53: Guard the matrix job more robustly against unset/empty outputs. If the producer step ever fails to set the output, the equality check may evaluate unexpectedly. Prefer checking both existence and a non-empty list.

```diff
-    if: needs.list-notebooks.outputs.notebook-list != '[]'
+    if: >
+      ${{
+        needs.list-notebooks.outputs.notebook-list
+        && needs.list-notebooks.outputs.notebook-list != '[]'
+      }}
```

Lines 68-73: The step name doesn't match what it does. This step doesn't "Install uv, uvx" (they're installed above). Rename for clarity.

```diff
-      - name: Install uv, uvx and the environment
+      - name: Create env and install project deps
         run: |
           task build:install -s
```

Lines 75-78: Optional: add a timeout to notebook execution. Guard against indefinitely running notebooks.

```diff
-      - name: Run notebook
+      - name: Run notebook (with timeout)
         run: |
-          uv run "${{ matrix.notebook }}"
+          timeout 15m uv run "${{ matrix.notebook }}"
```

template/.github/taskfiles/build.yml (1)
Lines 41-47: Prefer ephemeral runners for build tools to avoid polluting the venv. Instead of installing hatch into the venv, use uvx hatch (ephemeral) or rely on a PEP 517 build (python -m build) with the build backend defined in pyproject.

```diff
-    uv pip install hatch
-    uv run hatch build
+    # Use an ephemeral runner to avoid persisting hatch in the venv
+    uvx --from hatch hatch build
```

Additional nit: the color variables BLUE/YELLOW/RED/RESET are not defined in this context. Either remove the color codes or define them in each block as done in the uv task above.
template/.github/workflows/pre-commit.yml (2)
Lines 17-19: Run on PRs and allow a manual trigger. Ensure code quality checks run for pull requests too, and allow manual runs when needed.

```diff
 on:
   push:
+  pull_request:
+  workflow_dispatch:
```

Lines 39-45: Run format before lint to reduce churn. Formatting first minimizes lint diffs/failures and shortens feedback cycles.

```diff
-      - name: Lint
-        run: |
-          task quality:lint
-
-      - name: Format
-        run: |
-          task quality:fmt
+      - name: Format
+        run: |
+          task quality:fmt
+
+      - name: Lint
+        run: |
+          task quality:lint
```
7-14
: Add an ‘all’ alias and ensure default goal works for checkmakeDefine a conventional all target (aliasing help to preserve current UX) to satisfy checkmake’s minphony rule.
.DEFAULT_GOAL := help +.PHONY: all +all: help + install: ## install task build:install
11-27
: Optional: align PHONY vs targets and keep things consistentConsider adding install/test/help to .PHONY (see previous diff) and removing “build” from .PHONY since there’s no build target here. This silences noise for tools and future contributors.
template/Taskfile.yml (2)
Lines 17-23: BLUE and CYAN use the same ANSI code (36m); set BLUE to 34m. Currently BLUE and CYAN both resolve to cyan. Use 34m for blue to avoid confusion in logs.

```diff
-  BLUE: '\x1b[36m'
+  BLUE: '\x1b[34m'
   GREEN: '\x1b[32m'
   RED: '\x1b[31m'
   YELLOW: '\x1b[33m'
   CYAN: '\x1b[36m'
   MAGENTA: '\x1b[35m'
```

Lines 10-13: Verify whether BOOK_TITLE can resolve .REPO_NAME from env at template time. Task's templating order can trip when vars reference env defined below. If BOOK_TITLE renders literally as "{{.REPO_NAME}}", move BOOK_TITLE to env, or compute it in tasks. Please confirm it resolves as intended in your environment.
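The ANSI mix-up above is easy to confirm from a shell; SGR code 34 is blue and 36 is cyan, so the two escape sequences are distinct strings (this is just an illustrative sketch, not part of the template):

```shell
#!/usr/bin/env bash
set -euo pipefail

# $'...' lets bash decode \x1b into a literal ESC byte.
BLUE=$'\x1b[34m'
CYAN=$'\x1b[36m'
RESET=$'\x1b[0m'

# Once BLUE is corrected to 34m, the two variables no longer collide.
if [ "$BLUE" != "$CYAN" ]; then
    printf '%bblue text%b and %bcyan text%b\n' "$BLUE" "$RESET" "$CYAN" "$RESET"
fi
```

With both set to `\x1b[36m` (the current Taskfile), the comparison would be equal and logs could not tell the two apart.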
template/.github/taskfiles/docs.yml (2)
Lines 50-63: Handle filenames with spaces/newlines when exporting notebooks. The current loop splits on whitespace. Use null-delimited find to robustly process paths.

```diff
-    py_files=$(find "{{.MARIMO_FOLDER}}" -name "*.py" | tr '\n' ' ')
-    if [ -z "$py_files" ]; then
+    mapfile -d '' py_files < <(find "{{.MARIMO_FOLDER}}" -name "*.py" -print0)
+    if [ "${#py_files[@]}" -eq 0 ]; then
       printf "${YELLOW}[WARN] No Python files found in '{{.MARIMO_FOLDER}}'.${RESET}\n"
       echo "<html><head><title>Marimo Notebooks</title></head><body><h1>Marimo Notebooks</h1><p>No notebooks found.</p></body></html>" > _marimushka/index.html
     else
-      printf "${BLUE}[INFO] Found Python files: %s${RESET}\n" "$py_files"
-      for py_file in $py_files; do
+      printf "${BLUE}[INFO] Found %s Python files${RESET}\n" "${#py_files[@]}"
+      for py_file in "${py_files[@]}"; do
         printf "  ${BLUE}[INFO] Processing %s...${RESET}\n" "$py_file"
         rel_path=$(echo "$py_file" | sed "s|^{{.MARIMO_FOLDER}}/||")
         dir_path=$(dirname "$rel_path")
         base_name=$(basename "$rel_path" .py)
         mkdir -p "_marimushka/$dir_path"
         uvx marimo export html --include-code --sandbox --output "_marimushka/$dir_path/$base_name.html" "$py_file"
       done
```
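The null-delimited pattern can be exercised in isolation with a small self-contained sketch (the temp folder and file names below are invented for illustration; `mapfile -d ''` needs bash >= 4.4):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build a throwaway tree containing a path with spaces,
# the case that whitespace-splitting loops get wrong.
tmp=$(mktemp -d)
mkdir -p "$tmp/sub dir"
touch "$tmp/sub dir/note book.py" "$tmp/plain.py"

# Collect null-delimited paths into an array; no word-splitting occurs,
# so "sub dir/note book.py" stays one element.
mapfile -d '' py_files < <(find "$tmp" -name '*.py' -print0)

echo "found ${#py_files[@]} files"
for py_file in "${py_files[@]}"; do
    printf 'processing: %s\n' "$py_file"
done

rm -rf "$tmp"
```

The whitespace-splitting version (`tr '\n' ' '` plus an unquoted `for`) would report three items here instead of two, breaking the export loop.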
Lines 94-110: jq is assumed to be present; add a guard or a fallback. Local environments may not have jq, causing the task to fail. Gate its usage or provide a fallback merge. Example guard (repeat for each jq usage):

```diff
-    jq '. + {"Coverage": "./tests/html-coverage/index.html"}' _book/links.json > _book/tmp && mv _book/tmp _book/links.json
+    if command -v jq >/dev/null 2>&1; then
+      jq '. + {"Coverage": "./tests/html-coverage/index.html"}' _book/links.json > _book/tmp && mv _book/tmp _book/links.json
+    else
+      # minimal fallback
+      python - <<'PY'
+import json
+p = "_book/links.json"
+data = json.load(open(p))
+data["Coverage"] = "./tests/html-coverage/index.html"
+json.dump(data, open(p, "w"))
+PY
+    fi
```

template/.github/workflows/release.yml (2)
Lines 24-37: Optional: add workflow concurrency to prevent duplicate releases per tag. Avoid accidental concurrent runs for the same tag.

```diff
 name: Release Workflow
+
+concurrency:
+  group: release-${{ github.event.inputs.tag }}
+  cancel-in-progress: false
```
Lines 44-87: Optional: pin third-party actions by commit SHA for supply-chain safety. Consider pinning actions to SHAs instead of floating tags.
Examples:
- actions/checkout@v5 → actions/checkout@
- astral-sh/setup-uv@v6 → ...@
- arduino/setup-task@v2 → ...@
- actions/upload-artifact@v4 → ...@
- actions/download-artifact@v5 → ...@
- softprops/[email protected] → ...@
- pypa/gh-action-pypi-publish@release/v1 → ...@
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)

- .github/workflows/pre-commit.yml (0 hunks)
- template/.github/CODE_OF_CONDUCT.md (1 hunks)
- template/.github/CONTRIBUTING.md (1 hunks)
- template/.github/renovate.json (1 hunks)
- template/.github/taskfiles/build.yml (1 hunks)
- template/.github/taskfiles/cleanup.yml (1 hunks)
- template/.github/taskfiles/docs.yml (1 hunks)
- template/.github/taskfiles/quality.yml (1 hunks)
- template/.github/workflows/book.yml (1 hunks)
- template/.github/workflows/ci.yml (1 hunks)
- template/.github/workflows/deptry.yml (1 hunks)
- template/.github/workflows/marimo.yml (1 hunks)
- template/.github/workflows/pre-commit.yml (1 hunks)
- template/.github/workflows/release.yml (1 hunks)
- template/.github/workflows/sync.yml (1 hunks)
- template/.pre-commit-config.yaml (1 hunks)
- template/Makefile (1 hunks)
- template/Taskfile.yml (1 hunks)
- template/book/marimo/demo.py (3 hunks)
- template/ruff.toml (1 hunks)
💤 Files with no reviewable changes (1)
- .github/workflows/pre-commit.yml
✅ Files skipped from review due to trivial changes (1)
- template/.github/CODE_OF_CONDUCT.md
🧰 Additional context used
🪛 LanguageTool
template/.github/CONTRIBUTING.md
[grammar] ~5-~5: There might be a mistake here.
Context: ...ibutions. You don't need to be an expert to help out. ## Checklist Contribution...
(QB_NEW_EN)
[style] ~20-~20: Consider a more concise word here.
Context: ...ou'll need to build the project locally in order to start editing code. To install from sou...
(IN_ORDER_TO_PREMIUM)
[grammar] ~41-~41: There might be a mistake here.
Context: ...platforms. If these tests start failing, please fix your code and send another co...
(QB_NEW_EN)
[grammar] ~42-~42: There might be a mistake here.
Context: ...ms. If these tests start failing, please fix your code and send another commit, w...
(QB_NEW_EN)
[grammar] ~45-~45: There might be a mistake here.
Context: ...dd a new feature, please do propose your change on a Github issue, to make sure t...
(QB_NEW_EN)
[grammar] ~46-~46: There might be a mistake here.
Context: ...r change on a Github issue, to make sure that your priorities align with ours. I...
(QB_NEW_EN)
[grammar] ~49-~49: There might be a mistake here.
Context: ...bute code but don't know where to start, try one of the following: - Read the so...
(QB_NEW_EN)
[grammar] ~50-~50: There might be a mistake here.
Context: ...on't know where to start, try one of the following: - Read the source and enhanc...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ... a pull request, navigate to the project root and run bash make fmt
to m...
(QB_NEW_EN)
[grammar] ~69-~69: There might be a mistake here.
Context: ... errors that are reported before sending the pull request. ## Writing unit tests...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ... file in which your tests should belong; if you're testing a new feature, you mig...
(QB_NEW_EN)
[grammar] ~80-~80: There might be a mistake here.
Context: ...//docs.pytest.org/en/) framework for our tests. ## Running unit tests We use `p...
(QB_NEW_EN)
[style] ~92-~92: Consider using a different verb to strengthen your wording.
Context: ...command: bash make test
Please make sure that your change doesn't cause any of t...
(MAKE_SURE_ENSURE)
🪛 checkmake (0.2.2)
template/Makefile
[warning] 9-9: Missing required phony target "all"
(minphony)
[warning] 9-9: Missing required phony target "test"
(minphony)
🔇 Additional comments (7)
template/book/marimo/demo.py (2)
Lines 12-15: Docstring LGTM. Good addition; concise and helpful context for the demo.

Lines 103-112: Function docstring addition looks good. Clear summary, arguments, and return value; matches the implementation.
template/.github/renovate.json (1)
Lines 2-16: Config update LGTM, with one sanity check. Enabling the "pep621" and "devcontainer" managers aligns with the broader pyproject/devcontainer adoption in this PR.

Please verify that:

- A pyproject.toml exists using PEP 621 metadata (for the pep621 manager).
- A .devcontainer directory/config exists (for the devcontainer manager).

If these aren't present in consumer repos using this template, Renovate will log non-fatal warnings. I can add conditional documentation or manager-specific notes if needed.
template/.github/workflows/deptry.yml (1)
Line 28: actions/checkout@v5 is the correct, latest stable major version; no change required. v5.0.0 was released on August 11, 2025 and is now the newest stable major. You can safely keep:

    - uses: actions/checkout@v5

Just ensure your runner is updated to at least v2.327.1 for compatibility.
template/.github/workflows/ci.yml (1)
Line 37: No changes needed for the actions/checkout version. The latest stable major release is v5 (v5.0.0, published Aug 11, 2025). Referencing `actions/checkout@v5` is correct; no rollback to v4 required. Likely an incorrect or invalid review comment.
template/.github/workflows/book.yml (1)
Line 38: No change needed for actions/checkout@v5. The current workflow reference (`actions/checkout@v5`) is already pointing at the latest stable major release (v5.0.0, published August 11, 2025). You can safely leave this as-is; no downgrade to v4 is necessary. Likely an incorrect or invalid review comment.
template/.github/workflows/sync.yml (1)
Lines 32-37: Composite action reference is valid; no changes needed. The `tschm/.config-templates@main` invocation points to the root-level `action.yml` (HTTP 200), which defines the `branch` and `commit-message` inputs, so the current step is correct as a composite action.
```yaml
uv:
  desc: Install uv and uvx
  cmds:
    - curl -sSf https://astral.sh/uv/install.sh | sh > /dev/null 2>&1 || (echo "${RED}Installation failed!${RESET}" >&2; exit 1)
    - export PATH="$HOME/.cargo/bin:$PATH"
```
🛠️ Refactor suggestion
The uv PATH export is ineffective across commands and may leave uv not found. Each cmd runs in a separate shell, so the PATH export won't persist to subsequent commands/tasks. Use a single shell block or set env at the task level.
```diff
 uv:
   desc: Install uv and uvx
   cmds:
-    - curl -sSf https://astral.sh/uv/install.sh | sh > /dev/null 2>&1 || (echo "${RED}Installation failed!${RESET}" >&2; exit 1)
-    - export PATH="$HOME/.cargo/bin:$PATH"
+    - |
+      set -euo pipefail
+      # ANSI colors (local to this block)
+      BLUE="\033[36m"; RED="\033[31m"; RESET="\033[0m"
+      curl -sSf https://astral.sh/uv/install.sh | sh > /dev/null 2>&1 \
+        || { echo -e "${RED}Installation failed!${RESET}" >&2; exit 1; }
+      export PATH="$HOME/.cargo/bin:$PATH"
+      command -v uv >/dev/null || { echo -e "${RED}uv not on PATH after install${RESET}" >&2; exit 1; }
```
🤖 Prompt for AI Agents
In template/.github/taskfiles/build.yml around lines 4 to 9, the PATH export is
ineffective because each cmds entry runs in its own shell so subsequent commands
won't see the updated PATH; fix by either (a) combining the install and PATH
export into a single shell block (e.g. join with && or use a multiline shell
block) so export persists for the next command, or (b) move the PATH addition
into the task-level env section (or global env) so all cmds inherit the updated
PATH; implement one of these options so uv is found by following commands.
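The underlying shell behavior can be demonstrated in isolation; each Task `cmds` entry behaves like the separate `sh -c` invocations below (the `/opt/fake-bin` path is just a stand-in):

```shell
#!/usr/bin/env bash
set -u

# First "command": export a PATH entry inside its own shell.
# The change is visible there...
sh -c 'export PATH="/opt/fake-bin:$PATH"
       case "$PATH" in */opt/fake-bin*) echo "first shell: visible";; esac'

# Second "command": a fresh shell inherits the parent PATH,
# so the export above is already gone.
sh -c 'case "$PATH" in
         */opt/fake-bin*) echo "second shell: visible";;
         *)               echo "second shell: gone";;
       esac'
```

This prints "first shell: visible" followed by "second shell: gone", which is exactly why a freshly installed `uv` can be missing from the next `cmds` entry unless the install and the PATH use share one shell block or a task-level env.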
```shell
# Check if .venv folder already exists
if [ -d ".venv" ]; then
  printf "${BLUE}[INFO] Virtual environment already exists, skipping installation${RESET}\n"
else
  # we need the virtual environment at least for the tests to work
  # even if we don't install anything

  # Create the virtual environment
  printf "${BLUE}[INFO] Creating virtual environment...${RESET}\n"
  uv venv --python 3.12 || { printf "${RED}[ERROR] Failed to create virtual environment${RESET}\n"; exit 1; }

  if [ -f "pyproject.toml" ]; then
    printf "${BLUE}[INFO] Installing dependencies${RESET}\n"
    uv sync --all-extras --frozen || { printf "${RED}[ERROR] Failed to install dependencies${RESET}\n"; exit 1; }
  else
    printf "${YELLOW}[WARN] No pyproject.toml found, skipping install${RESET}\n"
  fi
fi
```
🛠️ Refactor suggestion
install skips dependency sync if .venv exists
This skips uv sync entirely whenever .venv exists, which is surprising in CI and can leave stale deps. Create the venv if missing, but always run sync when pyproject.toml exists.
- # Check if .venv folder already exists
- if [ -d ".venv" ]; then
- printf "${BLUE}[INFO] Virtual environment already exists, skipping installation${RESET}\n"
- else
- # we need the virtual environment at least for the tests to work
- # even if we don't install anything
-
- # Create the virtual environment
- printf "${BLUE}[INFO] Creating virtual environment...${RESET}\n"
-
- # Create the virtual environment
- uv venv --python 3.12 || { printf "${RED}[ERROR] Failed to create virtual environment${RESET}\n"; exit 1; }
-
- if [ -f "pyproject.toml" ]; then
- printf "${BLUE}[INFO] Installing dependencies${RESET}\n"
- uv sync --all-extras --frozen || { printf "${RED}[ERROR] Failed to install dependencies${RESET}\n"; exit 1; }
- else
- printf "${YELLOW}[WARN] No pyproject.toml found, skipping install${RESET}\n"
- fi
- fi
+ # Ensure virtual environment exists
+ if [ ! -d ".venv" ]; then
+ printf "${BLUE}[INFO] Creating virtual environment...${RESET}\n"
+ uv venv --python 3.12 || { printf "${RED}[ERROR] Failed to create virtual environment${RESET}\n"; exit 1; }
+ fi
+
+ # Sync dependencies if project file exists
+ if [ -f "pyproject.toml" ]; then
+ printf "${BLUE}[INFO] Installing dependencies${RESET}\n"
+ uv sync --all-extras --frozen || { printf "${RED}[ERROR] Failed to install dependencies${RESET}\n"; exit 1; }
+ else
+ printf "${YELLOW}[WARN] No pyproject.toml found, skipping install${RESET}\n"
+ fi
🤖 Prompt for AI Agents
In template/.github/taskfiles/build.yml around lines 15 to 35, the script
currently skips running "uv sync" whenever the .venv directory exists which can
leave stale dependencies in CI; change the flow so the script creates the
virtualenv only if missing but always checks for pyproject.toml and runs "uv
sync --all-extras --frozen" (exiting with an error message on failure)
regardless of whether .venv was just created or already present, preserving the
existing log messages and exit handling.
- printf " ${BLUE}[INFO] Removing local branches with no remote counterpart...${RESET}\n"
- git fetch -p
- |
  git branch -vv | grep ': gone]' | awk '{print $$1}' | xargs -r git branch -D
🛠️ Refactor suggestion
macOS incompatibility: xargs -r isn’t portable (breaks on BSD/macOS).
xargs -r is GNU-only and will fail on macOS (“illegal option -- r”). Replace with a POSIX-compliant loop to keep the task cross-platform.
- - |
- git branch -vv | grep ': gone]' | awk '{print $$1}' | xargs -r git branch -D
+ - |
+ # Delete local branches whose upstream has been pruned.
+ # Portable across GNU/BSD (no reliance on xargs -r).
+ git branch -vv \
+ | awk '/: gone]/{print $$1}' \
+ | while read -r branch; do
+ [ -n "$branch" ] && git branch -D "$branch";
+ done
🤖 Prompt for AI Agents
In template/.github/taskfiles/cleanup.yml around line 13, the command uses
GNU-only `xargs -r` which breaks on macOS; replace it with a POSIX-compliant
loop: capture the output of `git branch -vv | grep ': gone]' | awk '{print $1}'`
and iterate with a `while read -r branch` (or `for` over the list) to call `git
branch -D "$branch"` only for non-empty names so the task works on both GNU and
BSD/macOS environments.
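As a standalone illustration of why the loop form is safe without GNU's -r flag, here is a hedged sketch; the branch names are made up and echo stands in for the real git branch -D:

```shell
# Sketch only: a POSIX `while read` loop over branch names. With empty
# input the loop body never runs, which is exactly the behavior that
# GNU `xargs -r` exists to provide.
delete_gone() {
  while read -r branch; do
    [ -n "$branch" ] && echo "would delete: $branch"
  done
}

printf '' | delete_gone                  # empty input: nothing happens
printf 'feat/x\nfix/y\n' | delete_gone   # two hypothetical branch names
```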
SOURCE_FOLDER="src/$(find src -mindepth 1 -maxdepth 1 -type d -not -path '*/\.*' | head -1 | sed 's|^src/||')"
if [ -z "$SOURCE_FOLDER" ] || [ -z "{{.TESTS_FOLDER}}" ]; then
  printf "${YELLOW}[WARN] No valid source folder structure found, skipping tests${RESET}\n"
else
  uv pip install pytest pytest-cov pytest-html
  mkdir -p _tests/html-coverage _tests/html-report
  uv run pytest {{.TESTS_FOLDER}} --cov=$SOURCE_FOLDER --cov-report=term --cov-report=html:_tests/html-coverage --html=_tests/html-report/report.html
fi
Fix SOURCE_FOLDER detection and missing tests-dir check to avoid false positives
When src is absent, SOURCE_FOLDER becomes “src/” (non-empty), so tests still run and fail. Also, we don’t verify the tests folder exists.
- SOURCE_FOLDER="src/$(find src -mindepth 1 -maxdepth 1 -type d -not -path '*/\.*' | head -1 | sed 's|^src/||')"
-
- if [ -z "$SOURCE_FOLDER" ] || [ -z "{{.TESTS_FOLDER}}" ]; then
+ SOURCE_FOLDER="$(find src -mindepth 1 -maxdepth 1 -type d -not -path '*/\.*' | head -1)"
+
+ if [ -z "$SOURCE_FOLDER" ] || [ ! -d "$SOURCE_FOLDER" ] || [ ! -d "{{.TESTS_FOLDER}}" ]; then
printf "${YELLOW}[WARN] No valid source folder structure found, skipping tests${RESET}\n"
else
uv pip install pytest pytest-cov pytest-html
mkdir -p _tests/html-coverage _tests/html-report
- uv run pytest {{.TESTS_FOLDER}} --cov=$SOURCE_FOLDER --cov-report=term --cov-report=html:_tests/html-coverage --html=_tests/html-report/report.html
+ uv run pytest "{{.TESTS_FOLDER}}" \
+ --cov="$SOURCE_FOLDER" \
+ --cov-report=term \
+ --cov-report=html:_tests/html-coverage \
+ --html=_tests/html-report/report.html
fi
🤖 Prompt for AI Agents
In template/.github/taskfiles/docs.yml around lines 16-24, the SOURCE_FOLDER
assignment and tests-dir existence checks are too permissive (SOURCE_FOLDER can
be non-empty when src is absent and we don't verify the tests folder exists);
update the SOURCE_FOLDER assignment to produce an empty value when no
subdirectory is found (e.g. set SOURCE_FOLDER="$(find src -mindepth 1 -maxdepth
1 -type d -not -path '*/\.*' -printf '%f\n' | head -1)" or wrap the find result
to set empty if none), then change the conditional to check both that
SOURCE_FOLDER is non-empty and that the tests folder actually exists (use [ -n
"$SOURCE_FOLDER" ] && [ -d "{{.TESTS_FOLDER}}" ] or [ ! -z "$SOURCE_FOLDER" ] &&
[ -d "{{.TESTS_FOLDER}}" ]), and only run pytest when both checks pass;
otherwise print the same warning and skip tests.
if [ -f "pyproject.toml" ]; then
  printf "${BLUE}[INFO] Building documentation...${RESET}\n"
  SOURCE_FOLDER=src/$(find src -mindepth 1 -maxdepth 1 -type d -not -path '*/\.*' | head -1)
  uv pip install pdoc
  uv run pdoc -o _pdoc "$SOURCE_FOLDER"
else
  printf "${YELLOW}[WARN] No pyproject.toml found, skipping docs${RESET}\n"
fi
Prevent double “src/src/...” path; validate source before invoking pdoc
As written, SOURCE_FOLDER becomes “src/src/” and will break pdoc. Also ensure the folder actually exists before running.
- if [ -f "pyproject.toml" ]; then
+ if [ -f "pyproject.toml" ]; then
printf "${BLUE}[INFO] Building documentation...${RESET}\n"
- SOURCE_FOLDER=src/$(find src -mindepth 1 -maxdepth 1 -type d -not -path '*/\.*' | head -1)
- uv pip install pdoc
- uv run pdoc -o _pdoc "$SOURCE_FOLDER"
+ SOURCE_FOLDER="$(find src -mindepth 1 -maxdepth 1 -type d -not -path '*/\.*' | head -1)"
+ if [ -n "$SOURCE_FOLDER" ] && [ -d "$SOURCE_FOLDER" ]; then
+ uv pip install pdoc
+ uv run pdoc -o _pdoc "$SOURCE_FOLDER"
+ else
+ printf "${YELLOW}[WARN] No package under src/, skipping docs${RESET}\n"
+ fi
else
printf "${YELLOW}[WARN] No pyproject.toml found, skipping docs${RESET}\n"
fi
🤖 Prompt for AI Agents
template/.github/taskfiles/docs.yml around lines 31-38: the find command returns
a path already rooted at src (e.g. "src/pkg"), so prefixing it with "src/"
produces "src/src/pkg" and breaks pdoc; change the assignment to capture the
find result without re-prefixing (e.g. use the find output directly or strip
with basename), then check that the resulting directory exists before running
pdoc and if it does not, print a warning and skip the pdoc step.
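The root cause is easy to demonstrate in isolation: find echoes results rooted at the starting directory it was given, so the match already carries the src/ prefix. The directory names below are hypothetical:

```shell
# Sketch: `find src ...` returns paths that already start with "src/",
# so prefixing the result with "src/" again yields "src/src/...".
tmp=$(mktemp -d)
mkdir -p "$tmp/src/mypkg"   # hypothetical package layout
cd "$tmp"

found=$(find src -mindepth 1 -maxdepth 1 -type d | head -1)
echo "$found"        # -> src/mypkg
echo "src/$found"    # -> src/src/mypkg (the bug)

cd / && rm -rf "$tmp"
```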
- name: Check if dist contains artifacts
  id: check_dist
  run: |
    if [[ ! -d dist ]]; then
      echo "::warning::No folder dist/. Skipping PyPI publish."
      echo "should_publish=false" >> $GITHUB_OUTPUT
    else
      echo "should_publish=true" >> $GITHUB_OUTPUT
    fi
🛠️ Refactor suggestion
Ensure dist contains files before publishing
Checking only for the directory can still publish an empty dist.
- name: Check if dist contains artifacts
id: check_dist
run: |
- if [[ ! -d dist ]]; then
+ shopt -s nullglob
+ files=(dist/*)
+ if [[ ! -d dist || ${#files[@]} -eq 0 ]]; then
echo "::warning::No folder dist/. Skipping PyPI publish."
echo "should_publish=false" >> $GITHUB_OUTPUT
else
echo "should_publish=true" >> $GITHUB_OUTPUT
fi
🤖 Prompt for AI Agents
In template/.github/workflows/release.yml around lines 126 to 134, the current
check only verifies the dist directory exists and can still publish when dist is
empty; change the run step to check that dist contains build artifacts (e.g.
wheel or sdist files) and set should_publish accordingly — use a shell test or
find/glob that looks for files like dist/* or specific patterns (dist/*.whl or
dist/*.tar.gz / dist/*.zip) and if no matching files are found echo
"::warning::No files in dist/. Skipping PyPI publish." and write
"should_publish=false" to $GITHUB_OUTPUT, otherwise write "should_publish=true".
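A self-contained way to exercise the stricter condition outside CI; the helper name and file patterns here are illustrative, not part of the workflow:

```shell
# Sketch: treat dist/ as publishable only when it holds a wheel or sdist,
# not merely when the directory exists.
has_artifacts() {
  find "$1" -maxdepth 1 \( -name '*.whl' -o -name '*.tar.gz' \) 2>/dev/null | grep -q .
}

tmp=$(mktemp -d)
mkdir -p "$tmp/dist"
has_artifacts "$tmp/dist" && echo publish || echo skip   # empty dir -> skip

touch "$tmp/dist/demo-0.1.0-py3-none-any.whl"
has_artifacts "$tmp/dist" && echo publish || echo skip   # wheel present -> publish

rm -rf "$tmp"
```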
on:
  workflow_dispatch:
  schedule:
    - cron: '0 0 * * 1' # Weekly on Monday at midnight
🛠️ Refactor suggestion
Add workflow-level concurrency to avoid overlapping runs
Weekly schedules and manual triggers can overlap with long runs, creating conflicting branch updates. Add a concurrency group to serialize executions and cancel inflight runs.
on:
workflow_dispatch:
schedule:
- cron: '0 0 * * 1' # Weekly on Monday at midnight
+
+concurrency:
+ group: sync-configs-${{ github.ref || github.run_id }}
+ cancel-in-progress: true
🤖 Prompt for AI Agents
In template/.github/workflows/sync.yml around lines 17 to 21, the workflow lacks
a workflow-level concurrency setting so scheduled and manual runs can overlap;
add a top-level concurrency block (placed under the root of the workflow,
alongside "on") that defines a unique group name (e.g., based on workflow or
repository) and sets cancel-in-progress: true so new runs cancel inflight ones
and executions are serialized.
- repo: https://github.com/rhysd/actionlint
  rev: v1.7.7
  hooks:
    - id: actionlint
      args: [ -ignore, SC ]
🛠️ Refactor suggestion
Actionlint is currently ignoring all ShellCheck diagnostics (too broad).
args: [ -ignore, SC ] matches any message containing “SC”, effectively suppressing all ShellCheck findings. If disabling ShellCheck is intended, prefer -no-shellcheck for clarity; otherwise, scope the ignore to specific patterns.
Option A (disable ShellCheck explicitly):
- - id: actionlint
- args: [ -ignore, SC ]
+ - id: actionlint
+ args: [ "-no-shellcheck" ]
Option B (narrow ignores to known false positives; example placeholder):
- - id: actionlint
- args: [ -ignore, SC ]
+ - id: actionlint
+ args:
+ # Ignore only specific false positives (example pattern).
+ - "-ignore"
+ - "SC[0-9]+: allow unquoted $GITHUB_* env in matrix expansions"
Provide the exact messages/codes you intend to ignore, and I can tailor the regex.
# Create a Python virtual environment using uv (faster alternative to venv)
uv:
	@curl -LsSf https://astral.sh/uv/install.sh | sh  # Install uv if not already installed
.PHONY: build clean book check
🛠️ Refactor suggestion
Expand .PHONY to include all public targets (and drop the nonexistent one)
Static analysis flagged missing PHONY for test, and many public targets aren’t listed. Also “build” is declared but no target exists.
-.PHONY: build clean book check
+.PHONY: all install clean test book check help
🧰 Tools
🪛 checkmake (0.2.2)
[warning] 9-9: Missing required phony target "all"
(minphony)
[warning] 9-9: Missing required phony target "test"
(minphony)
🤖 Prompt for AI Agents
In template/Makefile around line 9, the .PHONY declaration only lists "build
clean book check" and is missing public targets (like test) while also
referencing a non-existent "build" target; update the .PHONY to include every
public target defined in the Makefile (add test and any other top-level targets)
and remove "build" from .PHONY if there is no corresponding target (or
alternatively add a proper build target implementation if intended), ensuring
.PHONY exactly matches the file's exported targets.
# N: pep8-naming - Check PEP 8 naming conventions
# W: pycodestyle warnings - PEP 8 style guide warnings
# UP: pyupgrade - Upgrade syntax for newer Python
select = ["D", "E", "F", "I", "N", "W", "UP"]
💡 Verification agent
🧩 Analysis chain
Security rule ignore (S101) is ineffective without enabling Bandit (S) rules
You ignore S101 in tests and marimo files, but the S ruleset isn’t selected. This dead config can confuse future maintainers. Either enable S or drop the ignores.
Option A — enable Bandit and keep the ignores (recommended):
-select = ["D", "E", "F", "I", "N", "W", "UP"]
+select = ["D", "E", "F", "I", "N", "S", "W", "UP"]
Option B — keep current selection and remove the unusable ignores:
[lint.per-file-ignores]
-"tests/**/*.py" = ["S101"] # Allow assert statements in tests
-"book/marimo/*.py" = ["N803", "S101"] # Allow non-lowercase variable names (N803)
- # and assert statements in marimo files
+"book/marimo/*.py" = ["N803"]
If you’re unsure which path you want, I can open a follow-up PR either to enable Bandit across templates or to remove the unused ignores.
Also applies to: 77-79
🏁 Script executed:
#!/bin/bash
# Check selected ruleset in template/ruff.toml
grep -n '^select' template/ruff.toml
# Locate per-file-ignores block and list S101 entries
grep -n '\[lint.per-file-ignores\]' template/ruff.toml -A5
# Find all references to S101 in the config
grep -n 'S101' template/ruff.toml
Length of output: 604
Ineffective S101 ignores due to missing ‘S’ ruleset
The current select list at line 61 doesn’t include S, so all per-file ignores for S101 are no-ops and may confuse future maintainers. Please choose one of the following paths:
Option A — enable Bandit (recommended):
-select = ["D", "E", "F", "I", "N", "W", "UP"]
+select = ["D", "E", "F", "I", "N", "S", "W", "UP"]
Option B — drop the unused ignores:
[lint.per-file-ignores]
-"tests/**/*.py" = ["S101"] # Allow assert statements in tests
-"book/marimo/*.py" = ["N803", "S101"] # Allow non-lowercase variable names (N803) and asserts
+"book/marimo/*.py" = ["N803"] # Allow non-lowercase variable names (N803)
Let me know if you’d like a follow-up PR to implement either change.
Applies to lines 61 and 77–79.
🤖 Prompt for AI Agents
In template/ruff.toml around line 61 (and related per-file ignores at lines
77–79): the select list omits the 'S' ruleset so any per-file S101 ignores are
ineffective; either add "S" to the select array to enable Bandit checks
(recommended) or remove the unused S101 per-file ignores at lines 77–79; update
the select line to include "S" if you want Bandit enabled, or delete the S101
ignores to avoid confusing no-op ignores.
Summary by CodeRabbit
Chores
Documentation
New Features