From 5e68f0ecc7b83a46af11c09c8fc6e19926a03701 Mon Sep 17 00:00:00 2001 From: John Lambert Date: Mon, 3 Nov 2025 15:05:41 -0500 Subject: [PATCH 1/7] Spec kit and AI docs, tasks and instructions Refine AI onboarding and workflows: * Update copilot-instructions.md with agentic workflow links and clearer pointers to src-catalog and per-folder guidance (COPILOT.md). * Tune native and installer instructions for mixed C++/CLI, WiX, and build nuances (interop, versioning, upgrade behavior, build gotchas). Spec kit improvements: * Refresh spec.md and plan.md to align with the feature-spec and bugfix agent workflows and FieldWorks conventions. Inner-loop productivity: * Extend tasks.json with quick checks for whitespace and commit message linting to mirror CI and shorten feedback loops. CI hardening for docs and future agent flows: * Add lint-docs.yml to verify COPILOT.md presence per Src/ and ensure folders are referenced in .github/src-catalog.md. * Add agent-analysis-stub.yml (disabled-by-default) to document how we will run prompts/test-failure analysis in CI later. Locally run CI checks in Powershell * Refactor scripts and add whitespace fixing algorithm * Add system to keep track of changes needed to be reflected in COPILOT.md files. git prune task --- .../chatmodes/installer-engineer.chatmode.md | 19 + .../chatmodes/managed-engineer.chatmode.md | 23 + .github/chatmodes/native-engineer.chatmode.md | 22 + .../chatmodes/technical-writer.chatmode.md | 19 + .github/check_copilot_docs.py | 381 +++++++++++++++ .github/commit-guidelines.md | 34 ++ .github/context/codebase.context.md | 16 + .github/copilot-framework-tasks.md | 67 +++ .github/copilot-instructions.md | 208 ++++++++ .github/copilot_tree_hash.py | 89 ++++ .github/detect_copilot_needed.py | 245 ++++++++++ .github/fill_copilot_frontmatter.py | 143 ++++++ .github/instructions/build.instructions.md | 24 + .../instructions/installer.instructions.md | 23 + .github/instructions/managed.instructions.md | 26 + .github/instructions/native.instructions.md | 25 + .github/instructions/testing.instructions.md | 22 + .github/memory.md | 10 + .github/option3-plan.md | 49 ++ .github/prompts/bugfix.prompt.md | 37 ++ .github/prompts/copilot-docs-update.prompt.md | 45 ++ .github/prompts/feature-spec.prompt.md | 40 ++ .github/prompts/speckit.analyze.prompt.md | 184 ++++++++ .github/prompts/speckit.checklist.prompt.md | 294 ++++++++++++ .github/prompts/speckit.clarify.prompt.md | 177 +++++++ .../prompts/speckit.constitution.prompt.md | 78 +++ .github/prompts/speckit.implement.prompt.md | 134 ++++++ .github/prompts/speckit.plan.prompt.md | 81 ++++ .github/prompts/speckit.specify.prompt.md | 249 ++++++++++ .github/prompts/speckit.tasks.prompt.md | 128 +++++ .github/prompts/test-failure-debug.prompt.md | 21 + .github/pull_request_template.md | 17 + .github/recipes/add-dialog-xworks.md | 17 + .github/recipes/extend-cellar-schema.md | 17 + .github/scaffold_copilot_markdown.py | 372 +++++++++++++++ .github/spec-templates/plan.md | 19 + .github/spec-templates/spec.md | 32 ++ .github/src-catalog.md | 201 ++++++++ .github/update-copilot-summaries.md | 329 +++++++++++++ .github/workflows/CommitMessage.yml | 94 ++-- .github/workflows/agent-analysis-stub.yml | 50 ++ .github/workflows/check-whitespace.yml | 68 +-- .github/workflows/copilot-docs-detect.yml | 34 ++ .github/workflows/link-check.yml | 23 + .github/workflows/lint-docs.yml | 62 +++ .gitignore | 1 + .specify/memory/constitution.md | 80 ++++ .../powershell/check-prerequisites.ps1 | 150 ++++++ 
.specify/scripts/powershell/common.ps1 | 141 ++++++ .../scripts/powershell/create-new-feature.ps1 | 303 ++++++++++++ .specify/scripts/powershell/setup-plan.ps1 | 63 +++ .../powershell/update-agent-context.ps1 | 444 ++++++++++++++++++ .specify/templates/agent-file-template.md | 28 ++ .specify/templates/checklist-template.md | 40 ++ .specify/templates/plan-template.md | 114 +++++ .specify/templates/spec-template.md | 124 +++++ .specify/templates/tasks-template.md | 258 ++++++++++ .vscode/settings.json | 13 + .vscode/tasks.json | 142 +++++- Build/Agent/GitHelpers.ps1 | 15 + Build/Agent/check-and-fix-whitespace.ps1 | 9 + Build/Agent/check-and-fix-whitespace.sh | 6 + Build/Agent/check-whitespace.ps1 | 95 ++++ Build/Agent/check-whitespace.sh | 72 +++ Build/Agent/commit-messages.ps1 | 35 ++ Build/Agent/commit-messages.sh | 32 ++ Build/Agent/fix-whitespace.ps1 | 60 +++ Build/Agent/fix-whitespace.sh | 36 ++ Build/Agent/lib_git.sh | 19 + Src/AppCore/COPILOT.md | 223 +++++++++ Src/CacheLight/COPILOT.md | 173 +++++++ Src/Cellar/COPILOT.md | 120 +++++ Src/Common/COPILOT.md | 106 +++++ Src/Common/Controls/COPILOT.md | 91 ++++ Src/Common/FieldWorks/COPILOT.md | 223 +++++++++ Src/Common/Filters/COPILOT.md | 228 +++++++++ Src/Common/Framework/COPILOT.md | 197 ++++++++ Src/Common/FwUtils/COPILOT.md | 104 ++++ Src/Common/RootSite/COPILOT.md | 135 ++++++ Src/Common/ScriptureUtils/COPILOT.md | 162 +++++++ Src/Common/SimpleRootSite/COPILOT.md | 192 ++++++++ Src/Common/UIAdapterInterfaces/COPILOT.md | 138 ++++++ Src/Common/ViewsInterfaces/COPILOT.md | 162 +++++++ Src/DbExtend/COPILOT.md | 120 +++++ Src/DebugProcs/COPILOT.md | 160 +++++++ Src/DocConvert/COPILOT.md | 60 +++ Src/FXT/COPILOT.md | 157 +++++++ Src/FdoUi/COPILOT.md | 159 +++++++ Src/FwCoreDlgs/COPILOT.md | 143 ++++++ Src/FwParatextLexiconPlugin/COPILOT.md | 160 +++++++ Src/FwResources/COPILOT.md | 141 ++++++ Src/GenerateHCConfig/COPILOT.md | 135 ++++++ Src/Generic/COPILOT.md | 153 ++++++ Src/InstallValidator/COPILOT.md | 123 +++++ Src/Kernel/COPILOT.md | 131 ++++++ Src/LCMBrowser/COPILOT.md | 164 +++++++ Src/LexText/COPILOT.md | 221 +++++++++ Src/LexText/Discourse/COPILOT.md | 193 ++++++++ Src/LexText/FlexPathwayPlugin/COPILOT.md | 133 ++++++ Src/LexText/Interlinear/COPILOT.md | 192 ++++++++ Src/LexText/LexTextControls/COPILOT.md | 196 ++++++++ Src/LexText/LexTextDll/COPILOT.md | 152 ++++++ Src/LexText/LexTextExe/COPILOT.md | 106 +++++ Src/LexText/Lexicon/COPILOT.md | 169 +++++++ Src/LexText/Morphology/COPILOT.md | 168 +++++++ Src/LexText/ParserCore/COPILOT.md | 331 +++++++++++++ Src/LexText/ParserUI/COPILOT.md | 342 ++++++++++++++ Src/ManagedLgIcuCollator/COPILOT.md | 231 +++++++++ Src/ManagedVwDrawRootBuffered/COPILOT.md | 214 +++++++++ Src/ManagedVwWindow/COPILOT.md | 199 ++++++++ Src/MigrateSqlDbs/COPILOT.md | 282 +++++++++++ Src/Paratext8Plugin/COPILOT.md | 270 +++++++++++ Src/ParatextImport/COPILOT.md | 296 ++++++++++++ Src/ProjectUnpacker/COPILOT.md | 247 ++++++++++ Src/Transforms/COPILOT.md | 300 ++++++++++++ Src/UnicodeCharEditor/COPILOT.md | 299 ++++++++++++ Src/Utilities/COPILOT.md | 235 +++++++++ Src/Utilities/FixFwData/COPILOT.md | 182 +++++++ Src/Utilities/FixFwDataDll/COPILOT.md | 226 +++++++++ Src/Utilities/MessageBoxExLib/COPILOT.md | 183 ++++++++ Src/Utilities/Reporting/COPILOT.md | 120 +++++ Src/Utilities/SfmStats/COPILOT.md | 102 ++++ Src/Utilities/SfmToXml/COPILOT.md | 241 ++++++++++ Src/Utilities/XMLUtils/COPILOT.md | 144 ++++++ Src/XCore/COPILOT.md | 197 ++++++++ Src/XCore/FlexUIAdapter/COPILOT.md | 147 ++++++ 
Src/XCore/SilSidePane/COPILOT.md | 143 ++++++ Src/XCore/xCoreInterfaces/COPILOT.md | 152 ++++++ Src/XCore/xCoreTests/COPILOT.md | 111 +++++ Src/views/COPILOT.md | 195 ++++++++ Src/xWorks/COPILOT.md | 381 +++++++++++++++ 131 files changed, 17780 insertions(+), 108 deletions(-) create mode 100644 .github/chatmodes/installer-engineer.chatmode.md create mode 100644 .github/chatmodes/managed-engineer.chatmode.md create mode 100644 .github/chatmodes/native-engineer.chatmode.md create mode 100644 .github/chatmodes/technical-writer.chatmode.md create mode 100644 .github/check_copilot_docs.py create mode 100644 .github/commit-guidelines.md create mode 100644 .github/context/codebase.context.md create mode 100644 .github/copilot-framework-tasks.md create mode 100644 .github/copilot-instructions.md create mode 100644 .github/copilot_tree_hash.py create mode 100644 .github/detect_copilot_needed.py create mode 100644 .github/fill_copilot_frontmatter.py create mode 100644 .github/instructions/build.instructions.md create mode 100644 .github/instructions/installer.instructions.md create mode 100644 .github/instructions/managed.instructions.md create mode 100644 .github/instructions/native.instructions.md create mode 100644 .github/instructions/testing.instructions.md create mode 100644 .github/memory.md create mode 100644 .github/option3-plan.md create mode 100644 .github/prompts/bugfix.prompt.md create mode 100644 .github/prompts/copilot-docs-update.prompt.md create mode 100644 .github/prompts/feature-spec.prompt.md create mode 100644 .github/prompts/speckit.analyze.prompt.md create mode 100644 .github/prompts/speckit.checklist.prompt.md create mode 100644 .github/prompts/speckit.clarify.prompt.md create mode 100644 .github/prompts/speckit.constitution.prompt.md create mode 100644 .github/prompts/speckit.implement.prompt.md create mode 100644 .github/prompts/speckit.plan.prompt.md create mode 100644 .github/prompts/speckit.specify.prompt.md create mode 100644 .github/prompts/speckit.tasks.prompt.md create mode 100644 .github/prompts/test-failure-debug.prompt.md create mode 100644 .github/pull_request_template.md create mode 100644 .github/recipes/add-dialog-xworks.md create mode 100644 .github/recipes/extend-cellar-schema.md create mode 100644 .github/scaffold_copilot_markdown.py create mode 100644 .github/spec-templates/plan.md create mode 100644 .github/spec-templates/spec.md create mode 100644 .github/src-catalog.md create mode 100644 .github/update-copilot-summaries.md create mode 100644 .github/workflows/agent-analysis-stub.yml create mode 100644 .github/workflows/copilot-docs-detect.yml create mode 100644 .github/workflows/link-check.yml create mode 100644 .github/workflows/lint-docs.yml create mode 100644 .specify/memory/constitution.md create mode 100644 .specify/scripts/powershell/check-prerequisites.ps1 create mode 100644 .specify/scripts/powershell/common.ps1 create mode 100644 .specify/scripts/powershell/create-new-feature.ps1 create mode 100644 .specify/scripts/powershell/setup-plan.ps1 create mode 100644 .specify/scripts/powershell/update-agent-context.ps1 create mode 100644 .specify/templates/agent-file-template.md create mode 100644 .specify/templates/checklist-template.md create mode 100644 .specify/templates/plan-template.md create mode 100644 .specify/templates/spec-template.md create mode 100644 .specify/templates/tasks-template.md create mode 100644 .vscode/settings.json create mode 100644 Build/Agent/GitHelpers.ps1 create mode 100644 Build/Agent/check-and-fix-whitespace.ps1 create 
mode 100644 Build/Agent/check-and-fix-whitespace.sh create mode 100644 Build/Agent/check-whitespace.ps1 create mode 100644 Build/Agent/check-whitespace.sh create mode 100644 Build/Agent/commit-messages.ps1 create mode 100644 Build/Agent/commit-messages.sh create mode 100644 Build/Agent/fix-whitespace.ps1 create mode 100644 Build/Agent/fix-whitespace.sh create mode 100644 Build/Agent/lib_git.sh create mode 100644 Src/AppCore/COPILOT.md create mode 100644 Src/CacheLight/COPILOT.md create mode 100644 Src/Cellar/COPILOT.md create mode 100644 Src/Common/COPILOT.md create mode 100644 Src/Common/Controls/COPILOT.md create mode 100644 Src/Common/FieldWorks/COPILOT.md create mode 100644 Src/Common/Filters/COPILOT.md create mode 100644 Src/Common/Framework/COPILOT.md create mode 100644 Src/Common/FwUtils/COPILOT.md create mode 100644 Src/Common/RootSite/COPILOT.md create mode 100644 Src/Common/ScriptureUtils/COPILOT.md create mode 100644 Src/Common/SimpleRootSite/COPILOT.md create mode 100644 Src/Common/UIAdapterInterfaces/COPILOT.md create mode 100644 Src/Common/ViewsInterfaces/COPILOT.md create mode 100644 Src/DbExtend/COPILOT.md create mode 100644 Src/DebugProcs/COPILOT.md create mode 100644 Src/DocConvert/COPILOT.md create mode 100644 Src/FXT/COPILOT.md create mode 100644 Src/FdoUi/COPILOT.md create mode 100644 Src/FwCoreDlgs/COPILOT.md create mode 100644 Src/FwParatextLexiconPlugin/COPILOT.md create mode 100644 Src/FwResources/COPILOT.md create mode 100644 Src/GenerateHCConfig/COPILOT.md create mode 100644 Src/Generic/COPILOT.md create mode 100644 Src/InstallValidator/COPILOT.md create mode 100644 Src/Kernel/COPILOT.md create mode 100644 Src/LCMBrowser/COPILOT.md create mode 100644 Src/LexText/COPILOT.md create mode 100644 Src/LexText/Discourse/COPILOT.md create mode 100644 Src/LexText/FlexPathwayPlugin/COPILOT.md create mode 100644 Src/LexText/Interlinear/COPILOT.md create mode 100644 Src/LexText/LexTextControls/COPILOT.md create mode 100644 Src/LexText/LexTextDll/COPILOT.md create mode 100644 Src/LexText/LexTextExe/COPILOT.md create mode 100644 Src/LexText/Lexicon/COPILOT.md create mode 100644 Src/LexText/Morphology/COPILOT.md create mode 100644 Src/LexText/ParserCore/COPILOT.md create mode 100644 Src/LexText/ParserUI/COPILOT.md create mode 100644 Src/ManagedLgIcuCollator/COPILOT.md create mode 100644 Src/ManagedVwDrawRootBuffered/COPILOT.md create mode 100644 Src/ManagedVwWindow/COPILOT.md create mode 100644 Src/MigrateSqlDbs/COPILOT.md create mode 100644 Src/Paratext8Plugin/COPILOT.md create mode 100644 Src/ParatextImport/COPILOT.md create mode 100644 Src/ProjectUnpacker/COPILOT.md create mode 100644 Src/Transforms/COPILOT.md create mode 100644 Src/UnicodeCharEditor/COPILOT.md create mode 100644 Src/Utilities/COPILOT.md create mode 100644 Src/Utilities/FixFwData/COPILOT.md create mode 100644 Src/Utilities/FixFwDataDll/COPILOT.md create mode 100644 Src/Utilities/MessageBoxExLib/COPILOT.md create mode 100644 Src/Utilities/Reporting/COPILOT.md create mode 100644 Src/Utilities/SfmStats/COPILOT.md create mode 100644 Src/Utilities/SfmToXml/COPILOT.md create mode 100644 Src/Utilities/XMLUtils/COPILOT.md create mode 100644 Src/XCore/COPILOT.md create mode 100644 Src/XCore/FlexUIAdapter/COPILOT.md create mode 100644 Src/XCore/SilSidePane/COPILOT.md create mode 100644 Src/XCore/xCoreInterfaces/COPILOT.md create mode 100644 Src/XCore/xCoreTests/COPILOT.md create mode 100644 Src/views/COPILOT.md create mode 100644 Src/xWorks/COPILOT.md diff --git a/.github/chatmodes/installer-engineer.chatmode.md 
b/.github/chatmodes/installer-engineer.chatmode.md new file mode 100644 index 0000000000..e5443c6241 --- /dev/null +++ b/.github/chatmodes/installer-engineer.chatmode.md @@ -0,0 +1,19 @@ +--- +description: 'Installer engineer for WiX (packaging, upgrades, validation)' +tools: ['search', 'editFiles', 'runTasks'] +--- +You are an installer (WiX) specialist for FieldWorks. You build and validate changes only when installer logic or packaging is affected. + +## Domain scope +- WiX .wxs/.wixproj, packaging inputs under DistFiles/, installer targets under Build/ + +## Must follow +- Read `.github/instructions/installer.instructions.md` +- Follow versioning/upgrade code policies; validate locally when touched + +## Boundaries +- CANNOT modify native or managed app code unless explicitly requested + +## Handy links +- Installer guidance: `.github/instructions/installer.instructions.md` +- CI workflows (patch/base): `.github/workflows/` diff --git a/.github/chatmodes/managed-engineer.chatmode.md b/.github/chatmodes/managed-engineer.chatmode.md new file mode 100644 index 0000000000..4d1893a482 --- /dev/null +++ b/.github/chatmodes/managed-engineer.chatmode.md @@ -0,0 +1,23 @@ +--- +description: 'Managed engineer for C# and .NET (UI, services, tests)' +tools: ['search', 'editFiles', 'runTasks', 'problems', 'testFailure'] +--- +You are a managed (C# and .NET) development specialist for FieldWorks. You work primarily in `Src/` managed projects and follow repository conventions. + +## Domain scope +- UI (WinForms/XAML) and services in managed code +- Unit/integration tests for managed components +- Resource and localization workflows (.resx, Crowdin) + +## Must follow +- Read `.github/instructions/managed.instructions.md` +- Respect `.editorconfig` and CI checks in `.github/workflows/` + +## Boundaries +- CANNOT modify native C++/C++/CLI code unless explicitly requested +- CANNOT modify installer (WiX) unless explicitly requested + +## Handy links +- Src catalog: `.github/src-catalog.md` +- Managed guidance: `.github/instructions/managed.instructions.md` +- Testing guidance: `.github/instructions/testing.instructions.md` diff --git a/.github/chatmodes/native-engineer.chatmode.md b/.github/chatmodes/native-engineer.chatmode.md new file mode 100644 index 0000000000..7ba9c8b3c9 --- /dev/null +++ b/.github/chatmodes/native-engineer.chatmode.md @@ -0,0 +1,22 @@ +--- +description: 'Native engineer for C++ and C++/CLI (interop, kernel, performance)' +tools: ['search', 'editFiles', 'runTasks', 'problems', 'testFailure'] +--- +You are a native (C++ and C++/CLI) development specialist for FieldWorks. You focus on interop boundaries, performance, and correctness. 
+ +## Domain scope +- C++/CLI bridge layers, core native libraries, interop types +- Performance-sensitive code paths, resource management + +## Must follow +- Read `.github/instructions/native.instructions.md` +- Coordinate managed/native changes across boundaries + +## Boundaries +- CANNOT modify WiX installer artifacts unless explicitly requested +- Avoid modifying managed UI unless the task requires boundary changes + +## Handy links +- Src catalog: `.github/src-catalog.md` +- Native guidance: `.github/instructions/native.instructions.md` +- Build guidance: `.github/instructions/build.instructions.md` diff --git a/.github/chatmodes/technical-writer.chatmode.md b/.github/chatmodes/technical-writer.chatmode.md new file mode 100644 index 0000000000..01d50a24eb --- /dev/null +++ b/.github/chatmodes/technical-writer.chatmode.md @@ -0,0 +1,19 @@ +--- +description: 'Technical writer for docs (developer guidance, component docs)' +tools: ['search', 'editFiles'] +--- +You write and maintain developer documentation and component guides with accuracy and minimal code changes. + +## Domain scope +- `.github/*.md`, `Src//COPILOT.md`, `.github/src-catalog.md` + +## Must follow +- Keep docs concise and aligned with repository behavior +- Update COPILOT.md when implementation diverges from docs + +## Boundaries +- CANNOT change code behavior; limit edits to docs unless explicitly requested + +## Handy links +- Onboarding: `.github/copilot-instructions.md` +- Src catalog: `.github/src-catalog.md` diff --git a/.github/check_copilot_docs.py b/.github/check_copilot_docs.py new file mode 100644 index 0000000000..87150a993b --- /dev/null +++ b/.github/check_copilot_docs.py @@ -0,0 +1,381 @@ +#!/usr/bin/env python3 +""" +check_copilot_docs.py — Validate Src/**/COPILOT.md against the canonical skeleton + +Checks: +- Frontmatter: last-reviewed, last-reviewed-tree, status +- last-reviewed-tree not FIXME and matches the current git tree hash +- Required headings present +- References entries appear to map to real files in repo (best-effort) + +Usage: + python .github/check_copilot_docs.py [--root ] [--fail] [--json ] [--verbose] + [--only-changed] [--base ] [--head ] [--since ] + +Exit codes: + 0 = no issues + 1 = warnings (non-fatal) and no --fail provided + 2 = failures when --fail provided +""" +import argparse +import json +import os +import re +import sys +import subprocess +from pathlib import Path + +from copilot_tree_hash import compute_folder_tree_hash + +REQUIRED_HEADINGS = [ + "Purpose", + "Architecture", + "Key Components", + "Technology Stack", + "Dependencies", + "Interop & Contracts", + "Threading & Performance", + "Config & Feature Flags", + "Build Information", + "Interfaces and Data Models", + "Entry Points", + "Test Index", + "Usage Hints", + "Related Folders", + "References", +] + +REFERENCE_EXTS = { + ".cs", + ".cpp", + ".cc", + ".c", + ".h", + ".hpp", + ".ixx", + ".xml", + ".xsl", + ".xslt", + ".xsd", + ".dtd", + ".xaml", + ".resx", + ".config", + ".csproj", + ".vcxproj", + ".props", + ".targets", +} + +PLACEHOLDER_PREFIXES = ("tbd",) + + +def find_repo_root(start: Path) -> Path: + p = start.resolve() + while p != p.parent: + if (p / ".git").exists(): + return p + p = p.parent + return start.resolve() + + +def run(cmd, cwd=None): + return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT).decode( + "utf-8", errors="replace" + ) + + +def git_changed_files( + root: Path, base: str = None, head: str = "HEAD", since: str = None +): + if since: + diff_range = f"{since}..{head}" + 
elif base: + # Ensure origin/ prefix if a bare branch name is provided + if not base.startswith("origin/") and "/" not in base: + base = f"origin/{base}" + diff_range = f"{base}..{head}" + else: + # Fallback: compare to merge-base with origin/HEAD (best effort) + try: + mb = run(["git", "merge-base", head, "origin/HEAD"], cwd=str(root)).strip() + diff_range = f"{mb}..{head}" + except Exception: + diff_range = f"HEAD~1..{head}" + out = run(["git", "diff", "--name-only", diff_range], cwd=str(root)) + return [l.strip().replace("\\", "/") for l in out.splitlines() if l.strip()] + + +def index_repo_files(root: Path): + index = {} + for dirpath, dirnames, filenames in os.walk(root): + # Skip some big or irrelevant directories + rel = Path(dirpath).relative_to(root) + parts = rel.parts + if parts and parts[0] in { + ".git", + "packages", + "Obj", + "Output", + "Downloads", + "vagrant", + }: + continue + for f in filenames: + index.setdefault(f, []).append(os.path.join(dirpath, f)) + return index + + +def parse_frontmatter(text: str): + lines = text.splitlines() + fm = {} + if len(lines) >= 3 and lines[0].strip() == "---": + # Find closing '---' + try: + end_idx = lines[1:].index("---") + 1 + except ValueError: + # Not properly closed; try to find a line that is just '---' + end_idx = -1 + for i in range(1, min(len(lines), 100)): + if lines[i].strip() == "---": + end_idx = i + break + if end_idx == -1: + return None, text + fm_lines = lines[1:end_idx] + body = "\n".join(lines[end_idx + 1 :]) + for l in fm_lines: + l = l.strip() + if not l or l.startswith("#"): + continue + if ":" in l: + k, v = l.split(":", 1) + fm[k.strip()] = v.strip().strip('"') + return fm, body + return None, text + + +def split_sections(text: str): + sections = {} + current = None + buffer = [] + for line in text.splitlines(): + if line.startswith("## "): + if current is not None: + sections[current] = "\n".join(buffer).strip() + current = line[3:].strip() + buffer = [] + else: + if current is not None: + buffer.append(line) + if current is not None: + sections[current] = "\n".join(buffer).strip() + return sections + + +def extract_references(reference_section: str): + refs = [] + for line in reference_section.splitlines(): + for token in re.split(r"[\s,()]+", line): + if any(token.endswith(ext) for ext in REFERENCE_EXTS): + token = token.rstrip(".,;:") + refs.append(token) + return list(dict.fromkeys(refs)) + + +def maybe_placeholder(text: str) -> bool: + stripped = text.strip() + if not stripped: + return True + lowered = stripped.lower() + return any(lowered.startswith(prefix) for prefix in PLACEHOLDER_PREFIXES) + + +def validate_file(path: Path, repo_index: dict, verbose=False): + result = { + "path": str(path), + "frontmatter": { + "missing": [], + "tree_missing": False, + "tree_placeholder": False, + "tree_value": "", + }, + "headings_missing": [], + "references_missing": [], + "empty_sections": [], + "warnings": [], + "tree_mismatch": False, + "current_tree": "", + "ok": True, + } + text = path.read_text(encoding="utf-8", errors="replace") + fm, body = parse_frontmatter(text) + if not fm: + result["frontmatter"]["missing"] = [ + "last-reviewed", + "last-reviewed-tree", + "status", + ] + result["frontmatter"]["tree_missing"] = True + result["ok"] = False + else: + for key in ["last-reviewed", "last-reviewed-tree", "status"]: + if key not in fm or not fm[key]: + result["frontmatter"]["missing"].append(key) + tree_value = fm.get("last-reviewed-tree", "") + result["frontmatter"]["tree_value"] = tree_value + if not 
tree_value: + result["frontmatter"]["tree_missing"] = True + result["ok"] = False + elif tree_value.startswith("FIXME"): + result["frontmatter"]["tree_placeholder"] = True + result["ok"] = False + result["warnings"].append( + "last-reviewed-tree placeholder; regenerate frontmatter" + ) + if fm.get("last-verified-commit"): + result["warnings"].append( + "legacy last-verified-commit entry detected; rerun scaffolder" + ) + if result["frontmatter"]["missing"]: + result["ok"] = False + + sections = split_sections(body) + for h in REQUIRED_HEADINGS: + if h not in sections: + result["headings_missing"].append(h) + if result["headings_missing"]: + result["ok"] = False + + for h in REQUIRED_HEADINGS: + if h in sections: + if maybe_placeholder(sections[h]): + result["empty_sections"].append(h) + if result["empty_sections"]: + for h in result["empty_sections"]: + result["warnings"].append(f"Section '{h}' is empty or placeholder text") + + refs = extract_references(sections.get("References", "")) + for r in refs: + base = os.path.basename(r) + if base not in repo_index: + result["references_missing"].append(r) + # references_missing doesn't necessarily fail; treat as warning unless all missing + if refs and len(result["references_missing"]) == len(refs): + result["ok"] = False + + if verbose: + print(f"Checked {path}") + return result + + +def main(): + ap = argparse.ArgumentParser() + ap.add_argument("--root", default=str(find_repo_root(Path.cwd()))) + ap.add_argument("--fail", action="store_true", help="Exit non-zero on failures") + ap.add_argument( + "--json", dest="json_out", default=None, help="Write JSON report to file" + ) + ap.add_argument("--verbose", action="store_true") + ap.add_argument( + "--only-changed", + action="store_true", + help="Validate only changed COPILOT.md files", + ) + ap.add_argument( + "--base", + default=None, + help="Base git ref (e.g., origin/ or branch name)", + ) + ap.add_argument("--head", default="HEAD", help="Head ref (default HEAD)") + ap.add_argument( + "--since", default=None, help="Alternative to base/head: since this ref" + ) + args = ap.parse_args() + + root = Path(args.root).resolve() + src = root / "Src" + if not src.exists(): + print(f"ERROR: Src/ not found under {root}") + return 2 + + repo_index = index_repo_files(root) + + paths_to_check = [] + if args.only_changed: + changed = git_changed_files( + root, base=args.base, head=args.head, since=args.since + ) + for p in changed: + if p.endswith("/COPILOT.md") and p.startswith("Src/"): + paths_to_check.append(root / p) + if not paths_to_check: + paths_to_check = list(src.rglob("COPILOT.md")) + + results = [] + for copath in paths_to_check: + result = validate_file(copath, repo_index, verbose=args.verbose) + rel_parts = Path(result["path"]).relative_to(root).parts + folder_key = "/".join(rel_parts[:-1]) + result["folder"] = folder_key + results.append(result) + + for r in results: + folder_key = r.get("folder") + folder_path = root / folder_key if folder_key else None + if not folder_path or not folder_path.exists(): + r["warnings"].append( + "Folder missing for tree hash computation; verify path" + ) + r["ok"] = False + continue + try: + current_hash = compute_folder_tree_hash(root, folder_path, ref=args.head) + r["current_tree"] = current_hash + except Exception as exc: + r["warnings"].append(f"Unable to compute tree hash: {exc}") + r["ok"] = False + continue + + tree_value = r["frontmatter"].get("tree_value", "") + if tree_value and not tree_value.startswith("FIXME"): + if current_hash != tree_value: + 
r["tree_mismatch"] = True + r["warnings"].append( + "last-reviewed-tree mismatch with current folder state" + ) + r["ok"] = False + + failures = [r for r in results if not r["ok"]] + print(f"Checked {len(results)} COPILOT.md files. Failures: {len(failures)}") + for r in failures: + print(f"- {r['path']}") + if r["frontmatter"]["missing"]: + print(f" frontmatter missing: {', '.join(r['frontmatter']['missing'])}") + if r["frontmatter"].get("tree_missing"): + print(" last-reviewed-tree missing") + if r["frontmatter"].get("tree_placeholder"): + print(" last-reviewed-tree placeholder; update via scaffolder") + if r.get("tree_mismatch"): + print(" last-reviewed-tree does not match current folder hash") + if r["headings_missing"]: + print(f" headings missing: {', '.join(r['headings_missing'])}") + if r["references_missing"]: + print( + f" unresolved references: {', '.join(r['references_missing'][:10])}{' …' if len(r['references_missing'])>10 else ''}" + ) + warnings = [r for r in results if r["warnings"]] + for r in warnings: + print(f"- WARN {r['path']}: { '; '.join(r['warnings']) }") + + if args.json_out: + with open(args.json_out, "w", encoding="utf-8") as f: + json.dump(results, f, indent=2) + + if args.fail and failures: + return 2 + return 0 if not failures else 1 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/.github/commit-guidelines.md b/.github/commit-guidelines.md new file mode 100644 index 0000000000..41ba41b86e --- /dev/null +++ b/.github/commit-guidelines.md @@ -0,0 +1,34 @@ +# Commit message guidelines (CI-enforced) + +These align with the gitlint rules run in CI. + +## Subject (first line) + +- Max 72 characters. +- Use imperative mood when reasonable (e.g., "Fix crash on startup"). +- No trailing punctuation (e.g., don't end with a period). +- No tabs, no leading/trailing whitespace. + +## Body (optional) + +- Blank line after the subject. +- Wrap lines at 80 characters. +- Explain what and why over how; link issues like "Fixes #1234" when applicable. +- No hard tabs, no trailing whitespace. + +## Helpful commands (Windows PowerShell) + +```powershell +python -m pip install --upgrade gitlint +git fetch origin +gitlint --ignore body-is-missing --commits origin/.. +``` + +Replace `` with your target branch (e.g., `release/9.3`, `develop`). + +## Common examples + +- Good: "Refactor XCore event dispatch to avoid deadlock" +- Good: "Fix: avoid trailing whitespace in generated XSLT layouts" +- Avoid: "Fixes stuff." (too vague, trailing period) +- Avoid: "WIP: temp" (unclear intent, typically avoided in shared history) diff --git a/.github/context/codebase.context.md b/.github/context/codebase.context.md new file mode 100644 index 0000000000..bf2a7e0ee3 --- /dev/null +++ b/.github/context/codebase.context.md @@ -0,0 +1,16 @@ +# High-signal context for FieldWorks agents + +Use these entry points to load context efficiently without scanning the entire repo. 
+ +- Onboarding: `.github/copilot-instructions.md` +- Src catalog (overview of major folders): `.github/src-catalog.md` +- Component guides: `Src//COPILOT.md` (and subfolder COPILOT.md where present) +- Build system: `Build/FieldWorks.targets`, `Build/FieldWorks.proj`, `agent-build-fw.sh`, `FW.sln` +- Installer: `FLExInstaller/` +- Test data: `TestLangProj/` +- Localization: `crowdin.json`, `DistFiles/CommonLocalizations/` +- Documentation discipline: `.github/update-copilot-summaries.md` (three-pass workflow, COPILOT skeleton) + +Tips +- Prefer top-level scripts or FW.sln over ad-hoc project builds +- Respect CI checks (commit messages, whitespace) before pushing diff --git a/.github/copilot-framework-tasks.md b/.github/copilot-framework-tasks.md new file mode 100644 index 0000000000..4daba3de56 --- /dev/null +++ b/.github/copilot-framework-tasks.md @@ -0,0 +1,67 @@ +# AI agent framework tasks + +This checklist tracks repository updates that improve AI workflows using agentic primitives, context engineering, and spec-first development. + +## Option 1 — Docs-first primitives (low effort, high ROI) + +- [x] Create domain instructions files: + - [x] .github/instructions/managed.instructions.md + - [x] .github/instructions/native.instructions.md + - [x] .github/instructions/installer.instructions.md + - [x] .github/instructions/testing.instructions.md + - [x] .github/instructions/build.instructions.md +- [x] Add role-scoped chat modes with tool boundaries: + - [x] .github/chatmodes/managed-engineer.chatmode.md + - [x] .github/chatmodes/native-engineer.chatmode.md + - [x] .github/chatmodes/installer-engineer.chatmode.md + - [x] .github/chatmodes/technical-writer.chatmode.md +- [x] Add context and memory anchors: + - [x] .github/context/codebase.context.md + - [x] .github/memory.md +- [x] Reference these entry points from onboarding: + - [x] Link instructions, chat modes, and context in .github/copilot-instructions.md + +## Option 2 — Agentic workflows + spec-first flow (moderate effort) + +- [ ] Prompts in .github/prompts/: + - [ ] feature-spec.prompt.md (spec → plan → implement with gates; uses spec-kit) + - [ ] bugfix.prompt.md (triage → RCA → fix plan → patch + tests) + - [ ] test-failure-debug.prompt.md (parse NUnit output → targeted fixes) +- [ ] Specification templates: + - [ ] .github/spec-templates/spec.md and plan.md (or link to spec-kit) + - [ ] .github/recipes/*.md playbooks for common tasks +- [ ] Fast inner-loop tasks: + - [ ] Extend .vscode/tasks.json: quick builds (managed/native), smoke tests, whitespace/gitlint + +## Option 3 — Outer-loop automation + MCP integration (higher effort) + +- [ ] Copilot CLI/APM scaffolding: + - [ ] apm.yml: map scripts to prompts and declare MCP dependencies + - [ ] Document local usage: `apm install`, `apm run copilot-feature-spec --param specFile=...` + - [ ] GH Action to run chosen prompt on PR, post summary/comments +- [ ] MCP servers & boundaries: + - [ ] Add GitHub MCP server and Filesystem MCP (pilot set); restrict by chat mode + - [ ] Capture list and policies in `.github/context/mcp.servers.md` +- [ ] CI governance: + - [ ] lint-docs job to verify COPILOT.md presence/links and src-catalog consistency + - [ ] prompt validation job to parse `.prompt.md` frontmatter/structure +- [ ] Security & secrets: + - [ ] Use least-privilege tokens (e.g., `secrets.COPILOT_CLI_PAT`) + - [ ] Add a security review checklist for enabling new tools/servers +- [ ] Rollout strategy: + - [ ] Pilot a no-write prompt (`test-failure-debug.prompt.md`) on PRs + - [ 
] Iterate then enable selective write-capable workflows + +See: `.github/option3-plan.md` for details. + +## Notes +- Keep instructions concise and domain-scoped (use `applyTo` when appropriate). +- Follow the canonical COPILOT skeleton in `.github/update-copilot-summaries.md` and its three-pass workflow; remove scaffold leftovers when editing docs. +- Prefer fast inner-loop build/test paths for agents; reserve installer builds for when necessary. + + +## small but high-impact extras +- [ ] Add mermaid diagrams in .github/docs/architecture.md showing component relationships (Cellar/Common/XCore/xWorks), so agents can parse text-based diagrams. +- [ ] Create tests.index.md that maps each major component to its test assemblies and common scenarios (fast lookup for agents). +- [ ] Enrich each COPILOT.md with section headers that match your instructions architecture: Responsibilities, Entry points, Dependencies, Tests, Pitfalls, Extension points. Agents recognize consistent structures quickly. +- [ ] Link your CI checks in the instructions: we already added commit/whitespace/build rules and a PR template—keep those links at the top of copilot-instructions.md. \ No newline at end of file diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 0000000000..27d2d1bc82 --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,208 @@ +# Copilot coding agent onboarding guide + +This document gives you the shortest path to understand, build, test, and validate changes in this repository without exploratory searching. Trust these instructions; only search the repo if a step here is incomplete or produces an error. + +-------------------------------------------------------------------------------- + +## What this repository is + +FieldWorks (often referred to as FLEx) is a large, Windows-focused linguistics and language data management suite developed by SIL International. It includes a mix of C#/.NET managed code and native C++/C++‑CLI components, UI applications, shared libraries, installers, test assets, and localization resources. + +High-level facts: +- Project type: Large mono-repo with multiple applications/libraries and an installer +- Primary OS target: Windows +- Languages likely present: C#, C++/CLI, C++, XAML/WinForms, XML, WiX, scripts (PowerShell/Bash), JSON +- Tooling and runtimes: + - Visual Studio (C#/.NET Framework and Desktop C++ workloads) + - MSBuild + - WiX Toolset for installer (presence of FLExInstaller) + - NUnit-style test projects are common in SIL repos (expect unit/integration tests) + - Crowdin localization (crowdin.json present) + +Documentation: +- Root ReadMe directs to developer docs wiki: + - Developer documentation: https://github.com/sillsdev/FwDocumentation/wiki + +Repo size and structure: +- This is a large codebase with many projects. Prefer building via the provided build scripts or top-level solutions instead of ad-hoc builds of individual projects. 
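+
+CI also lints this documentation layout: every top-level `Src/` folder is expected to carry a `COPILOT.md` and to be referenced from `.github/src-catalog.md` (see the lint-docs workflow added with this change). A local approximation, sketch only; the real `lint-docs.yml` may also check nested folders and links:
+
+```powershell
+# Warn for any top-level Src/ folder missing COPILOT.md or absent from the catalog.
+$catalog = Get-Content .github/src-catalog.md -Raw
+Get-ChildItem Src -Directory | ForEach-Object {
+    if (-not (Test-Path (Join-Path $_.FullName 'COPILOT.md'))) {
+        Write-Warning "Missing COPILOT.md: Src/$($_.Name)"
+    }
+    if ($catalog -notmatch [regex]::Escape("Src/$($_.Name)")) {
+        Write-Warning "Not referenced in src-catalog.md: Src/$($_.Name)"
+    }
+}
+```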
+ +## Workflow quick links + +| Focus | Primary resources | +| --- | --- | +| Build & test | `agent-build-fw.sh`, `.github/instructions/build.instructions.md`, FW.sln | +| Managed code rules | `.github/instructions/managed.instructions.md`, `.github/chatmodes/managed-engineer.chatmode.md` | +| Native code rules | `.github/instructions/native.instructions.md`, `.github/chatmodes/native-engineer.chatmode.md` | +| Installer work | `.github/instructions/installer.instructions.md`, `.github/chatmodes/installer-engineer.chatmode.md` | +| Documentation upkeep | `.github/update-copilot-summaries.md` (three-pass workflow), COPILOT VS Code tasks | +| Specs & plans | `.github/prompts/`, `.github/spec-templates/`, `.specify/templates/` | + +-------------------------------------------------------------------------------- + +## Repository layout (focused) + +Top-level items you'll use most often: + +- .editorconfig — code formatting rules +- .github/ — GitHub workflows and configuration (CI runs from here) +- Build/ — build scripts, targets, and shared build infrastructure +- DistFiles/ — packaging inputs and distribution artifacts +- FLExInstaller/ — WiX-based installer project +- Include/ — shared headers/includes for native components +- Lib/ — third-party or prebuilt libraries used by the build +- Src/ — all application, library, and tooling source (see .github/src-catalog.md for folder descriptions; each folder has a COPILOT.md file with detailed documentation) +- TestLangProj/ — test data/projects used by tests and integration scenarios +- ReadMe.md — links to developer documentation wiki +- License.htm — license information +- fw.code-workspace — VS Code workspace settings + +Src/ folder structure: +- For a quick overview of all Src/ folders and subfolders, see `.github/src-catalog.md` +- For detailed information about any specific folder, see its `Src/<Folder>/COPILOT.md` file +- Some folders (Common, LexText, Utilities, XCore) have subfolders, each with their own COPILOT.md file (e.g., `Src/Common/Controls/COPILOT.md`) +- Each COPILOT.md contains: purpose, key components, dependencies, build/test information, and relationships to other folders + +Tip: Use the top-level solution or build scripts instead of building projects individually; this avoids dependency misconfiguration. + +-------------------------------------------------------------------------------- + +## AI agent entry points + +Use these pre-scoped instructions and modes to keep agents focused and reliable: + +- Instructions (domain-specific rules): + - Managed (C# and .NET): `.github/instructions/managed.instructions.md` + - Native (C++ and C++/CLI): `.github/instructions/native.instructions.md` + - Installer (WiX): `.github/instructions/installer.instructions.md` + - Testing: `.github/instructions/testing.instructions.md` + - Build: `.github/instructions/build.instructions.md` +- Chat modes (role boundaries): + - Managed engineer: `.github/chatmodes/managed-engineer.chatmode.md` + - Native engineer: `.github/chatmodes/native-engineer.chatmode.md` + - Installer engineer: `.github/chatmodes/installer-engineer.chatmode.md` + - Technical writer: `.github/chatmodes/technical-writer.chatmode.md` +- Context helpers and memory: + - High-signal context links: `.github/context/codebase.context.md` + - Repository memory (decisions/pitfalls): `.github/memory.md` + +Machine- or user-specific instructions belong in `.github/machine-specific.md`; create the file if it does not yet exist.
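+
+When a change under `Src/` lands, refresh the affected documentation frontmatter and validate it before pushing. A minimal local sequence (assumes Python 3 on PATH and that your `Src/` changes are already committed; `<target-branch>` is a placeholder for your PR's target branch):
+
+```powershell
+# Recompute last-reviewed / last-reviewed-tree, then validate changed COPILOT.md files.
+python .github/fill_copilot_frontmatter.py --status draft --ref HEAD
+python .github/check_copilot_docs.py --only-changed --base origin/<target-branch> --fail
+```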
+ +-------------------------------------------------------------------------------- + +## CI checks you must satisfy + +These run on every PR. Run the quick checks locally before pushing to avoid churn. + +**Commit messages (gitlint)** +- Subject ≤ 72 chars, no trailing punctuation, no tabs/leading/trailing whitespace. +- If you include a body: add a blank line after the subject; body lines ≤ 80 chars. +- Quick check (Windows PowerShell): + ```powershell + python -m pip install --upgrade gitlint + git fetch origin + # Replace <target-branch> with your target branch (e.g., release/9.3, develop) + gitlint --ignore body-is-missing --commits origin/<target-branch>.. + ``` +- Full rules: see `.github/commit-guidelines.md` + +**Whitespace in diffs (git log --check)** +- No trailing whitespace, no space-before-tab in indentation; end files with a newline. +- Quick checks: + ```powershell + git fetch origin + # Review all commits in your PR for whitespace errors + git log --check --pretty=format:"---% h% s" origin/<target-branch>.. + # Also check staged changes before committing + git diff --check --cached + ``` +- Configure your editor to trim trailing whitespace and insert a final newline. + +**Build and tests** +- Build and test locally before PR to avoid CI failures: + ```powershell + # From a Developer Command Prompt for VS or with env set + # Fast path: replicate CI behavior + bash ./agent-build-fw.sh + # Or MSBuild + msbuild FW.sln /m /p:Configuration=Debug + ``` +- If you change installer/config, validate those paths explicitly per the sections below. + +-------------------------------------------------------------------------------- + +## Build, test, run, lint + +Use the build guides in `.github/instructions/build.instructions.md` for full detail. Key reminders: + +- Prerequisites: Visual Studio 2022 with .NET desktop + Desktop C++ workloads, WiX 3.11.x, Git. Install optional tooling (Crowdin CLI, etc.) only when needed. +- Bootstrap: open a Developer Command Prompt, run `source ./environ`, then call `bash ./agent-build-fw.sh` to mirror CI. Use FW.sln with MSBuild/VS when iterating locally. +- Tests: follow `.github/instructions/testing.instructions.md`; run via Visual Studio Test Explorer, `dotnet test`, or `nunit3-console` as appropriate. +- Installer or config changes: execute the WiX validation steps documented in `FLExInstaller` guidance before posting a PR. +- Formatting/localization: respect `.editorconfig`, reuse existing localization patterns, and prefer incremental builds to shorten iteration. + +## Agentic workflows (prompts) and specs + +- Prompts (agentic workflows): `.github/prompts/` + - `feature-spec.prompt.md` — spec → plan → implement with validation gates + - `bugfix.prompt.md` — triage → root cause → minimal fix with gate + - `test-failure-debug.prompt.md` — parse failures and propose targeted fixes (no file edits) +- Specification templates: `.github/spec-templates/` + - `spec.md` — problem, approach, components, risks, tests, rollout + - `plan.md` — implementation plan with gates and rollback +- Recipes/playbooks: `.github/recipes/` — guided steps for common scenarios (e.g., add xWorks dialog, extend Cellar schema) + +-------------------------------------------------------------------------------- + +## CI and validation + +- GitHub Actions are defined under .github/workflows/. Pull Requests trigger validation builds and tests. +- To replicate CI locally: + - Use: source ./environ && bash ./agent-build-fw.sh + - Or run the same msbuild/test steps referenced by the workflow YAMLs.
+- Pre-merge checklist the CI approximates: + - Successful build for all targeted configurations + - Unit tests pass + - Packaging (if part of CI) + - Lint/analyzer warnings within policy thresholds + +Before submitting a PR: +- Build locally using the CI-style script if possible. +- Run unit tests relevant to your changes. +- If you touched installer/config files, verify the installer build (requires WiX). +- Ensure formatting follows .editorconfig; fix obvious analyzer/lint issues. + +-------------------------------------------------------------------------------- + +## Where to make changes + +- Core source: Src/ contains the primary C# and C++ projects. Mirror existing patterns for new code. +- Tests: Keep tests close to the code they cover (e.g., Src/.Tests). Add or update tests with behavioral changes. +- Installer changes: FLExInstaller/. +- Shared headers/libs: Include/ and Lib/ (be cautious and avoid committing large binaries unless policy allows). +- Localization: Follow existing string resource usage; do not modify crowdin.json. + +Dependencies and hidden coupling: +- Some components bridge managed and native layers (C# ↔ C++/CLI ↔ C++). When changing type definitions or interfaces at these boundaries, expect to update both managed and native code and ensure marshaling or COM interop stays correct. +- Build props/targets in Build/ and Bld/ may inject include/lib paths and compiler options; avoid bypassing these by building projects in isolation. + +### Documentation discipline +- Follow `.github/update-copilot-summaries.md` and its three-pass workflow whenever code or data changes impact a folder’s documentation. +- Use the COPILOT VS Code tasks (Detect → Propose → Validate) to keep section order canonical. +- Keep `last-reviewed-tree` in each `COPILOT.md` aligned with the folder’s current git tree (`python .github/fill_copilot_frontmatter.py --status draft --ref HEAD`). +- Record uncertainties with `FIXME()` markers instead of guessing, and clear them only after verifying against actual sources. + +-------------------------------------------------------------------------------- + +## Confidence checklist for agents + +- Prefer the top-level build flow (agent-build-fw.sh or solution-wide MSBuild) over piecemeal project builds. +- Always initialize environment via ./environ before script-based builds. +- Validate with tests in Visual Studio or via the same runners CI uses. +- Keep coding style consistent (.editorconfig, ReSharper settings). +- Touch installer/localization only when necessary, and validate those paths explicitly. +- Trust this guide; only search the repo if a command here fails or a path is missing. +-------------------------------------------------------------------------------- + +## Maintaining Src/ Folder Documentation + +Reference `.github/update-copilot-summaries.md` for the canonical skeleton and three-pass workflow. Update the relevant `COPILOT.md` whenever architecture, public contracts, or dependencies change, and leave explicit `FIXME()` markers only for facts pending verification. \ No newline at end of file diff --git a/.github/copilot_tree_hash.py b/.github/copilot_tree_hash.py new file mode 100644 index 0000000000..bf11937f2b --- /dev/null +++ b/.github/copilot_tree_hash.py @@ -0,0 +1,89 @@ +#!/usr/bin/env python3 +"""Utility helpers for computing deterministic Src/ tree hashes. 
+ +The goal is to capture the set of tracked files under a folder (excluding the +folder's COPILOT.md) and produce a stable digest that represents the code/data +state that documentation was written against. + +We hash the list of files paired with their git blob SHAs at the specified ref +(default HEAD). The working tree is not considered; callers should ensure they +run these helpers on a clean tree or handle dirty-state warnings separately. +""" +from __future__ import annotations + +import hashlib +import subprocess +from pathlib import Path +from typing import Iterable, Tuple + +__all__ = [ + "compute_folder_tree_hash", + "list_tracked_blobs", +] + + +def run(cmd: Iterable[str], cwd: Path) -> str: + """Run a subprocess and return stdout decoded as UTF-8.""" + return subprocess.check_output(cmd, cwd=str(cwd), stderr=subprocess.STDOUT).decode( + "utf-8", errors="replace" + ) + + +def list_tracked_blobs( + root: Path, folder: Path, ref: str = "HEAD" +) -> Iterable[Tuple[str, str]]: + """Yield (relative_path, blob_sha) for tracked files under ``folder``. + + ``ref`` defaults to ``HEAD``. ``folder`` must be inside ``root``. + ``COPILOT.md`` is excluded by design so the hash reflects code/data only. + """ + + rel = folder.relative_to(root).as_posix() + if not rel.startswith("Src/"): + raise ValueError(f"Folder must reside under Src/: {rel}") + + try: + output = run( + [ + "git", + "ls-tree", + "-r", + "--full-tree", + ref, + "--", + rel, + ], + cwd=root, + ) + except subprocess.CalledProcessError as exc: + raise RuntimeError( + f"Failed to list tracked files for {rel}: {exc.output.decode('utf-8', errors='replace')}" + ) from exc + + for line in output.splitlines(): + parts = line.split() + if len(parts) < 4: + continue + _, obj_type, blob_sha, *rest = parts + if obj_type != "blob": + continue + path = rest[-1] + if path.endswith("/COPILOT.md") or path == "COPILOT.md": + continue + yield path, blob_sha + + +def compute_folder_tree_hash(root: Path, folder: Path, ref: str = "HEAD") -> str: + """Compute a stable sha256 digest representing ``folder`` at ``ref``. + + The digest is the sha256 of ``"{relative_path}:{blob_sha}\n"`` for each + tracked file (sorted lexicographically) underneath ``folder`` excluding the + COPILOT.md documentation. When a folder has no tracked files besides + COPILOT.md the digest is the sha256 of the empty string. + """ + + items = sorted(list_tracked_blobs(root, folder, ref)) + digest = hashlib.sha256() + for rel_path, blob_sha in items: + digest.update(f"{rel_path}:{blob_sha}\n".encode("utf-8")) + return digest.hexdigest() diff --git a/.github/detect_copilot_needed.py b/.github/detect_copilot_needed.py new file mode 100644 index 0000000000..595aab67b8 --- /dev/null +++ b/.github/detect_copilot_needed.py @@ -0,0 +1,245 @@ +#!/usr/bin/env python3 +""" +detect_copilot_needed.py — Identify folders with code/config changes that likely require COPILOT.md updates. + +Intended for CI (advisory or failing), and for local pre-commit checks. + +Logic: +- Compute changed files between a base and head ref (or since a rev). +- Consider only changes under Src/** that match code/config extensions. +- Group by top-level folder: Src//... +- For each impacted folder, compare the folder's current git tree hash to the + `last-reviewed-tree` recorded in COPILOT.md and track whether the doc changed. +- Report folders whose hashes no longer match or whose docs are missing. +- Optionally validate changed COPILOT.md files with check_copilot_docs.py. 
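+
+For reference, the frontmatter block this script reads from each COPILOT.md has the
+shape written by fill_copilot_frontmatter.py (values below are illustrative only):
+
+  ---
+  last-reviewed: 2025-11-03
+  last-reviewed-tree: <sha256 over sorted "path:blob_sha" lines for the folder>
+  status: draft
+  ---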
+ +Exit codes: + 0 = no issues (either no impacted folders or all have COPILOT.md changes, and validations passed) + 1 = advisory warnings (impacted folders without COPILOT.md updated), when --strict not set + 2 = strict failure when --strict is set and there are issues, or validation fails + +Examples: + python .github/detect_copilot_needed.py --base origin/release/9.3 --head HEAD --strict + python .github/detect_copilot_needed.py --since origin/release/9.3 +""" +import argparse +import json +import os +import subprocess +from pathlib import Path +from typing import Dict, Optional, Tuple + +from copilot_tree_hash import compute_folder_tree_hash + +CODE_EXTS = { + ".cs", + ".cpp", + ".cc", + ".c", + ".h", + ".hpp", + ".ixx", + ".xml", + ".xsl", + ".xslt", + ".xsd", + ".dtd", + ".xaml", + ".resx", + ".config", + ".csproj", + ".vcxproj", + ".props", + ".targets", +} + + +def run(cmd, cwd=None): + return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT).decode( + "utf-8", errors="replace" + ) + + +def git_changed_files( + root: Path, base: str = None, head: str = "HEAD", since: str = None +): + if since: + diff_range = f"{since}..{head}" + elif base: + diff_range = f"{base}..{head}" + else: + # Fallback: compare to merge-base with origin/HEAD (best effort) + try: + mb = run(["git", "merge-base", head, "origin/HEAD"], cwd=str(root)).strip() + diff_range = f"{mb}..{head}" + except Exception: + diff_range = f"HEAD~1..{head}" + out = run(["git", "diff", "--name-only", diff_range], cwd=str(root)) + return [l.strip().replace("\\", "/") for l in out.splitlines() if l.strip()] + + +def top_level_src_folder(path: str): + # Expect paths like Src//... + parts = path.split("/") + if len(parts) >= 2 and parts[0] == "Src": + return "/".join(parts[:2]) # Src/Folder + return None + + +def parse_frontmatter(path: Path) -> Tuple[Optional[Dict[str, str]], str]: + if not path.exists(): + return None, "" + text = path.read_text(encoding="utf-8", errors="replace") + lines = text.splitlines() + if len(lines) >= 3 and lines[0].strip() == "---": + end_idx = -1 + for i in range(1, min(len(lines), 200)): + if lines[i].strip() == "---": + end_idx = i + break + if end_idx == -1: + return None, text + fm_lines = lines[1:end_idx] + fm: Dict[str, str] = {} + for l in fm_lines: + l = l.strip() + if not l or l.startswith("#"): + continue + if ":" in l: + k, v = l.split(":", 1) + fm[k.strip()] = v.strip().strip('"') + return fm, "\n".join(lines[end_idx + 1 :]) + return None, "" + + +def main(): + ap = argparse.ArgumentParser() + ap.add_argument("--root", default=str(Path.cwd())) + ap.add_argument( + "--base", default=None, help="Base git ref (e.g., origin/release/9.3)" + ) + ap.add_argument("--head", default="HEAD", help="Head ref (default HEAD)") + ap.add_argument( + "--since", default=None, help="Alternative to base/head: since this ref" + ) + ap.add_argument("--json", dest="json_out", default=None) + ap.add_argument( + "--validate-changed", + action="store_true", + help="Validate changed COPILOT.md with check_copilot_docs.py", + ) + ap.add_argument("--strict", action="store_true", help="Exit non-zero on issues") + args = ap.parse_args() + + root = Path(args.root).resolve() + changed = git_changed_files(root, base=args.base, head=args.head, since=args.since) + + impacted: Dict[str, set] = {} + copilot_changed = set() + for p in changed: + if p.endswith("/COPILOT.md"): + copilot_changed.add(p) + # Only care about Src/** files that look like code/config + if not p.startswith("Src/"): + continue + if 
p.endswith("/COPILOT.md"): + continue + _, ext = os.path.splitext(p) + if ext.lower() not in CODE_EXTS: + continue + folder = top_level_src_folder(p) + if folder: + impacted.setdefault(folder, set()).add(p) + + results = [] + issues = 0 + for folder, files in sorted(impacted.items()): + copath_rel = f"{folder}/COPILOT.md" + copath = root / copath_rel + folder_path = root / folder + doc_changed = copath_rel in copilot_changed + reasons = [] + recorded_hash: Optional[str] = None + fm, _ = parse_frontmatter(copath) + if not copath.exists(): + reasons.append("COPILOT.md missing") + elif not fm: + reasons.append("frontmatter missing") + else: + recorded_hash = fm.get("last-reviewed-tree") + if not recorded_hash or recorded_hash.startswith("FIXME"): + reasons.append("last-reviewed-tree missing or placeholder") + current_hash: Optional[str] = None + hash_error: Optional[str] = None + if folder_path.exists(): + try: + current_hash = compute_folder_tree_hash( + root, folder_path, ref=args.head + ) + except Exception as exc: # pragma: no cover - diagnostics only + hash_error = str(exc) + reasons.append("unable to compute tree hash") + else: + reasons.append("folder missing at head ref") + + if current_hash and recorded_hash and current_hash == recorded_hash: + up_to_date = True + else: + up_to_date = False + if current_hash and recorded_hash and current_hash != recorded_hash: + reasons.append("tree hash mismatch") + if not doc_changed and not reasons: + # Defensive catch-all + reasons.append("COPILOT.md not updated") + + entry = { + "folder": folder, + "files_changed": sorted(files), + "copilot_path": copath_rel, + "copilot_changed": doc_changed, + "last_reviewed_tree": recorded_hash, + "current_tree": current_hash, + "status": "OK" if up_to_date else "STALE", + "reasons": reasons, + } + if hash_error: + entry["hash_error"] = hash_error + if not up_to_date: + issues += 1 + results.append(entry) + + # Optional validation for changed COPILOT.md files + validation_failures = [] + if args.validate_changed and copilot_changed: + try: + cmd = ["python", ".github/check_copilot_docs.py", "--fail"] + # Limit to changed files by setting CWD and relying on script to scan all; keep simple + run(cmd, cwd=str(root)) + except subprocess.CalledProcessError as e: + validation_failures.append(e.output.decode("utf-8", errors="replace")) + issues += 1 + + print(f"Impacted folders: {len(impacted)}") + for e in results: + if e["status"] == "OK": + detail = "hash aligned" + else: + detail = ", ".join(e["reasons"]) if e["reasons"] else "hash mismatch" + print(f"- {e['folder']}: {e['status']} ({detail})") + + if validation_failures: + print("\nValidation failures from check_copilot_docs.py:") + for vf in validation_failures: + print(vf) + + if args.json_out: + with open(args.json_out, "w", encoding="utf-8") as f: + json.dump({"impacted": results}, f, indent=2) + + if args.strict and issues: + return 2 + return 0 if not issues else 1 + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/.github/fill_copilot_frontmatter.py b/.github/fill_copilot_frontmatter.py new file mode 100644 index 0000000000..46c7ab474b --- /dev/null +++ b/.github/fill_copilot_frontmatter.py @@ -0,0 +1,143 @@ +#!/usr/bin/env python3 +""" +fill_copilot_frontmatter.py — Ensure COPILOT.md frontmatter exists and has required fields. + +Behavior: +- If frontmatter missing: insert with provided values at top of file. +- If present: fill missing fields only; preserve existing values. +- last-reviewed defaults to today (YYYY-MM-DD) if missing. 
+- last-reviewed-tree defaults to the folder hash at the selected ref if not provided. + +Usage: + python .github/fill_copilot_frontmatter.py [--root ] [--status draft|verified] [--ref ] [--dry-run] +""" +import argparse +import datetime as dt +import sys +from pathlib import Path +from typing import Optional + +from copilot_tree_hash import compute_folder_tree_hash + + +def find_repo_root(start: Path) -> Path: + p = start.resolve() + while p != p.parent: + if (p / ".git").exists(): + return p + p = p.parent + return start.resolve() + + +def parse_frontmatter(text: str): + lines = text.splitlines() + if len(lines) >= 3 and lines[0].strip() == "---": + # Find closing '---' + end_idx = -1 + for i in range(1, min(len(lines), 200)): + if lines[i].strip() == "---": + end_idx = i + break + if end_idx == -1: + return None, text + fm_lines = lines[1:end_idx] + fm = {} + for l in fm_lines: + l = l.strip() + if not l or l.startswith("#"): + continue + if ":" in l: + k, v = l.split(":", 1) + fm[k.strip()] = v.strip().strip('"') + body = "\n".join(lines[end_idx + 1 :]) + return fm, body + return None, text + + +def render_frontmatter(fm: dict) -> str: + lines = ["---"] + for k in ["last-reviewed", "last-reviewed-tree", "status"]: + if k in fm and fm[k] is not None: + lines.append(f"{k}: {fm[k]}") + lines.append("---") + return "\n".join(lines) + "\n" + + +def ensure_frontmatter( + path: Path, status: str, folder_hash: Optional[str], dry_run=False +) -> bool: + text = path.read_text(encoding="utf-8", errors="replace") + fm, body = parse_frontmatter(text) + today = dt.date.today().strftime("%Y-%m-%d") + changed = False + hash_value = folder_hash or "FIXME(set-tree-hash)" + if not fm: + fm = { + "last-reviewed": today, + "last-reviewed-tree": hash_value, + "status": status or "draft", + } + new_text = render_frontmatter(fm) + body + changed = True + else: + # Fill missing + if not fm.get("last-reviewed"): + fm["last-reviewed"] = today + changed = True + if "last-verified-commit" in fm: + fm.pop("last-verified-commit") + changed = True + existing_hash = fm.get("last-reviewed-tree") + if existing_hash != hash_value: + fm["last-reviewed-tree"] = hash_value + changed = True + if not fm.get("status"): + fm["status"] = status or "draft" + changed = True + if changed: + new_text = render_frontmatter(fm) + body + else: + new_text = text + + if changed and not dry_run: + path.write_text(new_text, encoding="utf-8") + return changed + + +def main(): + ap = argparse.ArgumentParser() + ap.add_argument("--root", default=str(find_repo_root(Path.cwd()))) + ap.add_argument("--status", default="draft") + ap.add_argument("--ref", "--commit", dest="ref", default="HEAD") + ap.add_argument("--dry-run", action="store_true") + args = ap.parse_args() + + root = Path(args.root).resolve() + src = root / "Src" + if not src.exists(): + print(f"ERROR: Src/ not found under {root}") + return 2 + + changed_count = 0 + hash_cache = {} + for copath in src.rglob("COPILOT.md"): + parent = copath.parent + folder_hash = hash_cache.get(parent) + if folder_hash is None: + try: + folder_hash = compute_folder_tree_hash(root, parent, ref=args.ref) + except Exception as exc: # pragma: no cover - defensive logging only + print(f"WARNING: unable to compute tree hash for {parent}: {exc}") + folder_hash = None + hash_cache[parent] = folder_hash + + if ensure_frontmatter(copath, args.status, folder_hash, dry_run=args.dry_run): + print(f"Updated: {copath}") + changed_count += 1 + + print(f"Frontmatter ensured. 
Files changed: {changed_count}") + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/.github/instructions/build.instructions.md b/.github/instructions/build.instructions.md new file mode 100644 index 0000000000..7464afd5fb --- /dev/null +++ b/.github/instructions/build.instructions.md @@ -0,0 +1,24 @@ +--- +applyTo: "**/*" +description: "FieldWorks build guidelines and inner-loop tips" +--- +# Build guidelines and inner-loop tips + +## Context loading +- Always initialize the environment when using scripts: `source ./environ`. +- Prefer top-level build scripts or FW.sln to avoid dependency misconfiguration. + +## Deterministic requirements +- Inner loop: Use incremental builds; avoid full clean unless necessary. +- Choose the right path: + - Scripts: `bash ./agent-build-fw.sh` mirrors CI locally. + - MSBuild: `msbuild FW.sln /m /p:Configuration=Debug`. +- Installer: Skip unless you change installer logic. + +## Structured output +- Keep build logs for failures; scan for first error. +- Don’t modify `Build/` targets lightly; coordinate changes. + +## References +- `.github/workflows/` for CI steps +- `Build/` for targets/props and build infrastructure diff --git a/.github/instructions/installer.instructions.md b/.github/instructions/installer.instructions.md new file mode 100644 index 0000000000..b4f04caf3c --- /dev/null +++ b/.github/instructions/installer.instructions.md @@ -0,0 +1,23 @@ +--- +applyTo: "FLExInstaller/**" +description: "FieldWorks installer (WiX) development guidelines" +--- +# Installer development guidelines (WiX) + +## Context loading +- Only build the installer when changing installer logic or packaging; prefer app/library builds in inner loop. +- Review `FLExInstaller/` and related `.wxs/.wixproj` files; confirm WiX 3.11.x tooling. + +## Deterministic requirements +- Versioning: Maintain consistent ProductCode/UpgradeCode policies; ensure patches use higher build numbers than bases. +- Components/Features: Keep component GUID stability; avoid reshuffling that breaks upgrades. +- Files: Use build outputs; avoid hand-copying artifacts. +- Localization: Ensure installer strings align with repository localization patterns. + +## Structured output +- Always validate a local installer build when touching installer config. +- Keep changes minimal and documented in commit messages. + +## References +- Build: See `Build/Installer.targets` and top-level build scripts. +- CI: Patch/base installer workflows live under `.github/workflows/`. diff --git a/.github/instructions/managed.instructions.md b/.github/instructions/managed.instructions.md new file mode 100644 index 0000000000..2ff4e4ba50 --- /dev/null +++ b/.github/instructions/managed.instructions.md @@ -0,0 +1,26 @@ +--- +applyTo: "**/*.{cs,xaml,config,resx}" +description: "FieldWorks managed (.NET/C#) development guidelines" +--- +# Managed development guidelines for C# and .NET + +## Context loading +- Review `.github/src-catalog.md` and `Src//COPILOT.md` for component responsibilities and entry points. +- Follow localization patterns (use .resx resources; avoid hardcoded UI strings). Crowdin sync is configured via `crowdin.json`. + +## Deterministic requirements +- Threading: UI code must run on the UI thread; prefer async patterns for long-running work. Avoid deadlocks; do not block the UI. +- Exceptions: Fail fast for unrecoverable errors; log context. Avoid swallowing exceptions. +- Encoding: Favor UTF-16/UTF-8; be explicit at interop boundaries; avoid locale-dependent APIs. 
+- Tests: Co-locate unit/integration tests under `Src/.Tests` (NUnit patterns are common). Keep tests deterministic and portable. +- Resources: Place images/strings in resource files; avoid absolute paths; respect `.editorconfig`. + +## Structured output +- Public APIs include XML docs; keep namespaces consistent. +- Include minimal tests (happy path + one edge case) when modifying behavior. +- Follow existing project/solution structure; avoid creating new top-level patterns without consensus. + +## References +- Build: `bash ./agent-build-fw.sh` or `msbuild FW.sln /m /p:Configuration=Debug` +- Tests: Use Test Explorer or `dotnet test` for SDK-style; NUnit console for .NET Framework assemblies. +- Localization: See `DistFiles/CommonLocalizations/` and `crowdin.json`. diff --git a/.github/instructions/native.instructions.md b/.github/instructions/native.instructions.md new file mode 100644 index 0000000000..4e6e3e700e --- /dev/null +++ b/.github/instructions/native.instructions.md @@ -0,0 +1,25 @@ +--- +applyTo: "**/*.{cpp,h,hpp,cc,ixx,def}" +description: "FieldWorks native (C++/C++-CLI) development guidelines" +--- +# Native development guidelines for C++ and C++/CLI + +## Context loading +- Review `Src//COPILOT.md` for managed/native boundaries and interop contracts. +- Include/Lib paths are injected by build props/targets; avoid ad-hoc project configs. + +## Deterministic requirements +- Memory & RAII: Prefer smart pointers and RAII for resource management. +- Interop boundaries: Define clear marshaling rules (strings/arrays/structs). Avoid throwing exceptions across managed/native boundaries; translate appropriately. +- SAL annotations and warnings: Use SAL where feasible; keep warning level strict; fix warnings, don’t suppress casually. +- Encoding: Be explicit about UTF-8/UTF-16 conversions; do not rely on locale defaults. +- Threading: Document thread-affinity for UI and shared objects. + +## Structured output +- Header hygiene: Minimize transitive includes; prefer forward declarations where reasonable. +- ABI stability: Avoid breaking binary interfaces used by C# or other native modules without coordinated changes. +- Tests: Favor deterministic unit tests; isolate filesystem/registry usage. + +## References +- Build: Use top-level solution/scripts to ensure props/targets are loaded. +- Interop: Coordinate with corresponding managed components in `Src/`. diff --git a/.github/instructions/testing.instructions.md b/.github/instructions/testing.instructions.md new file mode 100644 index 0000000000..4bb2ee0830 --- /dev/null +++ b/.github/instructions/testing.instructions.md @@ -0,0 +1,22 @@ +--- +applyTo: "**/*.{cs,cpp,h}" +description: "FieldWorks testing guidelines (unit/integration)" +--- +# Testing guidelines + +## Context loading +- Locate tests near their components (e.g., `Src/.Tests`). Some integration scenarios use `TestLangProj/` data. +- Determine test runner: SDK-style projects use `dotnet test`; .NET Framework often uses NUnit Console. + +## Deterministic requirements +- Keep tests hermetic: avoid external state; use test data under version control. +- Name tests for intent; include happy path and 1–2 edge cases. +- Timeouts: Use sensible limits; see `Build/TestTimeoutValues.xml` for reference values. + +## Structured output +- Provide clear Arrange/Act/Assert; minimal fixture setup. +- Prefer stable IDs and data to avoid flakiness. 
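As a concrete illustration of the shape described above, here is a minimal sketch in Python (FieldWorks tests themselves are NUnit/C#; the structure of hermetic data, Arrange/Act/Assert, and intent-revealing names carries over, and every name below is an assumption rather than real FieldWorks code):

```python
# Minimal sketch of a hermetic Arrange/Act/Assert test.
# All names here (split_wordforms, SplitWordformsTests) are illustrative.
import unittest


def split_wordforms(text):
    """Toy system under test: split a string into non-empty wordforms."""
    return [w for w in text.split() if w]


class SplitWordformsTests(unittest.TestCase):
    def test_splits_simple_sentence_into_wordforms(self):
        # Arrange: data lives inline and under version control, no external state
        text = "kick the ball"

        # Act
        result = split_wordforms(text)

        # Assert
        self.assertEqual(result, ["kick", "the", "ball"])

    def test_empty_input_yields_no_wordforms(self):
        # One edge case, named for intent
        self.assertEqual(split_wordforms(""), [])


if __name__ == "__main__":
    unittest.main()
```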
+ +## References +- Build/test scripts: `agent-build-fw.sh`, `Build/build-recent` +- Test data: `TestLangProj/` diff --git a/.github/memory.md b/.github/memory.md new file mode 100644 index 0000000000..f44046a989 --- /dev/null +++ b/.github/memory.md @@ -0,0 +1,10 @@ +# FieldWorks agent memory (curated) + +Use this file to capture decisions and pitfalls that help future agent sessions. +Keep it concise and high-value. + +- Managed ↔ Native boundaries must be coordinated. Avoid throwing exceptions across the boundary; marshal explicitly. +- UI strings should come from .resx; avoid hardcoded user-visible text (Crowdin is configured). +- Prefer CI-style build scripts for reproducibility; installer builds are slow—run only when needed. +- Integration tests often rely on `TestLangProj/`; keep data deterministic. +- Keep `.editorconfig` and CI checks in mind: trailing whitespace, final newline, commit message format. diff --git a/.github/option3-plan.md b/.github/option3-plan.md new file mode 100644 index 0000000000..062522f5a2 --- /dev/null +++ b/.github/option3-plan.md @@ -0,0 +1,49 @@ +# Option 3 plan: Outer-loop automation and MCP integration (pilot later) + +This plan is mothballed for now. It captures the steps to bring our agent workflows into CI/CD with safe tool boundaries. + +## Goals +- Run selected prompts reliably in CI (e.g., spec validation, test failure triage) +- Use least-privilege MCP tools per role/chat mode +- Package agent primitives for sharing and repeatability + +## Steps + +### 1) Copilot CLI and APM scaffold +- Add `apm.yml` with scripts mapping to our prompts (e.g., `copilot-feature-spec` → feature-spec.prompt.md) +- Include MCP dependencies (e.g., `ghcr.io/github/github-mcp-server`) +- Document local usage in README: `apm install`, `apm run copilot-feature-spec --param specFile=...` + +### 2) GitHub Action to run a prompt on PR +- Create `.github/workflows/agent-workflow.yml` +- Matrix run for selected scripts (e.g., spec validation, debug mode) +- Permissions: `pull-requests: write`, `contents: read`, `models: read` +- Post results as PR comments or check summaries + +### 3) MCP servers and boundaries +- Start with GitHub MCP server for PR/issue context and Filesystem MCP for repo search +- Restrict tools by chat mode (e.g., installer mode cannot edit native code) +- Maintain a curated list in `.github/context/mcp.servers.md` (to be created when piloting) + +### 4) Security and secrets +- Use `secrets.COPILOT_CLI_PAT` for Copilot CLI (if needed) +- Principle of least privilege for tokens and tool scopes +- Add a security review checklist for new tools/servers + +### 5) Governance and validation +- Add a `lint-docs` CI job to verify presence and links for: + - `.github/instructions/*.instructions.md` + - `Src/*/COPILOT.md` + - `.github/src-catalog.md` +- Add a `prompt-validate` job: checks frontmatter structure for `.prompt.md` + +### 6) Rollout strategy +- Pilot a single prompt (e.g., `test-failure-debug.prompt.md`) that makes no file edits and only posts analysis +- Gather feedback and iterate before enabling write-capable workflows + +## References +- `.github/copilot-instructions.md` (entry points) +- `.github/prompts/` (agent workflows) +- `.github/instructions/` (domain rules) +- `.github/chatmodes/` (role boundaries) +- `.github/context/` and `.github/memory.md` (signals and decisions) diff --git a/.github/prompts/bugfix.prompt.md b/.github/prompts/bugfix.prompt.md new file mode 100644 index 0000000000..6339546c1d --- /dev/null +++ 
b/.github/prompts/bugfix.prompt.md @@ -0,0 +1,37 @@ +# Bugfix workflow (triage → RCA → fix) + +You are an expert FieldWorks engineer. Triage and fix a defect with a validation gate before code changes. + +## Inputs +- failure description or issue link: ${issue} +- logs or stack trace (optional): ${logs} + +## Triage +1) Summarize the failure and affected components +2) Reproduce locally if possible; capture steps or failing test +3) Identify recent changes that could be related + +## Root cause analysis (RCA) +- Hypothesize likely causes (3 candidates) and quick tests to confirm/deny +- Note any managed/native or installer boundary implications + +## Validation gate (STOP) +Do not change files yet. Present: +- Root cause hypothesis and evidence +- Proposed fix (minimal diff) and test changes +- Risk assessment and fallback plan + +Wait for approval before proceeding. + +## Implementation +- Apply the minimal fix aligned with repository conventions +- Ensure localization, threading, and interop rules are respected + +## Tests +- Add/adjust tests to reproduce the original failure and verify the fix +- Prefer deterministic tests; update `TestLangProj/` data only if necessary + +## Handoff checklist +- [ ] Build and local tests pass +- [ ] Commit messages conform to gitlint rules +- [ ] COPILOT.md updated if behavior/contract changed diff --git a/.github/prompts/copilot-docs-update.prompt.md b/.github/prompts/copilot-docs-update.prompt.md new file mode 100644 index 0000000000..3ae24c2dfa --- /dev/null +++ b/.github/prompts/copilot-docs-update.prompt.md @@ -0,0 +1,45 @@ +# Copilot task: Update COPILOT.md for changed folders (detect → propose → validate) + +Purpose: Run a reliable 3-step flow to ensure COPILOT.md files are updated whenever code/config changes in `Src/**` are made. + +Context: FieldWorks repository. Scripts referenced below exist under `.github/` and are documented in `.github/update-copilot-summaries.md`. + +Inputs: +- base_ref: optional git ref to diff against (default behavior compares to the repo default via origin/HEAD) +- status: draft|verified (default: draft) + +Success criteria: +- All impacted `Src/` with code/config changes either updated their `COPILOT.md` in the same diff, or the proposer script generated updates. +- `check_copilot_docs.py --fail` passes (frontmatter, headings, references best-effort mapping). + +Steps: +1) Detect impacted folders + - Run: `python .github/detect_copilot_needed.py --strict` (or pass `--base origin/` explicitly) + - Collect the set of folders reported as missing `COPILOT.md` updates or with stale `last-reviewed-tree` hashes. + +2) Propose/prepare updates for those folders + - Run: `python .github/scaffold_copilot_markdown.py --status ` (optionally add `--ref ` to pin the tree hash; defaults to the head ref used in detection) + - This ensures frontmatter (including `last-reviewed-tree`) and all required headings exist and appends a "References (auto-generated hints)" section. + - Do not remove human-written content; only append/fix structure. + +3) Follow the three-pass workflow in `.github/update-copilot-summaries.md` for each folder: + - Pass 1 (Comprehension): read the existing `COPILOT.md` alongside the folder’s code/data to draft an accurate purpose/architecture summary. + - Pass 2 (Contracts & dependencies): verify upstream/downstream links, interop boundaries, configs, and edge cases with concrete references. 
+ - Pass 3 (Synthesis): keep sections in canonical order, resolve or explain `FIXME()` markers, and remove scaffold leftovers such as duplicate `## References` or `## Auto Summary` blocks. + Always read the relevant source files—avoid speculation and keep every statement grounded in code or assets. + +4) Validate documentation integrity + - Run: `python .github/check_copilot_docs.py --only-changed --fail --verbose` + - If failures occur, iterate step 2 or manually edit `Src//COPILOT.md` to address missing headings, placeholders, or reference issues. Re-run until green. + +5) Commit and summarize + - Include a concise summary of impacted folders and changes. + - Example message: `docs(copilot): update COPILOT.md for , ; ensure frontmatter & skeleton; add auto refs` + +Notes: +- VS Code tasks are available for convenience: + - "COPILOT: Detect updates needed" + - "COPILOT: Propose updates for changed folders" + - "COPILOT: Validate COPILOT docs (changed only)" + - "COPILOT: Update flow (detect → propose → validate)" +- In CI, `.github/workflows/copilot-docs-detect.yml` runs the detector and (optionally) the validator in advisory mode. diff --git a/.github/prompts/feature-spec.prompt.md b/.github/prompts/feature-spec.prompt.md new file mode 100644 index 0000000000..0cbceeecc1 --- /dev/null +++ b/.github/prompts/feature-spec.prompt.md @@ -0,0 +1,40 @@ +# Feature implementation from specification + +You are an expert FieldWorks engineer. Implement a feature using a spec-first, validation-gated workflow. Do not modify files until after the validation gate is approved. + +## Inputs +- spec file: ${specFile} + +## Context loading +1) Read the spec at ${specFile} +2) Skim `.github/src-catalog.md` and relevant `Src//COPILOT.md` guides +3) Check build/test constraints in `.github/instructions/*.instructions.md` + +## Plan +- Identify impacted components (managed/native/installer) +- List files to add/modify, and any cross-boundary implications +- Outline tests (unit/integration) and data needed from `TestLangProj/` + +## Validation gate (STOP) +Do not change files yet. Present: +- Summary of the change +- Affected components and risks +- Test strategy (coverage and edge cases) +- Rollback considerations + +Wait for approval before proceeding. + +## Implementation +- Make minimal, incremental changes aligned with the approved plan +- Follow localization and resource patterns (.resx; avoid hardcoded strings) +- Keep interop boundaries explicit (marshaling rules) + +## Tests +- Add/modify tests near affected components +- Ensure deterministic outcomes; avoid relying on external state + +## Handoff checklist +- [ ] Code compiles and local build passes +- [ ] Tests added/updated and pass locally +- [ ] COPILOT.md updated if architecture meaningfully changed +- [ ] `.github/src-catalog.md` updated if folder purpose changed diff --git a/.github/prompts/speckit.analyze.prompt.md b/.github/prompts/speckit.analyze.prompt.md new file mode 100644 index 0000000000..542a3dec1e --- /dev/null +++ b/.github/prompts/speckit.analyze.prompt.md @@ -0,0 +1,184 @@ +--- +description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation. +--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Goal + +Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. 
This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`. + +## Operating Constraints + +**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually). + +**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`. + +## Execution Steps + +### 1. Initialize Analysis Context + +Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths: + +- SPEC = FEATURE_DIR/spec.md +- PLAN = FEATURE_DIR/plan.md +- TASKS = FEATURE_DIR/tasks.md + +Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command). +For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +### 2. Load Artifacts (Progressive Disclosure) + +Load only the minimal necessary context from each artifact: + +**From spec.md:** + +- Overview/Context +- Functional Requirements +- Non-Functional Requirements +- User Stories +- Edge Cases (if present) + +**From plan.md:** + +- Architecture/stack choices +- Data Model references +- Phases +- Technical constraints + +**From tasks.md:** + +- Task IDs +- Descriptions +- Phase grouping +- Parallel markers [P] +- Referenced file paths + +**From constitution:** + +- Load `.specify/memory/constitution.md` for principle validation + +### 3. Build Semantic Models + +Create internal representations (do not include raw artifacts in output): + +- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`) +- **User story/action inventory**: Discrete user actions with acceptance criteria +- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases) +- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements + +### 4. Detection Passes (Token-Efficient Analysis) + +Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary. + +#### A. Duplication Detection + +- Identify near-duplicate requirements +- Mark lower-quality phrasing for consolidation + +#### B. Ambiguity Detection + +- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria +- Flag unresolved placeholders (TODO, TKTK, ???, ``, etc.) + +#### C. Underspecification + +- Requirements with verbs but missing object or measurable outcome +- User stories missing acceptance criteria alignment +- Tasks referencing files or components not defined in spec/plan + +#### D. Constitution Alignment + +- Any requirement or plan element conflicting with a MUST principle +- Missing mandated sections or quality gates from constitution + +#### E. 
Coverage Gaps + +- Requirements with zero associated tasks +- Tasks with no mapped requirement/story +- Non-functional requirements not reflected in tasks (e.g., performance, security) + +#### F. Inconsistency + +- Terminology drift (same concept named differently across files) +- Data entities referenced in plan but absent in spec (or vice versa) +- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note) +- Conflicting requirements (e.g., one requires Next.js while other specifies Vue) + +### 5. Severity Assignment + +Use this heuristic to prioritize findings: + +- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality +- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion +- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case +- **LOW**: Style/wording improvements, minor redundancy not affecting execution order + +### 6. Produce Compact Analysis Report + +Output a Markdown report (no file writes) with the following structure: + +## Specification Analysis Report + +| ID | Category | Severity | Location(s) | Summary | Recommendation | +|----|----------|----------|-------------|---------|----------------| +| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version | + +(Add one row per finding; generate stable IDs prefixed by category initial.) + +**Coverage Summary Table:** + +| Requirement Key | Has Task? | Task IDs | Notes | +|-----------------|-----------|----------|-------| + +**Constitution Alignment Issues:** (if any) + +**Unmapped Tasks:** (if any) + +**Metrics:** + +- Total Requirements +- Total Tasks +- Coverage % (requirements with >=1 task) +- Ambiguity Count +- Duplication Count +- Critical Issues Count + +### 7. Provide Next Actions + +At end of report, output a concise Next Actions block: + +- If CRITICAL issues exist: Recommend resolving before `/speckit.implement` +- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions +- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'" + +### 8. Offer Remediation + +Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.) 
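To make the requirement inventory, task coverage mapping, and metrics concrete, a minimal sketch follows. The analysis itself is performed by the agent reading the artifacts; the data shapes, slug rule, and ID prefix below are assumptions for illustration only, not part of the spec-kit tooling:

```python
# Illustrative sketch of steps 3 and 6: stable requirement keys, task coverage
# mapping, coverage %, and a gap finding with a category-prefixed ID.
import re


def requirement_key(text):
    """Derive a stable slug from an imperative requirement phrase."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")


requirements = ["User can upload file", "System validates file size"]
tasks = [
    {"id": "T001", "covers": ["user-can-upload-file"]},
    {"id": "T002", "covers": []},  # unmapped task, reported separately
]

coverage = {
    requirement_key(r): [t["id"] for t in tasks if requirement_key(r) in t["covers"]]
    for r in requirements
}
covered = sum(1 for ids in coverage.values() if ids)
print(f"Coverage: {100 * covered / len(coverage):.0f}% of requirements have >=1 task")
for n, (key, ids) in enumerate(coverage.items(), start=1):
    if not ids:
        # Zero-coverage requirements are CRITICAL when they block baseline functionality
        print(f"C{n} | Coverage Gap | CRITICAL | {key} has no associated tasks")
```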
+ +## Operating Principles + +### Context Efficiency + +- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation +- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis +- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow +- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts + +### Analysis Guidelines + +- **NEVER modify files** (this is read-only analysis) +- **NEVER hallucinate missing sections** (if absent, report them accurately) +- **Prioritize constitution violations** (these are always CRITICAL) +- **Use examples over exhaustive rules** (cite specific instances, not generic patterns) +- **Report zero issues gracefully** (emit success report with coverage statistics) + +## Context + +$ARGUMENTS diff --git a/.github/prompts/speckit.checklist.prompt.md b/.github/prompts/speckit.checklist.prompt.md new file mode 100644 index 0000000000..b15f9160db --- /dev/null +++ b/.github/prompts/speckit.checklist.prompt.md @@ -0,0 +1,294 @@ +--- +description: Generate a custom checklist for the current feature based on user requirements. +--- + +## Checklist Purpose: "Unit Tests for English" + +**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain. + +**NOT for verification/testing**: + +- ❌ NOT "Verify the button clicks correctly" +- ❌ NOT "Test error handling works" +- ❌ NOT "Confirm the API returns 200" +- ❌ NOT checking if code/implementation matches the spec + +**FOR requirements quality validation**: + +- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness) +- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity) +- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency) +- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage) +- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases) + +**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works. + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Execution Steps + +1. **Setup**: Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list. + - All file paths must be absolute. + - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST: + - Be generated from the user's phrasing + extracted signals from spec/plan/tasks + - Only ask about information that materially changes checklist content + - Be skipped individually if already unambiguous in `$ARGUMENTS` + - Prefer precision over breadth + + Generation algorithm: + 1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts"). + 2. Cluster signals into candidate focus areas (max 4) ranked by relevance. 
+ 3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit. + 4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria. + 5. Formulate questions chosen from these archetypes: + - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?") + - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?") + - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?") + - Audience framing (e.g., "Will this be used by the author only or peers during PR review?") + - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?") + - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?") + + Question formatting rules: + - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters + - Limit to A–E options maximum; omit table if a free-form answer is clearer + - Never ask the user to restate what they already said + - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope." + + Defaults when interaction impossible: + - Depth: Standard + - Audience: Reviewer (PR) if code-related; Author otherwise + - Focus: Top 2 relevance clusters + + Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more. + +3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers: + - Derive checklist theme (e.g., security, review, deploy, ux) + - Consolidate explicit must-have items mentioned by user + - Map focus selections to category scaffolding + - Infer any missing context from spec/plan/tasks (do NOT hallucinate) + +4. **Load feature context**: Read from FEATURE_DIR: + - spec.md: Feature requirements and scope + - plan.md (if exists): Technical details, dependencies + - tasks.md (if exists): Implementation tasks + + **Context Loading Strategy**: + - Load only necessary portions relevant to active focus areas (avoid full-file dumping) + - Prefer summarizing long sections into concise scenario/requirement bullets + - Use progressive disclosure: add follow-on retrieval only if gaps detected + - If source docs are large, generate interim summary items instead of embedding raw text + +5. **Generate checklist** - Create "Unit Tests for Requirements": + - Create `FEATURE_DIR/checklists/` directory if it doesn't exist + - Generate unique checklist filename: + - Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`) + - Format: `[domain].md` + - If file exists, append to existing file + - Number items sequentially starting from CHK001 + - Each `/speckit.checklist` run creates a NEW file (never overwrites existing checklists) + + **CORE PRINCIPLE - Test the Requirements, Not the Implementation**: + Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for: + - **Completeness**: Are all necessary requirements present? + - **Clarity**: Are requirements unambiguous and specific? + - **Consistency**: Do requirements align with each other? 
+ - **Measurability**: Can requirements be objectively verified? + - **Coverage**: Are all scenarios/edge cases addressed? + + **Category Structure** - Group items by requirement quality dimensions: + - **Requirement Completeness** (Are all necessary requirements documented?) + - **Requirement Clarity** (Are requirements specific and unambiguous?) + - **Requirement Consistency** (Do requirements align without conflicts?) + - **Acceptance Criteria Quality** (Are success criteria measurable?) + - **Scenario Coverage** (Are all flows/cases addressed?) + - **Edge Case Coverage** (Are boundary conditions defined?) + - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?) + - **Dependencies & Assumptions** (Are they documented and validated?) + - **Ambiguities & Conflicts** (What needs clarification?) + + **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**: + + ❌ **WRONG** (Testing implementation): + - "Verify landing page displays 3 episode cards" + - "Test hover states work on desktop" + - "Confirm logo click navigates home" + + ✅ **CORRECT** (Testing requirements quality): + - "Are the exact number and layout of featured episodes specified?" [Completeness] + - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity] + - "Are hover state requirements consistent across all interactive elements?" [Consistency] + - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage] + - "Is the fallback behavior specified when logo image fails to load?" [Edge Cases] + - "Are loading states defined for asynchronous episode data?" [Completeness] + - "Does the spec define visual hierarchy for competing UI elements?" [Clarity] + + **ITEM STRUCTURE**: + Each item should follow this pattern: + - Question format asking about requirement quality + - Focus on what's WRITTEN (or not written) in the spec/plan + - Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.] + - Reference spec section `[Spec §X.Y]` when checking existing requirements + - Use `[Gap]` marker when checking for missing requirements + + **EXAMPLES BY QUALITY DIMENSION**: + + Completeness: + - "Are error handling requirements defined for all API failure modes? [Gap]" + - "Are accessibility requirements specified for all interactive elements? [Completeness]" + - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]" + + Clarity: + - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]" + - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]" + - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]" + + Consistency: + - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]" + - "Are card component requirements consistent between landing and detail pages? [Consistency]" + + Coverage: + - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]" + - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]" + - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]" + + Measurability: + - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]" + - "Can 'balanced visual weight' be objectively verified? 
[Measurability, Spec §FR-2]" + + **Scenario Classification & Coverage** (Requirements Quality Focus): + - Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios + - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?" + - If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]" + - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]" + + **Traceability Requirements**: + - MINIMUM: ≥80% of items MUST include at least one traceability reference + - Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]` + - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]" + + **Surface & Resolve Issues** (Requirements Quality Problems): + Ask questions about the requirements themselves: + - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]" + - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]" + - Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]" + - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]" + - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]" + + **Content Consolidation**: + - Soft cap: If raw candidate items > 40, prioritize by risk/impact + - Merge near-duplicates checking the same requirement aspect + - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]" + + **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test: + - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior + - ❌ References to code execution, user actions, system behavior + - ❌ "Displays correctly", "works properly", "functions as expected" + - ❌ "Click", "navigate", "render", "load", "execute" + - ❌ Test cases, test plans, QA procedures + - ❌ Implementation details (frameworks, APIs, algorithms) + + **✅ REQUIRED PATTERNS** - These test requirements quality: + - ✅ "Are [requirement type] defined/specified/documented for [scenario]?" + - ✅ "Is [vague term] quantified/clarified with specific criteria?" + - ✅ "Are requirements consistent between [section A] and [section B]?" + - ✅ "Can [requirement] be objectively measured/verified?" + - ✅ "Are [edge cases/scenarios] addressed in requirements?" + - ✅ "Does the spec define [missing aspect]?" + +6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### ` lines with globally incrementing IDs starting at CHK001. + +7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize: + - Focus areas selected + - Depth level + - Actor/timing + - Any explicit user-specified must-have items incorporated + +**Important**: Each `/speckit.checklist` command invocation creates a checklist file using short, descriptive names unless file already exists. 
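A minimal sketch of that create-or-append behavior and the sequential CHK numbering (purely illustrative; the real flow is carried out by the agent, and the paths and helper name below are assumptions):

```python
# Illustrative only: create checklists/<domain>.md if missing, otherwise append,
# keeping CHK IDs globally sequential starting at CHK001.
import re
from pathlib import Path


def add_checklist_items(feature_dir, domain, items):
    checklist_dir = Path(feature_dir) / "checklists"
    checklist_dir.mkdir(parents=True, exist_ok=True)
    path = checklist_dir / f"{domain}.md"

    if path.exists():
        existing = path.read_text(encoding="utf-8")
    else:
        existing = f"# {domain.title()} Requirements Checklist\n"

    next_id = len(re.findall(r"CHK\d{3}", existing)) + 1
    lines = [existing.rstrip("\n")]
    for offset, item in enumerate(items):
        lines.append(f"- [ ] CHK{next_id + offset:03d} - {item}")
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return path


add_checklist_items(
    "specs/001-feature",
    "ux",
    ["Are hover state requirements consistently defined? [Consistency]"],
)
```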
This allows: + +- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`) +- Simple, memorable filenames that indicate checklist purpose +- Easy identification and navigation in the `checklists/` folder + +To avoid clutter, use descriptive types and clean up obsolete checklists when done. + +## Example Checklist Types & Sample Items + +**UX Requirements Quality:** `ux.md` + +Sample items (testing the requirements, NOT the implementation): + +- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]" +- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]" +- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]" +- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]" +- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]" +- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]" + +**API Requirements Quality:** `api.md` + +Sample items: + +- "Are error response formats specified for all failure scenarios? [Completeness]" +- "Are rate limiting requirements quantified with specific thresholds? [Clarity]" +- "Are authentication requirements consistent across all endpoints? [Consistency]" +- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]" +- "Is versioning strategy documented in requirements? [Gap]" + +**Performance Requirements Quality:** `performance.md` + +Sample items: + +- "Are performance requirements quantified with specific metrics? [Clarity]" +- "Are performance targets defined for all critical user journeys? [Coverage]" +- "Are performance requirements under different load conditions specified? [Completeness]" +- "Can performance requirements be objectively measured? [Measurability]" +- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]" + +**Security Requirements Quality:** `security.md` + +Sample items: + +- "Are authentication requirements specified for all protected resources? [Coverage]" +- "Are data protection requirements defined for sensitive information? [Completeness]" +- "Is the threat model documented and requirements aligned to it? [Traceability]" +- "Are security requirements consistent with compliance obligations? [Consistency]" +- "Are security failure/breach response requirements defined? [Gap, Exception Flow]" + +## Anti-Examples: What NOT To Do + +**❌ WRONG - These test implementation, not requirements:** + +```markdown +- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001] +- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003] +- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010] +- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005] +``` + +**✅ CORRECT - These test requirements quality:** + +```markdown +- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001] +- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003] +- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010] +- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005] +- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? 
[Gap] +- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001] +``` + +**Key Differences:** + +- Wrong: Tests if the system works correctly +- Correct: Tests if the requirements are written correctly +- Wrong: Verification of behavior +- Correct: Validation of requirement quality +- Wrong: "Does it do X?" +- Correct: "Is X clearly specified?" diff --git a/.github/prompts/speckit.clarify.prompt.md b/.github/prompts/speckit.clarify.prompt.md new file mode 100644 index 0000000000..4700d2975b --- /dev/null +++ b/.github/prompts/speckit.clarify.prompt.md @@ -0,0 +1,177 @@ +--- +description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec. +--- + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Outline + +Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file. + +Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases. + +Execution steps: + +1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -PathsOnly` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields: + - `FEATURE_DIR` + - `FEATURE_SPEC` + - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.) + - If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment. + - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot"). + +2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked). 
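The internal coverage map can be as small as a mapping from each taxonomy category below to its status, for example (a hedged sketch over a subset of the categories; the prompt does not mandate any particular data structure):

```python
# Illustrative only: one possible shape for the internal coverage map.
coverage_map = {
    "Functional Scope & Behavior": "Clear",
    "Domain & Data Model": "Partial",            # e.g. identity rules missing
    "Non-Functional Quality Attributes": "Missing",
    "Edge Cases & Failure Handling": "Partial",
}
unresolved = [cat for cat, status in coverage_map.items() if status != "Clear"]
# Candidate questions are drawn from `unresolved`, highest impact first.
```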
+ + Functional Scope & Behavior: + - Core user goals & success criteria + - Explicit out-of-scope declarations + - User roles / personas differentiation + + Domain & Data Model: + - Entities, attributes, relationships + - Identity & uniqueness rules + - Lifecycle/state transitions + - Data volume / scale assumptions + + Interaction & UX Flow: + - Critical user journeys / sequences + - Error/empty/loading states + - Accessibility or localization notes + + Non-Functional Quality Attributes: + - Performance (latency, throughput targets) + - Scalability (horizontal/vertical, limits) + - Reliability & availability (uptime, recovery expectations) + - Observability (logging, metrics, tracing signals) + - Security & privacy (authN/Z, data protection, threat assumptions) + - Compliance / regulatory constraints (if any) + + Integration & External Dependencies: + - External services/APIs and failure modes + - Data import/export formats + - Protocol/versioning assumptions + + Edge Cases & Failure Handling: + - Negative scenarios + - Rate limiting / throttling + - Conflict resolution (e.g., concurrent edits) + + Constraints & Tradeoffs: + - Technical constraints (language, storage, hosting) + - Explicit tradeoffs or rejected alternatives + + Terminology & Consistency: + - Canonical glossary terms + - Avoided synonyms / deprecated terms + + Completion Signals: + - Acceptance criteria testability + - Measurable Definition of Done style indicators + + Misc / Placeholders: + - TODO markers / unresolved decisions + - Ambiguous adjectives ("robust", "intuitive") lacking quantification + + For each category with Partial or Missing status, add a candidate question opportunity unless: + - Clarification would not materially change implementation or validation strategy + - Information is better deferred to planning phase (note internally) + +3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints: + - Maximum of 10 total questions across the whole session. + - Each question must be answerable with EITHER: + - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR + - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words"). + - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation. + - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved. + - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness). + - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests. + - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic. + +4. Sequential questioning loop (interactive): + - Present EXACTLY ONE question at a time. 
+ - For multiple‑choice questions: + - **Analyze all options** and determine the **most suitable option** based on: + - Best practices for the project type + - Common patterns in similar implementations + - Risk reduction (security, performance, maintainability) + - Alignment with any explicit project goals or constraints visible in the spec + - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice). + - Format as: `**Recommended:** Option [X] - ` + - Then render all options as a Markdown table: + + | Option | Description | + |--------|-------------| + | A |