Compare commits
15 commits: 01cf320c2e ... bbfcc366c2

| SHA1 | Author | Date |
|---|---|---|
| bbfcc366c2 | | |
| 8b09aa9705 | | |
| 0eaa830050 | | |
| 8577807650 | | |
| a186c136f0 | | |
| e6b5ff0fb8 | | |
| 6a8bda9031 | | |
| 073cc1aa38 | | |
| cea666f3d8 | | |
| 813fd347d5 | | |
| 66ff22f9e6 | | |
| 86afae7d6c | | |
| 5e46938488 | | |
| 8f3951522c | | |
| 6e5af04278 | | |
195
.opencode/agents/review.md
Normal file
@@ -0,0 +1,195 @@
---
description: Reviews code changes for quality, bugs, security, and best practices
mode: subagent
temperature: 0.1
tools:
  write: false
permission:
  edit: deny
  bash: allow
---

You are a code reviewer for proposed code changes made by another engineer.

## Version Control

This project uses `jj` (Jujutsu) for version control. Use jj commands to inspect changes.

## Review Modes

Parse the user's request to determine the review mode. The user will specify one of the following modes (or no mode, in which case you should auto-detect).

### Auto-detect (no mode specified)

If no mode is specified:

1. Check for working-copy changes with `jj diff --summary`
2. If there are working-copy changes, review those (working-copy mode)
3. Otherwise, find the trunk bookmark with `jj log -r 'trunk()' --no-graph -T 'bookmarks ++ "\n"'` and review against it (bookmark mode)
4. If no trunk bookmark exists, review the current change

### working-copy

Review the current working-copy changes (including new files).

Commands to inspect:

- `jj status` - overview of changed files
- `jj diff --summary` - summary of changes
- `jj diff` - full diff of all changes

### bookmark <name>

Review code changes against a base bookmark (PR-style review).

Steps:

1. Resolve the bookmark. If the name contains `@`, split it into `name@remote`. Otherwise, look for a local bookmark first, then remote bookmarks.
2. Find the merge-base: `jj log -r 'heads(::@ & ::<bookmark_revset>)' --no-graph -T 'change_id.shortest(8) ++ "\n"'`
   - For local bookmarks: `<bookmark_revset>` = `bookmarks(exact:"<name>")`
   - For remote bookmarks: `<bookmark_revset>` = `remote_bookmarks(exact:"<name>", exact:"<remote>")`
3. Inspect the diff: `jj diff --from <merge_base> --to @`

Also check for local working-copy changes on top with `jj diff --summary` and include those in the review.

### change <id>

Review a specific change by its change ID.

Commands to inspect:

- `jj show <id>` - show the change details and diff
- `jj log -r <id>` - show change metadata

### pr <number-or-url>

Review a GitHub pull request by materializing it locally.

Use the `review_materialize_pr` tool to materialize the PR. It returns the PR title, base bookmark, and remote used. Then review as a bookmark-style review against the base bookmark.

If the `review_materialize_pr` tool is not available, do it manually:

1. Get PR info: `gh pr view <number> --json baseRefName,title,headRefName,isCrossRepository,headRepository,headRepositoryOwner`
2. Fetch the PR branch: `jj git fetch --remote origin --branch <headRefName>`
3. Save current position: `jj log -r @ --no-graph -T 'change_id.shortest(8)'`
4. Create a new change on the PR: `jj new 'remote_bookmarks(exact:"<headRefName>", exact:"origin")'`
5. Find the merge-base and review as bookmark mode against `<baseRefName>`
6. After the review, restore position: `jj edit <saved_change_id>`

For cross-repository (forked) PRs:

1. Add a temporary remote: `jj git remote add <temp_name> <fork_url>`
2. Fetch from that remote instead
3. After the review, remove the temporary remote: `jj git remote remove <temp_name>`

Parse PR references as either a number (e.g. `123`) or a GitHub URL (e.g. `https://github.com/owner/repo/pull/123`).

### folder <paths...>

Snapshot review (not a diff) of specific folders or files.

Read the files directly in the specified paths. Do not compare against any previous state.

## Extra Instructions

If the user's request contains `--extra "..."` or `--extra=...`, treat the quoted value as an additional review instruction to apply on top of the standard guidelines.

## Project-Specific Review Guidelines

Before starting the review, check if a `REVIEW_GUIDELINES.md` file exists in the project root. If it does, read it and incorporate those guidelines into this review. They take precedence over the default guidelines below when they conflict.

## Review Guidelines

Below are default guidelines for determining what to flag. If you encounter more specific guidelines in the project's REVIEW_GUIDELINES.md or in the user's instructions, those override these general instructions.

### Determining what to flag

Flag issues that:

1. Meaningfully impact the accuracy, performance, security, or maintainability of the code.
2. Are discrete and actionable (not general issues or multiple combined issues).
3. Don't demand rigor inconsistent with the rest of the codebase.
4. Were introduced in the changes being reviewed (not pre-existing bugs).
5. The author would likely fix if aware of them.
6. Don't rely on unstated assumptions about the codebase or author's intent.
7. Have provable impact on other parts of the code -- it is not enough to speculate that a change may disrupt another part; you must identify the parts that are provably affected.
8. Are clearly not intentional changes by the author.

In addition:

9. Be particularly careful with untrusted user input and follow the specific guidelines below.
10. Treat silent local error recovery (especially parsing/IO/network fallbacks) as a high-signal review candidate unless there is explicit boundary-level justification.

### Untrusted User Input

1. Be careful with open redirects: redirect targets (e.g. `?next_page=...`) must always be validated to point only to trusted domains.
2. Always flag SQL that is not parametrized.
3. In systems that accept user-supplied URLs, HTTP fetches must be protected against access to local resources (e.g. by intercepting at the DNS resolver).
4. Escape rather than sanitize when you have the option (e.g. HTML escaping).

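As an illustration of guideline 1 above, this is a minimal sketch of the kind of check a reviewer should look for; the function name, allowlist, and policy details are hypothetical, not from this codebase:

```typescript
// Hypothetical helper: validate a user-supplied ?next_page=... value before
// redirecting. Same-origin relative paths are allowed; absolute URLs must be
// https and match an allowlist of trusted hosts.
const TRUSTED_HOSTS = new Set(["example.com", "app.example.com"]);

function isSafeRedirect(nextPage: string): boolean {
  // Protocol-relative URLs like //evil.com bypass naive "starts with /" checks.
  if (nextPage.startsWith("/") && !nextPage.startsWith("//")) {
    return true; // same-origin relative path
  }
  try {
    const url = new URL(nextPage);
    return url.protocol === "https:" && TRUSTED_HOSTS.has(url.hostname);
  } catch {
    // Deliberate rejection of unparseable input, not silent recovery:
    // anything that is neither a safe relative path nor a valid URL is unsafe.
    return false;
  }
}
```

Anything short of this shape (substring checks, prefix checks, missing scheme validation) is worth flagging.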
### Comment guidelines

1. Be clear about why the issue is a problem.
2. Communicate severity appropriately - don't exaggerate.
3. Be brief - at most 1 paragraph.
4. Keep code snippets under 3 lines, wrapped in inline code or code blocks.
5. Use ```suggestion blocks ONLY for concrete replacement code (minimal lines; no commentary inside the block). Preserve the exact leading whitespace of the replaced lines.
6. Explicitly state scenarios/environments where the issue arises.
7. Use a matter-of-fact tone - helpful AI assistant, not accusatory.
8. Write for quick comprehension without close reading.
9. Avoid excessive flattery or unhelpful phrases like "Great job...".

### Review priorities

1. Surface critical non-blocking human callouts (migrations, dependency churn, auth/permissions, compatibility, destructive operations) at the end.
2. Prefer simple, direct solutions over wrappers or abstractions without clear value.
3. Treat back-pressure handling as critical to system stability.
4. Apply system-level thinking; flag changes that increase operational risk or on-call wakeups.
5. Ensure that errors are always checked against codes or stable identifiers, never error messages.

### Fail-fast error handling (strict)

When reviewing added or modified error handling, default to fail-fast behavior.

1. Evaluate every new or changed `try/catch`: identify what can fail and why local handling is correct at that exact layer.
2. Prefer propagation over local recovery. If the current scope cannot fully recover while preserving correctness, rethrow (optionally with context) instead of returning fallbacks.
3. Flag catch blocks that hide failure signals (e.g. returning `null`/`[]`/`false`, swallowing JSON parse failures, logging-and-continue, or "best effort" silent recovery).
4. JSON parsing/decoding should fail loudly by default. Quiet fallback parsing is only acceptable with an explicit compatibility requirement and clear, tested behavior.
5. Boundary handlers (HTTP routes, CLI entrypoints, supervisors) may translate errors, but must not pretend success or silently degrade.
6. If a catch exists only to satisfy lint/style without real handling, treat it as a bug.
7. When uncertain, prefer crashing fast over silent degradation.

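A minimal TypeScript sketch of the contrast rules 2-4 describe; the function names are illustrative, not from any reviewed codebase:

```typescript
// Anti-pattern rule 3 flags: a JSON parse failure is swallowed and an empty
// object is returned, so the caller cannot tell "empty config" from "broken config".
function loadConfigQuietly(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch {
    return {}; // silent fallback: failure signal is hidden
  }
}

// Preferred fail-fast shape: propagate with added context instead of recovering.
function loadConfig(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch (err) {
    throw new Error(`invalid config JSON: ${(err as Error).message}`);
  }
}
```

The second form is what rule 2 asks for: the catch adds context but does not pretend success.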
### Priority levels

Tag each finding with a priority level in the title:

- [P0] - Drop everything to fix. Blocking release/operations. Only for universal issues that do not depend on assumptions about inputs.
- [P1] - Urgent. Should be addressed in the next cycle.
- [P2] - Normal. To be fixed eventually.
- [P3] - Low. Nice to have.

## Output Format

Provide your findings in a clear, structured format:

1. List each finding with its priority tag, file location, and explanation.
2. Findings must reference locations that overlap with the actual diff -- don't flag pre-existing code.
3. Keep line references as short as possible (avoid ranges over 5-10 lines; pick the most suitable subrange).
4. Provide an overall verdict: "correct" (no blocking issues) or "needs attention" (has blocking issues).
5. Ignore trivial style issues unless they obscure meaning or violate documented standards.
6. Do not generate a full PR fix -- only flag issues and optionally provide short suggestion blocks.
7. End with the required "Human Reviewer Callouts (Non-Blocking)" section and all applicable bold callouts (no yes/no).

Output all findings the author would fix if they knew about them. If there are no qualifying findings, explicitly state the code looks good. Don't stop at the first finding - list every qualifying issue. Then append the required non-blocking callouts section.

### Required Human Reviewer Callouts (Non-Blocking)

After findings/verdict, you MUST append this final section:

## Human Reviewer Callouts (Non-Blocking)

Include only applicable callouts (no yes/no lines):

- **This change adds a database migration:** <files/details>
- **This change introduces a new dependency:** <package(s)/details>
- **This change changes a dependency (or the lockfile):** <files/package(s)/details>
- **This change modifies auth/permission behavior:** <what changed and where>
- **This change introduces backwards-incompatible public schema/API/contract changes:** <what changed and where>
- **This change includes irreversible or destructive operations:** <operation and scope>

Rules for this section:

1. These are informational callouts for the human reviewer, not fix items.
2. Do not include them in Findings unless there is an independent defect.
3. These callouts alone must not change the verdict.
4. Only include callouts that apply to the reviewed change.
5. Keep each emitted callout bold, exactly as written.
6. If none apply, write "- (none)".
@@ -1,12 +0,0 @@
{
  "id": "95b075f0",
  "title": "Fix Wipr 2 mas installation failure in nixos-config",
  "tags": [
    "bugfix",
    "mas",
    "nix-darwin"
  ],
  "status": "in_progress",
  "created_at": "2026-03-29T18:55:14.812Z",
  "assigned_to_session": "8318f7d4-ccd1-4467-b7c9-fb05e53e4a1d"
}
242
flake.lock
generated
@@ -114,11 +114,11 @@
       ]
     },
     "locked": {
-      "lastModified": 1773000227,
-      "narHash": "sha256-zm3ftUQw0MPumYi91HovoGhgyZBlM4o3Zy0LhPNwzXE=",
+      "lastModified": 1775023938,
+      "narHash": "sha256-0/aPuEXIIaehfP/t9icDJUTCmAu13dfS+RNKWdMV5P0=",
       "owner": "LnL7",
       "repo": "nix-darwin",
-      "rev": "da529ac9e46f25ed5616fd634079a5f3c579135f",
+      "rev": "5176e2f4b45de02f1c90133854634a6c675ef41b",
       "type": "github"
     },
     "original": {
@@ -130,11 +130,11 @@
    },
    "den": {
      "locked": {
-        "lastModified": 1774843778,
-        "narHash": "sha256-o8mzc+4fzC1wdx16E6+Xon+IIjysu9jtaVDExs7h9/Y=",
+        "lastModified": 1775034229,
+        "narHash": "sha256-BZPqamTWnWdKA+tSjt5y57EDYZnSRQYNZWQNFtqn9rw=",
        "owner": "vic",
        "repo": "den",
-        "rev": "a28272a6c108e8be03da7f720c8e33263f981810",
+        "rev": "88533ec7ac8ddda4a59243387de4b9d24d3932ae",
        "type": "github"
      },
      "original": {
@@ -191,11 +191,11 @@
        "rust-analyzer-src": "rust-analyzer-src"
      },
      "locked": {
-        "lastModified": 1774857716,
-        "narHash": "sha256-z05BKQ6F9/6H2/ecIYEXuq54JCUEiOpdYXTQIijB/wM=",
+        "lastModified": 1775029908,
+        "narHash": "sha256-QuPn+EN/097aBLeSqbQ7vOwc5TSOb68bAxg1+mknfmw=",
        "owner": "nix-community",
        "repo": "fenix",
-        "rev": "9ad9c53e902485e006c07ae54a7dd4ad55a8c4d8",
+        "rev": "380f1969f440e683333af5746caac76811b4a1a8",
        "type": "github"
      },
      "original": {
@@ -282,11 +282,11 @@
    },
    "flake-file": {
      "locked": {
-        "lastModified": 1774666175,
-        "narHash": "sha256-WaZxvtOvVNikiNTen2Emhds2RvzFCWIb7KU9C0eWrNA=",
+        "lastModified": 1774886516,
+        "narHash": "sha256-w2LoQVM6DXrSdGUZBZqa1nYkMzHoB0t82DrptzZKhTs=",
        "owner": "vic",
        "repo": "flake-file",
-        "rev": "953d01f3ae5ba50869c5e1248062198f73e971bf",
+        "rev": "3daadf37de2bb85b0ff34e2a7ab0d71e077c2b9e",
        "type": "github"
      },
      "original": {
@@ -414,24 +414,6 @@
        "type": "github"
      }
    },
-    "flake-utils_3": {
-      "inputs": {
-        "systems": "systems_6"
-      },
-      "locked": {
-        "lastModified": 1731533236,
-        "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
-        "owner": "numtide",
-        "repo": "flake-utils",
-        "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
-        "type": "github"
-      },
-      "original": {
-        "owner": "numtide",
-        "repo": "flake-utils",
-        "type": "github"
-      }
-    },
    "himalaya": {
      "inputs": {
        "fenix": "fenix_2",
@@ -459,11 +441,11 @@
        ]
      },
      "locked": {
-        "lastModified": 1774738535,
-        "narHash": "sha256-2jfBEZUC67IlnxO5KItFCAd7Oc+1TvyV/jQlR+2ykGQ=",
+        "lastModified": 1774991950,
+        "narHash": "sha256-kScKj3qJDIWuN9/6PMmgy5esrTUkYinrO5VvILik/zw=",
        "owner": "nix-community",
        "repo": "home-manager",
-        "rev": "769e07ef8f4cf7b1ec3b96ef015abec9bc6b1e2a",
+        "rev": "f2d3e04e278422c7379e067e323734f3e8c585a7",
        "type": "github"
      },
      "original": {
@@ -475,11 +457,11 @@
    "homebrew-cask": {
      "flake": false,
      "locked": {
-        "lastModified": 1774861507,
-        "narHash": "sha256-f/VujsOHeKdEjh1370RZcy4+pwcX6L7V9CCLv2S2eWA=",
+        "lastModified": 1775034103,
+        "narHash": "sha256-poo46muSZsDLcnN8wY/30YeLAdRCxIwzr2s1Z12aC28=",
        "owner": "homebrew",
        "repo": "homebrew-cask",
-        "rev": "47817d96ba845ccedf309ed3f1e820076cecd0ba",
+        "rev": "0285f9dcb1dfaacde1fb6218ebe92540d9a3762d",
        "type": "github"
      },
      "original": {
@@ -491,11 +473,11 @@
    "homebrew-core": {
      "flake": false,
      "locked": {
-        "lastModified": 1774864828,
-        "narHash": "sha256-rc488bmmFCj3C+DG3yjtsd62ldwPwXzMHvXMPET0q4s=",
+        "lastModified": 1775034425,
+        "narHash": "sha256-nTdPP63yUkmUsx/ksOvfRs6MjXztPh6GEv6FQU5IFGA=",
        "owner": "homebrew",
        "repo": "homebrew-core",
-        "rev": "74d2c1974b3d7bcb2a5445e672e5fedbe3b73b4c",
+        "rev": "da66ad06774537e48644d117e6300ad9c2db25a0",
        "type": "github"
      },
      "original": {
@@ -611,11 +593,11 @@
        "treefmt-nix": "treefmt-nix"
      },
      "locked": {
-        "lastModified": 1774841075,
-        "narHash": "sha256-+VuPFeU1RxcaAYRS7bgxxqCTYw43Eq2pm8sWJXPbfPU=",
+        "lastModified": 1775013753,
+        "narHash": "sha256-uIEYD2rwgV9EFO5x0SQ34Yj50r/4Abj28OibW404eCw=",
        "owner": "numtide",
        "repo": "llm-agents.nix",
-        "rev": "0e02e0e660bacd064c47e61a01cd59038792d0f3",
+        "rev": "5a192c61b052a7713ea8eb5490a64087a996afa7",
        "type": "github"
      },
      "original": {
@@ -655,11 +637,11 @@
        ]
      },
      "locked": {
-        "lastModified": 1774829101,
-        "narHash": "sha256-iOJHT8hP9IurF6gnvyEL6NVeMmDKvlCxwuKtPhXAt00=",
+        "lastModified": 1774915815,
+        "narHash": "sha256-LocQzkSjVS4G0AKMBiEIVdBKCNTMZXQFjQMWFId4Jpg=",
        "owner": "nix-community",
        "repo": "neovim-nightly-overlay",
-        "rev": "a49f9d17bcaa684b81fc4322fbcbfc3ba501d40e",
+        "rev": "9001416dc5d0ca24c8e4b5a44bfe7cd6fbeb1dd1",
        "type": "github"
      },
      "original": {
@@ -671,11 +653,11 @@
    "neovim-src": {
      "flake": false,
      "locked": {
-        "lastModified": 1774827424,
-        "narHash": "sha256-Jd05KifFViRnFmSc8Wi58VQXDfaKvwmly1e9olXlefw=",
+        "lastModified": 1774915197,
+        "narHash": "sha256-yor+eo8CVi7wBp7CjAMQnVoK+m197gsl7MvUzaqicns=",
        "owner": "neovim",
        "repo": "neovim",
-        "rev": "ed822a085db0b6f4c682707be2c35a7acdca57b2",
+        "rev": "dbc4800dda2b0dc3290dc79955f857256e0694e2",
        "type": "github"
      },
      "original": {
@@ -752,11 +734,11 @@
    },
    "nixpkgs_4": {
      "locked": {
-        "lastModified": 1774610258,
-        "narHash": "sha256-HaThtroVD9wRdx7KQk0B75JmFcXlMUoEdDFNOMOlsOs=",
+        "lastModified": 1774855581,
+        "narHash": "sha256-YkreHeMgTCYvJ5fESV0YyqQK49bHGe2B51tH6claUh4=",
        "owner": "NixOS",
        "repo": "nixpkgs",
-        "rev": "832efc09b4caf6b4569fbf9dc01bec3082a00611",
+        "rev": "15c6719d8c604779cf59e03c245ea61d3d7ab69b",
        "type": "github"
      },
      "original": {
@@ -768,11 +750,11 @@
    },
    "nixpkgs_5": {
      "locked": {
-        "lastModified": 1774864832,
-        "narHash": "sha256-l6iagD76ZxQjF0XgvdHUJODxV+pu2OKq+R5co3ntxek=",
+        "lastModified": 1775036421,
+        "narHash": "sha256-kOAGXAqmmCmXpTJ0ZC/v0pUlyTFgwj31hEfJbcf0l70=",
        "owner": "nixos",
        "repo": "nixpkgs",
-        "rev": "c7a12b31049276dcccb94571d77af89c408e1a29",
+        "rev": "f16ce1b999cc00aa1222578a740e74b5fbfa0284",
        "type": "github"
      },
      "original": {
@@ -799,22 +781,6 @@
      }
    },
-    "nixpkgs_7": {
-      "locked": {
-        "lastModified": 1769188852,
-        "narHash": "sha256-aBAGyMum27K7cP5OR7BMioJOF3icquJMZDDgk6ZEg1A=",
-        "owner": "NixOS",
-        "repo": "nixpkgs",
-        "rev": "a1bab9e494f5f4939442a57a58d0449a109593fe",
-        "type": "github"
-      },
-      "original": {
-        "owner": "NixOS",
-        "ref": "nixpkgs-unstable",
-        "repo": "nixpkgs",
-        "type": "github"
-      }
-    },
    "nixpkgs_8": {
      "locked": {
        "lastModified": 1765934234,
        "narHash": "sha256-pJjWUzNnjbIAMIc5gRFUuKCDQ9S1cuh3b2hKgA7Mc4A=",
@@ -850,86 +816,6 @@
        "type": "github"
      }
    },
-    "pi-agent-stuff": {
-      "flake": false,
-      "locked": {
-        "lastModified": 1774816987,
-        "narHash": "sha256-EBcMuXbKj0yRutgBRKrSsphq//JtRDtioyzLuKMfeiE=",
-        "owner": "mitsuhiko",
-        "repo": "agent-stuff",
-        "rev": "d8d6a20edabc5f151ace1342dcd384aa5169b6fd",
-        "type": "github"
-      },
-      "original": {
-        "owner": "mitsuhiko",
-        "repo": "agent-stuff",
-        "type": "github"
-      }
-    },
-    "pi-elixir": {
-      "flake": false,
-      "locked": {
-        "lastModified": 1772900407,
-        "narHash": "sha256-QoCPVdN5CYGe5288cJQmB10ds/UOucHIyG9z9E/4hsw=",
-        "owner": "dannote",
-        "repo": "pi-elixir",
-        "rev": "3b8f667beb696ce6ed456e762bfcf61e7326f5c4",
-        "type": "github"
-      },
-      "original": {
-        "owner": "dannote",
-        "repo": "pi-elixir",
-        "type": "github"
-      }
-    },
-    "pi-harness": {
-      "flake": false,
-      "locked": {
-        "lastModified": 1774794426,
-        "narHash": "sha256-pm1pfWAzDgRbgkdZwMMUOrlTXdcyRu/bUMrFeToPNEA=",
-        "owner": "aliou",
-        "repo": "pi-harness",
-        "rev": "5f4836a60ae6f562fe1f0b69c2ab5a8edc1bdc0b",
-        "type": "github"
-      },
-      "original": {
-        "owner": "aliou",
-        "repo": "pi-harness",
-        "type": "github"
-      }
-    },
-    "pi-mcp-adapter": {
-      "flake": false,
-      "locked": {
-        "lastModified": 1774247177,
-        "narHash": "sha256-HTexm+b+UUbJD4qwIqlNcVPhF/G7/MtBtXa0AdeztbY=",
-        "owner": "nicobailon",
-        "repo": "pi-mcp-adapter",
-        "rev": "c0919a29d263c2058c302641ddb04769c21be262",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nicobailon",
-        "repo": "pi-mcp-adapter",
-        "type": "github"
-      }
-    },
-    "pi-rose-pine": {
-      "flake": false,
-      "locked": {
-        "lastModified": 1770936151,
-        "narHash": "sha256-6TzuWJPAn8zz+lUjZ3slFCNdPVd/Z2C+WoXFsLopk1g=",
-        "owner": "zenobi-us",
-        "repo": "pi-rose-pine",
-        "rev": "9b342f6e16d6b28c00c2f888ba2f050273981bdb",
-        "type": "github"
-      },
-      "original": {
-        "owner": "zenobi-us",
-        "repo": "pi-rose-pine",
-        "type": "github"
-      }
-    },
    "pimalaya": {
      "flake": false,
      "locked": {
@@ -946,25 +832,6 @@
        "type": "github"
      }
    },
-    "qmd": {
-      "inputs": {
-        "flake-utils": "flake-utils_2",
-        "nixpkgs": "nixpkgs_7"
-      },
-      "locked": {
-        "lastModified": 1774742449,
-        "narHash": "sha256-x6+O8KX2LVqL49MLZsvyENITC5pY+IiTrI59OSwxurU=",
-        "owner": "tobi",
-        "repo": "qmd",
-        "rev": "1fb2e2819e4024045203b4ea550ec793683baf2b",
-        "type": "github"
-      },
-      "original": {
-        "owner": "tobi",
-        "repo": "qmd",
-        "type": "github"
-      }
-    },
    "root": {
      "inputs": {
        "code-review-nvim": "code-review-nvim",
@@ -994,12 +861,6 @@
          "nixpkgs"
        ],
        "nixvim": "nixvim",
-        "pi-agent-stuff": "pi-agent-stuff",
-        "pi-elixir": "pi-elixir",
-        "pi-harness": "pi-harness",
-        "pi-mcp-adapter": "pi-mcp-adapter",
-        "pi-rose-pine": "pi-rose-pine",
-        "qmd": "qmd",
        "sops-nix": "sops-nix",
        "zjstatus": "zjstatus"
      }
@@ -1007,11 +868,11 @@
    "rust-analyzer-src": {
      "flake": false,
      "locked": {
-        "lastModified": 1774787924,
-        "narHash": "sha256-Cbpmf0+1pqi/zbpub2vkp5lTPx3QdVtDkkagDwQzHHg=",
+        "lastModified": 1774948198,
+        "narHash": "sha256-oVPo0/3CXM/5uFKu1ZwP7osSV2tiQIFU09Y3UzNbm7g=",
        "owner": "rust-lang",
        "repo": "rust-analyzer",
-        "rev": "f1297b21119565c626320c1ffc248965fffb2527",
+        "rev": "63b3eff38ef1c216480147dd53b0e4365d55f269",
        "type": "github"
      },
      "original": {
@@ -1083,11 +944,11 @@
        ]
      },
      "locked": {
-        "lastModified": 1774760784,
-        "narHash": "sha256-D+tgywBHldTc0klWCIC49+6Zlp57Y4GGwxP1CqfxZrY=",
+        "lastModified": 1774910634,
+        "narHash": "sha256-B+rZDPyktGEjOMt8PcHKYmgmKoF+GaNAFJhguktXAo0=",
        "owner": "Mic92",
        "repo": "sops-nix",
-        "rev": "8adb84861fe70e131d44e1e33c426a51e2e0bfa5",
+        "rev": "19bf3d8678fbbfbc173beaa0b5b37d37938db301",
        "type": "github"
      },
      "original": {
@@ -1171,21 +1032,6 @@
        "type": "github"
      }
    },
-    "systems_6": {
-      "locked": {
-        "lastModified": 1681028828,
-        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-        "owner": "nix-systems",
-        "repo": "default",
-        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nix-systems",
-        "repo": "default",
-        "type": "github"
-      }
-    },
    "treefmt-nix": {
      "inputs": {
        "nixpkgs": [
@@ -1228,8 +1074,8 @@
    "zjstatus": {
      "inputs": {
        "crane": "crane",
-        "flake-utils": "flake-utils_3",
-        "nixpkgs": "nixpkgs_8",
+        "flake-utils": "flake-utils_2",
+        "nixpkgs": "nixpkgs_7",
        "rust-overlay": "rust-overlay"
      },
      "locked": {
21
flake.nix
@@ -68,27 +68,6 @@
    nixpkgs.url = "github:nixos/nixpkgs/master";
    nixpkgs-lib.follows = "nixpkgs";
    nixvim.url = "github:nix-community/nixvim";
-    pi-agent-stuff = {
-      url = "github:mitsuhiko/agent-stuff";
-      flake = false;
-    };
-    pi-elixir = {
-      url = "github:dannote/pi-elixir";
-      flake = false;
-    };
-    pi-harness = {
-      url = "github:aliou/pi-harness";
-      flake = false;
-    };
-    pi-mcp-adapter = {
-      url = "github:nicobailon/pi-mcp-adapter";
-      flake = false;
-    };
-    pi-rose-pine = {
-      url = "github:zenobi-us/pi-rose-pine";
-      flake = false;
-    };
-    qmd.url = "github:tobi/qmd";
    sops-nix = {
      url = "github:Mic92/sops-nix";
      inputs.nixpkgs.follows = "nixpkgs";
@@ -1,190 +0,0 @@
/**
 * No Git Extension
 *
 * Blocks direct git invocations and tells the LLM to use jj (Jujutsu) instead.
 * Mentions of the word "git" in search patterns, strings, comments, etc. are allowed.
 */

import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { isToolCallEventType } from "@mariozechner/pi-coding-agent";

type ShellToken =
  | { type: "word"; value: string }
  | { type: "operator"; value: string };

const COMMAND_PREFIXES = new Set(["env", "command", "builtin", "time", "sudo", "nohup", "nice"]);
const SHELL_KEYWORDS = new Set(["if", "then", "elif", "else", "do", "while", "until", "case", "in"]);
const SHELL_INTERPRETERS = new Set(["bash", "sh", "zsh", "fish", "nu"]);

function isAssignmentWord(value: string): boolean {
  return /^[A-Za-z_][A-Za-z0-9_]*=.*/.test(value);
}

function tokenizeShell(command: string): ShellToken[] {
  const tokens: ShellToken[] = [];
  let current = "";
  let quote: "'" | '"' | null = null;

  const pushWord = () => {
    if (!current) return;
    tokens.push({ type: "word", value: current });
    current = "";
  };

  for (let i = 0; i < command.length; i++) {
    const char = command[i];

    if (quote) {
      if (quote === "'") {
        if (char === "'") {
          quote = null;
        } else {
          current += char;
        }
        continue;
      }

      if (char === '"') {
        quote = null;
        continue;
      }

      if (char === "\\") {
        if (i + 1 < command.length) {
          current += command[i + 1];
          i += 1;
        }
        continue;
      }

      current += char;
      continue;
    }

    if (char === "'" || char === '"') {
      quote = char;
      continue;
    }

    if (char === "\\") {
      if (i + 1 < command.length) {
        current += command[i + 1];
        i += 1;
      }
      continue;
    }

    if (/\s/.test(char)) {
      pushWord();
      if (char === "\n") {
        tokens.push({ type: "operator", value: "\n" });
      }
      continue;
    }

    const twoCharOperator = command.slice(i, i + 2);
    if (twoCharOperator === "&&" || twoCharOperator === "||") {
      pushWord();
      tokens.push({ type: "operator", value: twoCharOperator });
      i += 1;
      continue;
    }

    if (char === ";" || char === "|" || char === "(" || char === ")") {
      pushWord();
      tokens.push({ type: "operator", value: char });
      continue;
    }

    current += char;
  }

  pushWord();
  return tokens;
}

function findCommandWord(words: string[]): { word?: string; index: number } {
  for (let i = 0; i < words.length; i++) {
    const word = words[i];
    if (SHELL_KEYWORDS.has(word)) {
      continue;
    }
    if (isAssignmentWord(word)) {
      continue;
    }
    if (COMMAND_PREFIXES.has(word)) {
      continue;
    }

    return { word, index: i };
  }

  return { index: words.length };
}

function getInlineShellCommand(words: string[], commandIndex: number): string | null {
  for (let i = commandIndex + 1; i < words.length; i++) {
    const word = words[i];
    if (/^(?:-[A-Za-z]*c[A-Za-z]*|--command)$/.test(word)) {
      return words[i + 1] ?? null;
    }
  }

  return null;
}

function segmentContainsBlockedGit(words: string[]): boolean {
  const { word, index } = findCommandWord(words);
  if (!word) {
    return false;
  }

  if (word === "git") {
    return true;
  }

  if (word === "jj") {
    return false;
  }

  if (SHELL_INTERPRETERS.has(word)) {
    const inlineCommand = getInlineShellCommand(words, index);
    return inlineCommand ? containsBlockedGitInvocation(inlineCommand) : false;
  }

  return false;
}

function containsBlockedGitInvocation(command: string): boolean {
  const tokens = tokenizeShell(command);
  let words: string[] = [];

  for (const token of tokens) {
    if (token.type === "operator") {
      if (segmentContainsBlockedGit(words)) {
        return true;
      }
      words = [];
      continue;
    }

    words.push(token.value);
  }

  return segmentContainsBlockedGit(words);
}

export default function (pi: ExtensionAPI) {
  pi.on("tool_call", async (event, _ctx) => {
    if (!isToolCallEventType("bash", event)) return;

    const command = event.input.command.trim();

    if (containsBlockedGitInvocation(command)) {
      return {
        block: true,
        reason: "git is not used in this project. Use jj (Jujutsu) instead.",
      };
    }
  });
}
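The heart of the removed extension is its command-word resolution: assignment words (`VAR=value`) and prefix commands (`env`, `sudo`, ...) are skipped before deciding what is actually being run. A condensed, self-contained re-implementation of just that step (for illustration only, not the full tokenizer above) shows why `env GIT_PAGER=cat git log` is still blocked while `rg git` is not:

```typescript
// Condensed sketch of the command-word logic: skip VAR=value assignments and
// common prefix commands, then check whether the first real command word is `git`.
const PREFIXES = new Set(["env", "command", "builtin", "time", "sudo", "nohup", "nice"]);

function firstCommandWord(words: string[]): string | undefined {
  return words.find(
    (w) => !PREFIXES.has(w) && !/^[A-Za-z_][A-Za-z0-9_]*=.*/.test(w),
  );
}

function isBlocked(commandLine: string): boolean {
  // Naive whitespace split stands in for the real tokenizer above.
  return firstCommandWord(commandLine.trim().split(/\s+/)) === "git";
}
```

`git` appearing as an argument (e.g. a search pattern) never lands in command position, which is exactly the behavior the file comment promises.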
@@ -1,28 +0,0 @@
/**
 * No Scripting Extension
 *
 * Blocks python, perl, ruby, php, lua, node -e, and inline bash/sh scripts.
 * Tells the LLM to use `nu -c` instead.
 */

import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { isToolCallEventType } from "@mariozechner/pi-coding-agent";

const SCRIPTING_PATTERN =
  /(?:^|[;&|]\s*|&&\s*|\|\|\s*|\$\(\s*|`\s*)(?:python[23]?|perl|ruby|php|lua|node\s+-e|bash\s+-c|sh\s+-c)\s/;

export default function (pi: ExtensionAPI) {
  pi.on("tool_call", async (event, _ctx) => {
    if (!isToolCallEventType("bash", event)) return;

    const command = event.input.command.trim();

    if (SCRIPTING_PATTERN.test(command)) {
      return {
        block: true,
        reason:
          "Do not use python, perl, ruby, php, lua, node -e, or inline bash/sh for scripting. Use `nu -c` instead.",
      };
    }
  });
}
@@ -1,687 +0,0 @@
import { readFile, writeFile, mkdir, readdir } from "node:fs/promises";
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";
import * as crypto from "node:crypto";
import { Box, Text } from "@mariozechner/pi-tui";
import type { ExtensionAPI, ExtensionContext, ExtensionCommandContext, Model } from "@mariozechner/pi-coding-agent";
import {
	createAgentSession,
	DefaultResourceLoader,
	getAgentDir,
	SessionManager,
	SettingsManager,
} from "@mariozechner/pi-coding-agent";

interface IngestManifest {
	version: number;
	job_id: string;
	note_id: string;
	operation: string;
	requested_at: string;
	title: string;
	source_relpath: string;
	source_path: string;
	input_path: string;
	archive_path: string;
	output_path: string;
	transcript_path: string;
	result_path: string;
	session_dir: string;
	source_hash: string;
	last_generated_output_hash?: string | null;
	force_overwrite_generated?: boolean;
	source_transport?: string;
}

interface IngestResult {
	success: boolean;
	job_id: string;
	note_id: string;
	archive_path: string;
	source_hash: string;
	session_dir: string;
	output_path?: string;
	output_hash?: string;
	conflict_path?: string;
	write_mode?: "create" | "overwrite" | "force-overwrite" | "conflict";
	updated_main_output?: boolean;
	transcript_path?: string;
	error?: string;
}

interface FrontmatterInfo {
	values: Record<string, string>;
	body: string;
}

interface RenderedPage {
	path: string;
	image: {
		type: "image";
		source: {
			type: "base64";
			mediaType: string;
			data: string;
		};
	};
}

const TRANSCRIBE_SKILL = "notability-transcribe";
const NORMALIZE_SKILL = "notability-normalize";
const STATUS_TYPE = "notability-status";
const DEFAULT_TRANSCRIBE_THINKING = "low" as const;
const DEFAULT_NORMALIZE_THINKING = "off" as const;
const PREFERRED_VISION_MODEL: [string, string] = ["openai-codex", "gpt-5.4"];

function getNotesRoot(): string {
	return process.env.NOTABILITY_NOTES_DIR ?? path.join(os.homedir(), "Notes");
}

function getDataRoot(): string {
	return process.env.NOTABILITY_DATA_ROOT ?? path.join(os.homedir(), ".local", "share", "notability-ingest");
}

function getRenderRoot(): string {
	return process.env.NOTABILITY_RENDER_ROOT ?? path.join(getDataRoot(), "rendered-pages");
}

function getNotabilityScriptDir(): string {
	return path.join(getAgentDir(), "notability");
}

function getSkillPath(skillName: string): string {
	return path.join(getAgentDir(), "skills", skillName, "SKILL.md");
}

function stripFrontmatterBlock(text: string): string {
	const trimmed = text.trim();
	if (!trimmed.startsWith("---\n")) return trimmed;
	const end = trimmed.indexOf("\n---\n", 4);
	if (end === -1) return trimmed;
	return trimmed.slice(end + 5).trim();
}

function stripCodeFence(text: string): string {
	const trimmed = text.trim();
	const match = trimmed.match(/^```(?:markdown|md)?\n([\s\S]*?)\n```$/i);
	return match ? match[1].trim() : trimmed;
}

function parseFrontmatter(text: string): FrontmatterInfo {
	const trimmed = stripCodeFence(text);
	if (!trimmed.startsWith("---\n")) {
		return { values: {}, body: trimmed };
	}

	const end = trimmed.indexOf("\n---\n", 4);
	if (end === -1) {
		return { values: {}, body: trimmed };
	}

	const block = trimmed.slice(4, end);
	const body = trimmed.slice(end + 5).trim();
	const values: Record<string, string> = {};
	for (const line of block.split("\n")) {
		const idx = line.indexOf(":");
		if (idx === -1) continue;
		const key = line.slice(0, idx).trim();
		const value = line.slice(idx + 1).trim();
		values[key] = value;
	}
	return { values, body };
}

function quoteYaml(value: string): string {
	return JSON.stringify(value);
}

function sha256(content: string | Buffer): string {
	return crypto.createHash("sha256").update(content).digest("hex");
}

async function sha256File(filePath: string): Promise<string> {
	const buffer = await readFile(filePath);
	return sha256(buffer);
}

function extractTitle(normalized: string, fallbackTitle: string): string {
	const parsed = parseFrontmatter(normalized);
	const frontmatterTitle = parsed.values.title?.replace(/^['"]|['"]$/g, "").trim();
	if (frontmatterTitle) return frontmatterTitle;
	const heading = parsed.body
		.split("\n")
		.map((line) => line.trim())
		.find((line) => line.startsWith("# "));
	if (heading) return heading.replace(/^#\s+/, "").trim();
	return fallbackTitle;
}

function sourceFormat(filePath: string): string {
	const extension = path.extname(filePath).toLowerCase();
	if (extension === ".pdf") return "pdf";
	if (extension === ".png") return "png";
	return extension.replace(/^\./, "") || "unknown";
}

function buildMarkdown(manifest: IngestManifest, normalized: string): string {
	const parsed = parseFrontmatter(normalized);
	const title = extractTitle(normalized, manifest.title);
	const now = new Date().toISOString().replace(/\.\d{3}Z$/, "Z");
	const created = manifest.requested_at.slice(0, 10);
	const body = parsed.body.trim();
	const outputBody = body.length > 0 ? body : `# ${title}\n`;

	return [
		"---",
		`title: ${quoteYaml(title)}`,
		`created: ${quoteYaml(created)}`,
		`updated: ${quoteYaml(now.slice(0, 10))}`,
		`source: ${quoteYaml("notability")}`,
		`source_transport: ${quoteYaml(manifest.source_transport ?? "webdav")}`,
		`source_relpath: ${quoteYaml(manifest.source_relpath)}`,
		`note_id: ${quoteYaml(manifest.note_id)}`,
		`managed_by: ${quoteYaml("notability-ingest")}`,
		`source_file: ${quoteYaml(manifest.archive_path)}`,
		`source_file_hash: ${quoteYaml(`sha256:${manifest.source_hash}`)}`,
		`source_format: ${quoteYaml(sourceFormat(manifest.archive_path))}`,
		`status: ${quoteYaml("active")}`,
		"tags:",
		"  - handwritten",
		"  - notability",
		"---",
		"",
		outputBody,
		"",
	].join("\n");
}

function conflictPathFor(outputPath: string): string {
	const parsed = path.parse(outputPath);
	const stamp = new Date().toISOString().replace(/[:]/g, "-").replace(/\.\d{3}Z$/, "Z");
	return path.join(parsed.dir, `${parsed.name}.conflict-${stamp}${parsed.ext}`);
}

async function ensureParent(filePath: string): Promise<void> {
	await mkdir(path.dirname(filePath), { recursive: true });
}

async function loadSkillText(skillName: string): Promise<string> {
	const raw = await readFile(getSkillPath(skillName), "utf8");
	return stripFrontmatterBlock(raw).trim();
}

function normalizePathArg(arg: string): string {
	return arg.startsWith("@") ? arg.slice(1) : arg;
}

function resolveModel(ctx: ExtensionContext, requireImage = false): Model {
	const available = ctx.modelRegistry.getAvailable();
	const matching = requireImage ? available.filter((model) => model.input.includes("image")) : available;

	if (matching.length === 0) {
		throw new Error(
			requireImage
				? "No image-capable model configured for pi note ingestion"
				: "No available model configured for pi note ingestion",
		);
	}

	if (ctx.model && (!requireImage || ctx.model.input.includes("image"))) {
		if (!requireImage) return ctx.model;
	}

	if (requireImage) {
		const [provider, id] = PREFERRED_VISION_MODEL;
		const preferred = matching.find((model) => model.provider === provider && model.id === id);
		if (preferred) return preferred;

		const subscriptionModel = matching.find(
			(model) => model.provider !== "opencode" && model.provider !== "opencode-go",
		);
		if (subscriptionModel) return subscriptionModel;
	}

	if (ctx.model && (!requireImage || ctx.model.input.includes("image"))) {
		return ctx.model;
	}

	return matching[0];
}

async function runSkillPrompt(
	ctx: ExtensionContext,
	systemPrompt: string,
	prompt: string,
	images: RenderedPage[] = [],
	thinkingLevel: "off" | "low" = "off",
): Promise<string> {
	if (images.length > 0) {
		const model = resolveModel(ctx, true);
		const { execFile } = await import("node:child_process");
		const promptPath = path.join(os.tmpdir(), `pi-note-ingest-${crypto.randomUUID()}.md`);
		await writeFile(promptPath, `${prompt}\n`);
		const args = [
			"45s",
			"pi",
			"--model",
			`${model.provider}/${model.id}`,
			"--thinking",
			thinkingLevel,
			"--no-tools",
			"--no-session",
			"-p",
			...images.map((page) => `@${page.path}`),
			`@${promptPath}`,
		];

		try {
			const output = await new Promise<string>((resolve, reject) => {
				execFile("timeout", args, { cwd: ctx.cwd, env: process.env, maxBuffer: 10 * 1024 * 1024 }, (error, stdout, stderr) => {
					if ((stdout ?? "").trim().length > 0) {
						resolve(stdout);
						return;
					}
					if (error) {
						reject(new Error(stderr || stdout || error.message));
						return;
					}
					resolve(stdout);
				});
			});

			return stripCodeFence(output).trim();
		} finally {
			try {
				fs.unlinkSync(promptPath);
			} catch {
				// Ignore temp file cleanup failures.
			}
		}
	}

	const agentDir = getAgentDir();
	const settingsManager = SettingsManager.create(ctx.cwd, agentDir);
	const resourceLoader = new DefaultResourceLoader({
		cwd: ctx.cwd,
		agentDir,
		settingsManager,
		noExtensions: true,
		noPromptTemplates: true,
		noThemes: true,
		noSkills: true,
		systemPromptOverride: () => systemPrompt,
		appendSystemPromptOverride: () => [],
		agentsFilesOverride: () => ({ agentsFiles: [] }),
	});
	await resourceLoader.reload();

	const { session } = await createAgentSession({
		model: resolveModel(ctx, images.length > 0),
		thinkingLevel,
		sessionManager: SessionManager.inMemory(),
		modelRegistry: ctx.modelRegistry,
		resourceLoader,
		tools: [],
	});

	let output = "";
	const unsubscribe = session.subscribe((event) => {
		if (event.type === "message_update" && event.assistantMessageEvent.type === "text_delta") {
			output += event.assistantMessageEvent.delta;
		}
	});

	try {
		await session.prompt(prompt, {
			images: images.map((page) => page.image),
		});
	} finally {
		unsubscribe();
	}

	if (!output.trim()) {
		const assistantMessages = session.messages.filter((message) => message.role === "assistant");
		const lastAssistant = assistantMessages.at(-1);
		if (lastAssistant && Array.isArray(lastAssistant.content)) {
			output = lastAssistant.content
				.filter((part) => part.type === "text")
				.map((part) => part.text)
				.join("");
		}
	}

	session.dispose();
	return stripCodeFence(output).trim();
}

async function renderPdfPages(pdfPath: string, jobId: string): Promise<RenderedPage[]> {
	const renderDir = path.join(getRenderRoot(), jobId);
	await mkdir(renderDir, { recursive: true });
	const prefix = path.join(renderDir, "page");
	const args = ["-png", "-r", "200", pdfPath, prefix];
	const { execFile } = await import("node:child_process");
	await new Promise<void>((resolve, reject) => {
		execFile("pdftoppm", args, (error) => {
			if (error) reject(error);
			else resolve();
		});
	});

	const entries = await readdir(renderDir);
	const pngs = entries
		.filter((entry) => entry.endsWith(".png"))
		.sort((left, right) => left.localeCompare(right, undefined, { numeric: true }));
	if (pngs.length === 0) {
		throw new Error(`No rendered pages produced for ${pdfPath}`);
	}

	const pages: RenderedPage[] = [];
	for (const entry of pngs) {
		const pagePath = path.join(renderDir, entry);
		const buffer = await readFile(pagePath);
		pages.push({
			path: pagePath,
			image: {
				type: "image",
				source: {
					type: "base64",
					mediaType: "image/png",
					data: buffer.toString("base64"),
				},
			},
		});
	}
	return pages;
}

async function loadImagePage(imagePath: string): Promise<RenderedPage> {
	const extension = path.extname(imagePath).toLowerCase();
	const mediaType = extension === ".png" ? "image/png" : undefined;
	if (!mediaType) {
		throw new Error(`Unsupported image input format for ${imagePath}`);
	}

	const buffer = await readFile(imagePath);
	return {
		path: imagePath,
		image: {
			type: "image",
			source: {
				type: "base64",
				mediaType,
				data: buffer.toString("base64"),
			},
		},
	};
}

async function renderInputPages(inputPath: string, jobId: string): Promise<RenderedPage[]> {
	const extension = path.extname(inputPath).toLowerCase();
	if (extension === ".pdf") {
		return await renderPdfPages(inputPath, jobId);
	}
	if (extension === ".png") {
		return [await loadImagePage(inputPath)];
	}
	throw new Error(`Unsupported Notability input format: ${inputPath}`);
}

async function findManagedOutputs(noteId: string): Promise<string[]> {
	const matches: string[] = [];
	const stack = [getNotesRoot()];

	while (stack.length > 0) {
		const currentDir = stack.pop();
		if (!currentDir || !fs.existsSync(currentDir)) continue;

		const entries = await readdir(currentDir, { withFileTypes: true });
		for (const entry of entries) {
			if (entry.name.startsWith(".")) continue;
			const fullPath = path.join(currentDir, entry.name);
			if (entry.isDirectory()) {
				stack.push(fullPath);
				continue;
			}
			if (!entry.isFile() || !entry.name.endsWith(".md")) continue;

			try {
				const parsed = parseFrontmatter(await readFile(fullPath, "utf8"));
				const managedBy = parsed.values.managed_by?.replace(/^['"]|['"]$/g, "");
				const frontmatterNoteId = parsed.values.note_id?.replace(/^['"]|['"]$/g, "");
				if (managedBy === "notability-ingest" && frontmatterNoteId === noteId) {
					matches.push(fullPath);
				}
			} catch {
				// Ignore unreadable or malformed files while scanning the notebook.
			}
		}
	}

	return matches.sort();
}

async function resolveManagedOutputPath(noteId: string, configuredOutputPath: string): Promise<string> {
	if (fs.existsSync(configuredOutputPath)) {
		const parsed = parseFrontmatter(await readFile(configuredOutputPath, "utf8"));
		const managedBy = parsed.values.managed_by?.replace(/^['"]|['"]$/g, "");
		const frontmatterNoteId = parsed.values.note_id?.replace(/^['"]|['"]$/g, "");
		if (managedBy === "notability-ingest" && frontmatterNoteId === noteId) {
			return configuredOutputPath;
		}
	}

	const discovered = await findManagedOutputs(noteId);
	if (discovered.length === 0) return configuredOutputPath;
	if (discovered.length === 1) return discovered[0];

	throw new Error(
		`Multiple managed note files found for ${noteId}: ${discovered.join(", ")}`,
	);
}

async function determineWriteTarget(manifest: IngestManifest, markdown: string): Promise<{
	outputPath: string;
	writePath: string;
	writeMode: "create" | "overwrite" | "force-overwrite" | "conflict";
	updatedMainOutput: boolean;
}> {
	const outputPath = await resolveManagedOutputPath(manifest.note_id, manifest.output_path);
	if (!fs.existsSync(outputPath)) {
		return { outputPath, writePath: outputPath, writeMode: "create", updatedMainOutput: true };
	}

	const existing = await readFile(outputPath, "utf8");
	const existingHash = sha256(existing);
	const parsed = parseFrontmatter(existing);
	const isManaged = parsed.values.managed_by?.replace(/^['"]|['"]$/g, "") === "notability-ingest";
	const sameNoteId = parsed.values.note_id?.replace(/^['"]|['"]$/g, "") === manifest.note_id;

	if (manifest.last_generated_output_hash && existingHash === manifest.last_generated_output_hash) {
		return { outputPath, writePath: outputPath, writeMode: "overwrite", updatedMainOutput: true };
	}

	if (manifest.force_overwrite_generated && isManaged && sameNoteId) {
		return { outputPath, writePath: outputPath, writeMode: "force-overwrite", updatedMainOutput: true };
	}

	return {
		outputPath,
		writePath: conflictPathFor(outputPath),
		writeMode: "conflict",
		updatedMainOutput: false,
	};
}

async function writeIngestResult(resultPath: string, payload: IngestResult): Promise<void> {
	await ensureParent(resultPath);
	await writeFile(resultPath, JSON.stringify(payload, null, 2));
}

async function ingestManifest(manifestPath: string, ctx: ExtensionContext): Promise<IngestResult> {
	const manifest = JSON.parse(await readFile(manifestPath, "utf8")) as IngestManifest;
	await ensureParent(manifest.transcript_path);
	await ensureParent(manifest.result_path);
	await mkdir(manifest.session_dir, { recursive: true });

	const normalizeSkill = await loadSkillText(NORMALIZE_SKILL);
	const pages = await renderInputPages(manifest.input_path, manifest.job_id);
	const pageSummary = pages.map((page, index) => `- page ${index + 1}: ${page.path}`).join("\n");
	const transcriptPrompt = [
		"Transcribe this note into clean Markdown.",
		"Read it like a human and preserve the intended reading order and visible structure.",
		"Keep headings, lists, and paragraphs when they are visible.",
		"Do not summarize. Do not add commentary. Return Markdown only.",
		"Rendered pages:",
		pageSummary,
	].join("\n\n");
	let transcript = await runSkillPrompt(
		ctx,
		"",
		transcriptPrompt,
		pages,
		DEFAULT_TRANSCRIBE_THINKING,
	);
	if (!transcript.trim()) {
		throw new Error("Transcription skill returned empty output");
	}
	await writeFile(manifest.transcript_path, `${transcript.trim()}\n`);

	const normalizePrompt = [
		`Note ID: ${manifest.note_id}`,
		`Source path: ${manifest.source_relpath}`,
		`Preferred output path: ${manifest.output_path}`,
		"Normalize the following transcription into clean Markdown.",
		"Restore natural prose formatting and intended reading order when the transcription contains OCR or layout artifacts.",
		"If words are split across separate lines but clearly belong to the same phrase or sentence, merge them.",
		"Return only Markdown. No code fences.",
		"",
		"<transcription>",
		transcript.trim(),
		"</transcription>",
	].join("\n");
	const normalized = await runSkillPrompt(
		ctx,
		normalizeSkill,
		normalizePrompt,
		[],
		DEFAULT_NORMALIZE_THINKING,
	);
	if (!normalized.trim()) {
		throw new Error("Normalization skill returned empty output");
	}

	const markdown = buildMarkdown(manifest, normalized);
	const target = await determineWriteTarget(manifest, markdown);
	await ensureParent(target.writePath);
	await writeFile(target.writePath, markdown);

	const result: IngestResult = {
		success: true,
		job_id: manifest.job_id,
		note_id: manifest.note_id,
		archive_path: manifest.archive_path,
		source_hash: manifest.source_hash,
		session_dir: manifest.session_dir,
		output_path: target.outputPath,
		output_hash: target.updatedMainOutput ? await sha256File(target.writePath) : undefined,
		conflict_path: target.writeMode === "conflict" ? target.writePath : undefined,
		write_mode: target.writeMode,
		updated_main_output: target.updatedMainOutput,
		transcript_path: manifest.transcript_path,
	};
	await writeIngestResult(manifest.result_path, result);
	return result;
}

async function runScript(scriptName: string, args: string[]): Promise<string> {
	const { execFile } = await import("node:child_process");
	const scriptPath = path.join(getNotabilityScriptDir(), scriptName);
	return await new Promise<string>((resolve, reject) => {
		execFile("nu", [scriptPath, ...args], (error, stdout, stderr) => {
			if (error) {
				reject(new Error(stderr || stdout || error.message));
				return;
			}
			resolve(stdout.trim());
		});
	});
}

function splitArgs(input: string): string[] {
	return input
		.trim()
		.split(/\s+/)
		.filter((part) => part.length > 0);
}

function postStatus(pi: ExtensionAPI, content: string): void {
	pi.sendMessage({
		customType: STATUS_TYPE,
		content,
		display: true,
	});
}

export default function noteIngestExtension(pi: ExtensionAPI) {
	pi.registerMessageRenderer(STATUS_TYPE, (message, _options, theme) => {
		const box = new Box(1, 1, (text) => theme.bg("customMessageBg", text));
		box.addChild(new Text(message.content, 0, 0));
		return box;
	});

	pi.registerCommand("note-status", {
		description: "Show Notability ingest status",
		handler: async (args, _ctx) => {
			const output = await runScript("status.nu", splitArgs(args));
			postStatus(pi, output.length > 0 ? output : "No status output");
		},
	});

	pi.registerCommand("note-reingest", {
		description: "Enqueue a note for reingestion",
		handler: async (args, _ctx) => {
			const trimmed = args.trim();
			if (!trimmed) {
				postStatus(pi, "Usage: /note-reingest <note-id> [--latest-source|--latest-archive] [--force-overwrite-generated]");
				return;
			}
			const output = await runScript("reingest.nu", splitArgs(trimmed));
			postStatus(pi, output.length > 0 ? output : "Reingest enqueued");
		},
	});

	pi.registerCommand("note-ingest", {
		description: "Ingest a queued Notability job manifest",
		handler: async (args, ctx: ExtensionCommandContext) => {
			const manifestPath = normalizePathArg(args.trim());
			if (!manifestPath) {
				throw new Error("Usage: /note-ingest <job.json>");
			}

			let resultPath = "";
			try {
				const raw = await readFile(manifestPath, "utf8");
				const manifest = JSON.parse(raw) as IngestManifest;
				resultPath = manifest.result_path;
				const result = await ingestManifest(manifestPath, ctx);
				postStatus(pi, `Ingested ${result.note_id} (${result.write_mode})`);
			} catch (error) {
				const message = error instanceof Error ? error.message : String(error);
				if (resultPath) {
					const manifest = JSON.parse(await readFile(manifestPath, "utf8")) as IngestManifest;
					await writeIngestResult(resultPath, {
						success: false,
						job_id: manifest.job_id,
						note_id: manifest.note_id,
						archive_path: manifest.archive_path,
						source_hash: manifest.source_hash,
						session_dir: manifest.session_dir,
						error: message,
					});
				}
				throw error;
			}
		},
	});
}
File diff suppressed because it is too large
@@ -1,260 +0,0 @@
import type { ExtensionAPI, ExtensionContext } from "@mariozechner/pi-coding-agent";
import {
	createAgentSession,
	DefaultResourceLoader,
	getAgentDir,
	SessionManager,
	SettingsManager,
} from "@mariozechner/pi-coding-agent";

interface SessionNameState {
	hasAutoNamed: boolean;
}

const TITLE_MODEL = {
	provider: "openai-codex",
	id: "gpt-5.4-mini",
} as const;

const MAX_TITLE_LENGTH = 50;
const MAX_RETRIES = 2;
const FALLBACK_LENGTH = 50;
const TITLE_ENTRY_TYPE = "vendored-session-title";

const TITLE_SYSTEM_PROMPT = `You are generating a succinct title for a coding session based on the provided conversation.

Requirements:
- Maximum 50 characters
- Sentence case (capitalize only first word and proper nouns)
- Capture the main intent or task
- Reuse the user's exact words and technical terms
- Match the user's language
- No quotes, colons, or markdown formatting
- No generic titles like "Coding session" or "Help with code"
- No explanations or commentary

Output ONLY the title text. Nothing else.`;

function isTurnCompleted(event: unknown): boolean {
	if (!event || typeof event !== "object") return false;
	const message = (event as { message?: unknown }).message;
	if (!message || typeof message !== "object") return false;
	const stopReason = (message as { stopReason?: unknown }).stopReason;
	return typeof stopReason === "string" && stopReason.toLowerCase() === "stop";
}

function buildFallbackTitle(userText: string): string {
	const text = userText.trim();
	if (text.length <= FALLBACK_LENGTH) return text;
	const truncated = text.slice(0, FALLBACK_LENGTH - 3);
	const lastSpace = truncated.lastIndexOf(" ");
	return `${lastSpace > 0 ? truncated.slice(0, lastSpace) : truncated}...`;
}

function postProcessTitle(raw: string): string {
	let title = raw;

	title = title.replace(/<thinking[\s\S]*?<\/thinking>\s*/g, "");
	title = title.replace(/^["'`]+|["'`]+$/g, "");
	title = title.replace(/^#+\s*/, "");
	title = title.replace(/\*{1,2}(.*?)\*{1,2}/g, "$1");
	title = title.replace(/_{1,2}(.*?)_{1,2}/g, "$1");
	title = title.replace(/^(Title|Summary|Session)\s*:\s*/i, "");
	title =
		title
			.split("\n")
			.map((line) => line.trim())
			.find((line) => line.length > 0) ?? title;
	title = title.trim();

	if (title.length > MAX_TITLE_LENGTH) {
		const truncated = title.slice(0, MAX_TITLE_LENGTH - 3);
		const lastSpace = truncated.lastIndexOf(" ");
		title = `${lastSpace > 0 ? truncated.slice(0, lastSpace) : truncated}...`;
	}

	return title;
}

function getLatestUserText(ctx: ExtensionContext): string | null {
	const entries = ctx.sessionManager.getEntries();
	for (let i = entries.length - 1; i >= 0; i -= 1) {
		const entry = entries[i];
		if (!entry || entry.type !== "message") continue;
		if (entry.message.role !== "user") continue;

		const { content } = entry.message as { content: unknown };
		if (typeof content === "string") return content;
		if (!Array.isArray(content)) return null;

		return content
			.filter(
				(part): part is { type: string; text?: string } =>
					typeof part === "object" && part !== null && "type" in part,
			)
			.filter((part) => part.type === "text" && typeof part.text === "string")
			.map((part) => part.text ?? "")
			.join(" ");
	}

	return null;
}

function getLatestAssistantText(ctx: ExtensionContext): string | null {
	const entries = ctx.sessionManager.getEntries();
	for (let i = entries.length - 1; i >= 0; i -= 1) {
		const entry = entries[i];
		if (!entry || entry.type !== "message") continue;
		if (entry.message.role !== "assistant") continue;

		const { content } = entry.message as { content: unknown };
		if (typeof content === "string") return content;
		if (!Array.isArray(content)) return null;

		return content
			.filter(
				(part): part is { type: string; text?: string } =>
					typeof part === "object" && part !== null && "type" in part,
			)
			.filter((part) => part.type === "text" && typeof part.text === "string")
			.map((part) => part.text ?? "")
			.join("\n");
	}

	return null;
}

function resolveModel(ctx: ExtensionContext) {
	const available = ctx.modelRegistry.getAvailable();
	const model = available.find(
		(candidate) => candidate.provider === TITLE_MODEL.provider && candidate.id === TITLE_MODEL.id,
	);
	if (model) return model;

	const existsWithoutKey = ctx.modelRegistry
		.getAll()
		.some((candidate) => candidate.provider === TITLE_MODEL.provider && candidate.id === TITLE_MODEL.id);
	if (existsWithoutKey) {
		throw new Error(
			`Model ${TITLE_MODEL.provider}/${TITLE_MODEL.id} exists but has no configured API key.`,
		);
	}

	throw new Error(`Model ${TITLE_MODEL.provider}/${TITLE_MODEL.id} is not available.`);
}

async function generateTitle(userText: string, assistantText: string, ctx: ExtensionContext): Promise<string> {
	const agentDir = getAgentDir();
	const settingsManager = SettingsManager.create(ctx.cwd, agentDir);
	const resourceLoader = new DefaultResourceLoader({
		cwd: ctx.cwd,
		agentDir,
		settingsManager,
		noExtensions: true,
		noPromptTemplates: true,
		noThemes: true,
		noSkills: true,
		systemPromptOverride: () => TITLE_SYSTEM_PROMPT,
		appendSystemPromptOverride: () => [],
		agentsFilesOverride: () => ({ agentsFiles: [] }),
	});
	await resourceLoader.reload();

	const { session } = await createAgentSession({
		model: resolveModel(ctx),
		thinkingLevel: "off",
		sessionManager: SessionManager.inMemory(),
		modelRegistry: ctx.modelRegistry,
		resourceLoader,
	});

	let accumulated = "";
	const unsubscribe = session.subscribe((event) => {
		if (event.type === "message_update" && event.assistantMessageEvent.type === "text_delta") {
			accumulated += event.assistantMessageEvent.delta;
		}
	});

	const description = assistantText
		? `<user>${userText}</user>\n<assistant>${assistantText}</assistant>`
		: `<user>${userText}</user>`;
	const userMessage = `<conversation>\n${description}\n</conversation>\n\nGenerate a title:`;

	try {
		await session.prompt(userMessage);
	} finally {
		unsubscribe();
		session.dispose();
	}

	return postProcessTitle(accumulated);
}

async function generateAndSetTitle(pi: ExtensionAPI, ctx: ExtensionContext): Promise<void> {
	const userText = getLatestUserText(ctx);
	if (!userText?.trim()) return;

	const assistantText = getLatestAssistantText(ctx) ?? "";
	if (!assistantText.trim()) return;

	let lastError: Error | null = null;
	for (let attempt = 1; attempt <= MAX_RETRIES; attempt += 1) {
		try {
			const title = await generateTitle(userText, assistantText, ctx);
			if (!title) continue;

			pi.setSessionName(title);
			pi.appendEntry(TITLE_ENTRY_TYPE, {
				title,
				rawUserText: userText,
				rawAssistantText: assistantText,
				attempt,
				model: `${TITLE_MODEL.provider}/${TITLE_MODEL.id}`,
			});
			ctx.ui.notify(`Session: ${title}`, "info");
			return;
		} catch (error) {
			lastError = error instanceof Error ? error : new Error(String(error));
		}
	}

	const fallback = buildFallbackTitle(userText);
	pi.setSessionName(fallback);
	pi.appendEntry(TITLE_ENTRY_TYPE, {
		title: fallback,
		fallback: true,
		error: lastError?.message ?? "Unknown error",
		rawUserText: userText,
		rawAssistantText: assistantText,
		model: `${TITLE_MODEL.provider}/${TITLE_MODEL.id}`,
	});
	ctx.ui.notify(`Title generation failed, using fallback: ${fallback}`, "warning");
}

export default function setupSessionNameHook(pi: ExtensionAPI) {
|
||||
const state: SessionNameState = {
|
||||
hasAutoNamed: false,
|
||||
};
|
||||
|
||||
pi.on("session_start", async () => {
|
||||
state.hasAutoNamed = false;
|
||||
});
|
||||
|
||||
pi.on("session_switch", async () => {
|
||||
state.hasAutoNamed = false;
|
||||
});
|
||||
|
||||
pi.on("turn_end", async (event, ctx) => {
|
||||
if (state.hasAutoNamed) return;
|
||||
|
||||
if (pi.getSessionName()) {
|
||||
state.hasAutoNamed = true;
|
||||
return;
|
||||
}
|
||||
|
||||
if (!isTurnCompleted(event)) return;
|
||||
|
||||
await generateAndSetTitle(pi, ctx);
|
||||
state.hasAutoNamed = true;
|
||||
});
|
||||
}
|
||||
@@ -1,21 +0,0 @@
{
  "mcpServers": {
    "opensrc": {
      "command": "npx",
      "args": ["-y", "opensrc-mcp"],
      "lifecycle": "eager"
    },
    "context7": {
      "url": "https://mcp.context7.com/mcp",
      "lifecycle": "eager"
    },
    "grep_app": {
      "url": "https://mcp.grep.app",
      "lifecycle": "eager"
    },
    "sentry": {
      "url": "https://mcp.sentry.dev/mcp",
      "auth": "oauth"
    }
  }
}
@@ -1,143 +0,0 @@
---
name: jujutsu
description: Manages version control with Jujutsu (jj), including rebasing, conflict resolution, and Git interop. Use when tracking changes, navigating history, squashing/splitting commits, or pushing to Git remotes.
---

# Jujutsu

Git-compatible VCS focused on concurrent development and ease of use.

> ⚠️ **Not Git!** Jujutsu syntax differs from Git:
>
> - Parent: `@-` not `@~1` or `@^`
> - Grandparent: `@--` not `@~2`
> - Child: `@+` not `@~-1`
> - Use `jj log` not `jj changes`

## Key Commands

| Command                    | Description                                  |
| -------------------------- | -------------------------------------------- |
| `jj st`                    | Show working copy status                     |
| `jj log`                   | Show change log                              |
| `jj diff`                  | Show changes in working copy                 |
| `jj new`                   | Create new change                            |
| `jj desc`                  | Edit change description                      |
| `jj squash`                | Move changes to parent                       |
| `jj split`                 | Split current change                         |
| `jj rebase -s src -d dest` | Rebase changes                               |
| `jj absorb`                | Move changes into stack of mutable revisions |
| `jj bisect`                | Find bad revision by bisection               |
| `jj fix`                   | Update files with formatting fixes           |
| `jj sign`                  | Cryptographically sign a revision            |
| `jj metaedit`              | Modify metadata without changing content     |

## Basic Workflow

```bash
jj new                         # Create new change
jj desc -m "feat: add feature" # Set description
jj log                         # View history
jj edit change-id              # Switch to change
jj new --before @              # Time travel (create before current)
jj edit @-                     # Go to parent
```

## Time Travel

```bash
jj edit change-id        # Switch to specific change
jj next --edit           # Next child change
jj edit @-               # Parent change
jj new --before @ -m msg # Insert before current
```

## Merging & Rebasing

```bash
jj new x yz -m msg       # Merge changes
jj rebase -s src -d dest # Rebase source onto dest
jj abandon               # Delete current change
```

## Conflicts

```bash
jj resolve # Interactive conflict resolution
# Edit files, then continue
```

## Revset Syntax

**Parent/child operators:**

| Syntax | Meaning          | Example              |
| ------ | ---------------- | -------------------- |
| `@-`   | Parent of @      | `jj diff -r @-`      |
| `@--`  | Grandparent      | `jj log -r @--`      |
| `x-`   | Parent of x      | `jj diff -r abc123-` |
| `@+`   | Child of @       | `jj log -r @+`       |
| `x::y` | x to y inclusive | `jj log -r main::@`  |
| `x..y` | x to y exclusive | `jj log -r main..@`  |
| `x\|y` | Union (or)       | `jj log -r 'a \| b'` |

**⚠️ Common mistakes:**

- ❌ `@~1` → ✅ `@-` (parent)
- ❌ `@^` → ✅ `@-` (parent)
- ❌ `@~-1` → ✅ `@+` (child)
- ❌ `jj changes` → ✅ `jj log` or `jj diff`
- ❌ `a,b,c` → ✅ `a | b | c` (union uses pipe, not comma)
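To make the corrections above concrete, here are the same operations written as valid invocations (the change IDs `abc` and `def` are placeholders):

```bash
jj diff -r @-         # diff against the parent (not @~1 or @^)
jj log -r @--         # show the grandparent (not @~2)
jj log -r @+          # show the child (not @~-1)
jj log -r 'abc | def' # union of two changes (pipe, not comma)
```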

**Functions:**

```bash
jj log -r 'heads(all())'                               # All heads
jj log -r 'remote_bookmarks()..'                       # Not on remote
jj log -r 'author(name)'                               # By author
jj log -r 'description(regex)'                         # By description
jj log -r 'mine()'                                     # My commits
jj log -r 'committer_date(after:"7 days ago")'         # Recent commits
jj log -r 'mine() & committer_date(after:"yesterday")' # My recent
```

## Templates

```bash
jj log -T 'commit_id ++ "\n" ++ description'
```

## Git Interop

```bash
jj bookmark create main -r @ # Create bookmark
jj git push --bookmark main  # Push bookmark
jj git fetch                 # Fetch from remote
jj bookmark track main@origin # Track remote
```

## Advanced Commands

```bash
jj absorb                         # Auto-move changes to relevant commits in stack
jj bisect start                   # Start bisection
jj bisect good                    # Mark current as good
jj bisect bad                     # Mark current as bad
jj fix                            # Run configured formatters on files
jj sign -r @                      # Sign current revision
jj metaedit -r @ -m "new message" # Edit metadata only
```

## Tips

- No staging: changes are immediate
- Use conventional commits: `type(scope): desc`
- `jj undo` to revert operations
- `jj op log` to see operation history
- Bookmarks are like branches
- `jj absorb` is powerful for fixing up commits in a stack

## Related Skills

- **gh**: GitHub CLI for PRs and issues
- **review**: Code review before committing
@@ -1,36 +0,0 @@
---
name: notability-normalize
description: Normalizes an exact Notability transcription into clean, searchable Markdown while preserving all original content and uncertainty markers. Use after a faithful transcription pass.
---

# Notability Normalize

You are doing a **Markdown normalization** pass on a previously transcribed Notability note.

## Rules

- Do **not** summarize.
- Do **not** remove uncertainty markers such as `[unclear: ...]`.
- Preserve all substantive content from the transcription.
- Clean up only formatting and Markdown structure.
- Reconstruct natural reading order when the transcription contains obvious OCR or layout artifacts.
- Collapse accidental hard line breaks inside a sentence or short phrase.
- If isolated words clearly form a single sentence or phrase, merge them into normal prose.
- Prefer readable Markdown headings, lists, and tables.
- Keep content in the same overall order as the transcription.
- Do not invent content.
- Do not output code fences.
- Output Markdown only.

## Output

- Produce a clean Markdown document.
- Include a top-level `#` heading if the note clearly has a title.
- Use standard Markdown lists and checkboxes.
- Represent tables as Markdown tables when practical.
- Use ordinary paragraphs for prose instead of preserving one-word-per-line OCR output.
- Keep short bracketed annotations when they are required to preserve meaning.

## Important

The source PDF remains the ground truth. When in doubt, preserve ambiguity instead of cleaning it away.
@@ -1,38 +0,0 @@
---
name: notability-transcribe
description: Faithfully transcribes handwritten or mixed handwritten/typed Notability note pages into Markdown without summarizing. Use when converting note page images or PDFs into an exact textual transcription.
---

# Notability Transcribe

You are doing a **faithful transcription** pass for handwritten Notability notes.

## Rules

- Preserve the original order of content.
- Reconstruct the intended reading order from the page layout.
- Read the page in the order a human would: top-to-bottom and left-to-right, while respecting obvious grouping.
- Do **not** summarize, explain, clean up, or reorganize beyond what is necessary to transcribe faithfully.
- Preserve headings, bullets, numbered items, checkboxes, tables, separators, callouts, and obvious layout structure.
- Do **not** preserve accidental OCR-style hard line breaks when the note is clearly continuous prose or a single phrase.
- If words are staggered on the page but clearly belong to the same sentence, combine them into normal lines.
- If text is uncertain, keep the uncertainty inline as `[unclear: ...]`.
- If a word is partially legible, include the best reading and uncertainty marker.
- If there is a drawing or diagram that cannot be represented exactly, describe it minimally in brackets, for example `[diagram: arrow from A to B]`.
- Preserve language exactly as written.
- Do not invent missing words.
- Do not output code fences.
- Output Markdown only.

## Output shape

- Use headings when headings are clearly present.
- Use `- [ ]` or `- [x]` for checkboxes when visible.
- Use bullet lists for bullet lists.
- Use normal paragraphs or single-line phrases for continuous prose instead of one word per line.
- Keep side notes in the position that best preserves reading order.
- Insert blank lines between major sections.

## Safety

If a page is partly unreadable, still transcribe everything you can and mark uncertain content with `[unclear: ...]`.
@@ -1,141 +0,0 @@
#!/usr/bin/env nu

use ./lib.nu *


def active-job-exists [note_id: string, source_hash: string] {
    let rows = (sql-json $"
        select job_id
        from jobs
        where note_id = (sql-quote $note_id)
          and source_hash = (sql-quote $source_hash)
          and status != 'done'
          and status != 'failed'
        limit 1;
    ")
    not ($rows | is-empty)
}


export def archive-and-version [note_id: string, source_path: path, source_relpath: string, source_size: any, source_mtime: string, source_hash: string] {
    let source_size_int = ($source_size | into int)
    let archive_path = (archive-path-for $note_id $source_hash $source_relpath)
    cp $source_path $archive_path

    let version_id = (new-version-id)
    let seen_at = (now-iso)
    let version_id_q = (sql-quote $version_id)
    let note_id_q = (sql-quote $note_id)
    let seen_at_q = (sql-quote $seen_at)
    let archive_path_q = (sql-quote $archive_path)
    let source_hash_q = (sql-quote $source_hash)
    let source_mtime_q = (sql-quote $source_mtime)
    let source_relpath_q = (sql-quote $source_relpath)
    let sql = ([
        "insert into versions (version_id, note_id, seen_at, archive_path, source_hash, source_size, source_mtime, source_relpath, ingest_result, session_path) values ("
        $version_id_q
        ", "
        $note_id_q
        ", "
        $seen_at_q
        ", "
        $archive_path_q
        ", "
        $source_hash_q
        ", "
        ($source_size_int | into string)
        ", "
        $source_mtime_q
        ", "
        $source_relpath_q
        ", 'pending', null);"
    ] | str join '')
    sql-run $sql | ignore

    {
        version_id: $version_id
        seen_at: $seen_at
        archive_path: $archive_path
    }
}


export def enqueue-job [
    note: record,
    operation: string,
    input_path: string,
    archive_path: string,
    source_hash: string,
    title: string,
    force_overwrite_generated: bool = false,
    source_transport: string = 'webdav',
] {
    if (active-job-exists $note.note_id $source_hash) {
        return null
    }

    let job_id = (new-job-id)
    let requested_at = (now-iso)
    let manifest_path = (manifest-path-for $job_id 'queued')
    let result_path = (result-path-for $job_id)
    let transcript_path = (transcript-path-for $note.note_id $job_id)
    let session_dir = ([(sessions-root) $note.note_id $job_id] | path join)
    mkdir $session_dir

    let manifest = {
        version: 1
        job_id: $job_id
        note_id: $note.note_id
        operation: $operation
        requested_at: $requested_at
        title: $title
        source_relpath: $note.source_relpath
        source_path: $note.source_path
        input_path: $input_path
        archive_path: $archive_path
        output_path: $note.output_path
        transcript_path: $transcript_path
        result_path: $result_path
        session_dir: $session_dir
        source_hash: $source_hash
        last_generated_output_hash: ($note.last_generated_output_hash? | default null)
        force_overwrite_generated: $force_overwrite_generated
        source_transport: $source_transport
    }

    ($manifest | to json --indent 2) | save -f $manifest_path
    let job_id_q = (sql-quote $job_id)
    let note_id_q = (sql-quote $note.note_id)
    let operation_q = (sql-quote $operation)
    let requested_at_q = (sql-quote $requested_at)
    let source_hash_q = (sql-quote $source_hash)
    let manifest_path_q = (sql-quote $manifest_path)
    let result_path_q = (sql-quote $result_path)
    let sql = ([
        "insert into jobs (job_id, note_id, operation, status, requested_at, source_hash, job_manifest_path, result_path) values ("
        $job_id_q
        ", "
        $note_id_q
        ", "
        $operation_q
        ", 'queued', "
        $requested_at_q
        ", "
        $source_hash_q
        ", "
        $manifest_path_q
        ", "
        $result_path_q
        ");"
    ] | str join '')
    sql-run $sql | ignore

    {
        job_id: $job_id
        requested_at: $requested_at
        manifest_path: $manifest_path
        result_path: $result_path
        transcript_path: $transcript_path
        session_dir: $session_dir
    }
}
@@ -1,433 +0,0 @@
export def home-dir [] {
    $nu.home-dir
}

export def data-root [] {
    if ('NOTABILITY_DATA_ROOT' in ($env | columns)) {
        $env.NOTABILITY_DATA_ROOT
    } else {
        [$nu.home-dir ".local" "share" "notability-ingest"] | path join
    }
}

export def state-root [] {
    if ('NOTABILITY_STATE_ROOT' in ($env | columns)) {
        $env.NOTABILITY_STATE_ROOT
    } else {
        [$nu.home-dir ".local" "state" "notability-ingest"] | path join
    }
}

export def notes-root [] {
    if ('NOTABILITY_NOTES_DIR' in ($env | columns)) {
        $env.NOTABILITY_NOTES_DIR
    } else {
        [$nu.home-dir "Notes"] | path join
    }
}

export def webdav-root [] {
    if ('NOTABILITY_WEBDAV_ROOT' in ($env | columns)) {
        $env.NOTABILITY_WEBDAV_ROOT
    } else {
        [(data-root) "webdav-root"] | path join
    }
}

export def archive-root [] {
    if ('NOTABILITY_ARCHIVE_ROOT' in ($env | columns)) {
        $env.NOTABILITY_ARCHIVE_ROOT
    } else {
        [(data-root) "archive"] | path join
    }
}

export def render-root [] {
    if ('NOTABILITY_RENDER_ROOT' in ($env | columns)) {
        $env.NOTABILITY_RENDER_ROOT
    } else {
        [(data-root) "rendered-pages"] | path join
    }
}

export def transcript-root [] {
    if ('NOTABILITY_TRANSCRIPT_ROOT' in ($env | columns)) {
        $env.NOTABILITY_TRANSCRIPT_ROOT
    } else {
        [(state-root) "transcripts"] | path join
    }
}

export def jobs-root [] {
    if ('NOTABILITY_JOBS_ROOT' in ($env | columns)) {
        $env.NOTABILITY_JOBS_ROOT
    } else {
        [(state-root) "jobs"] | path join
    }
}

export def queued-root [] {
    [(jobs-root) "queued"] | path join
}

export def running-root [] {
    [(jobs-root) "running"] | path join
}

export def failed-root [] {
    [(jobs-root) "failed"] | path join
}

export def done-root [] {
    [(jobs-root) "done"] | path join
}

export def results-root [] {
    [(jobs-root) "results"] | path join
}

export def sessions-root [] {
    if ('NOTABILITY_SESSIONS_ROOT' in ($env | columns)) {
        $env.NOTABILITY_SESSIONS_ROOT
    } else {
        [(state-root) "sessions"] | path join
    }
}

export def qmd-dirty-file [] {
    [(state-root) "qmd-dirty"] | path join
}

export def db-path [] {
    if ('NOTABILITY_DB_PATH' in ($env | columns)) {
        $env.NOTABILITY_DB_PATH
    } else {
        [(state-root) "db.sqlite"] | path join
    }
}

export def now-iso [] {
    date now | format date "%Y-%m-%dT%H:%M:%SZ"
}

export def sql-quote [value?: any] {
    if $value == null {
        "NULL"
    } else {
        let text = ($value | into string | str replace -a "'" "''")
        ["'" $text "'"] | str join ''
    }
}
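The escaping rule in `sql-quote` — double every embedded single quote, then wrap the value in single quotes — can be sketched outside Nushell as a small POSIX shell helper (the `sql_quote` function name here is illustrative, not part of the library):

```shell
# sql_quote: emit its argument as a single-quoted SQL string literal,
# doubling any embedded single quotes (O'Brien -> 'O''Brien').
sql_quote() {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/''/g")"
}

quoted=$(sql_quote "O'Brien")
echo "$quoted"   # → 'O''Brien'
```

Doubling the quote is the standard SQL escape, so the result can be pasted directly into a statement; note that, like the original, this only handles quoting and does not defend against NUL bytes or other driver-level issues.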

export def sql-run [sql: string] {
    let database = (db-path)
    let result = (^sqlite3 -cmd '.timeout 5000' $database $sql | complete)
    if $result.exit_code != 0 {
        error make {
            msg: $"sqlite3 failed: ($result.stderr | str trim)"
        }
    }
    $result.stdout
}

export def sql-json [sql: string] {
    let database = (db-path)
    let result = (^sqlite3 -cmd '.timeout 5000' -json $database $sql | complete)
    if $result.exit_code != 0 {
        error make {
            msg: $"sqlite3 failed: ($result.stderr | str trim)"
        }
    }
    let text = ($result.stdout | str trim)
    if $text == "" {
        []
    } else {
        $text | from json
    }
}

export def ensure-layout [] {
    mkdir (data-root)
    mkdir (state-root)
    mkdir (notes-root)
    mkdir (webdav-root)
    mkdir (archive-root)
    mkdir (render-root)
    mkdir (transcript-root)
    mkdir (jobs-root)
    mkdir (queued-root)
    mkdir (running-root)
    mkdir (failed-root)
    mkdir (done-root)
    mkdir (results-root)
    mkdir (sessions-root)

    sql-run '
        create table if not exists notes (
            note_id text primary key,
            source_relpath text not null unique,
            title text not null,
            output_path text not null,
            status text not null,
            first_seen_at text not null,
            last_seen_at text not null,
            last_processed_at text,
            missing_since text,
            deleted_at text,
            current_source_hash text,
            current_source_size integer,
            current_source_mtime text,
            current_archive_path text,
            latest_version_id text,
            last_generated_source_hash text,
            last_generated_output_hash text,
            conflict_path text,
            last_error text
        );

        create table if not exists versions (
            version_id text primary key,
            note_id text not null,
            seen_at text not null,
            archive_path text not null unique,
            source_hash text not null,
            source_size integer not null,
            source_mtime text not null,
            source_relpath text not null,
            ingest_result text,
            session_path text,
            foreign key (note_id) references notes (note_id)
        );

        create table if not exists jobs (
            job_id text primary key,
            note_id text not null,
            operation text not null,
            status text not null,
            requested_at text not null,
            started_at text,
            finished_at text,
            source_hash text,
            job_manifest_path text not null,
            result_path text not null,
            error_summary text,
            foreign key (note_id) references notes (note_id)
        );

        create table if not exists events (
            id integer primary key autoincrement,
            note_id text not null,
            ts text not null,
            kind text not null,
            details text,
            foreign key (note_id) references notes (note_id)
        );

        create index if not exists idx_jobs_status_requested_at on jobs(status, requested_at);
        create index if not exists idx_versions_note_id_seen_at on versions(note_id, seen_at);
        create index if not exists idx_events_note_id_ts on events(note_id, ts);
    '
    | ignore
}
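As a rough illustration of how a worker would pick up work from the `jobs` table defined above, here is a self-contained sketch against a throwaway database (requires the `sqlite3` CLI; the table is trimmed to the columns the query touches, and the job IDs are placeholders):

```shell
db=$(mktemp)
sqlite3 "$db" "
create table jobs (job_id text primary key, status text, requested_at text);
insert into jobs values ('job_a', 'queued', '2024-01-01T00:00:00Z');
insert into jobs values ('job_b', 'queued', '2024-01-02T00:00:00Z');
insert into jobs values ('job_c', 'done',   '2024-01-03T00:00:00Z');
"
# Oldest queued job first; this access pattern is what
# idx_jobs_status_requested_at exists to serve.
next_job=$(sqlite3 "$db" "select job_id from jobs where status = 'queued' order by requested_at limit 1;")
echo "$next_job"   # → job_a
rm -f "$db"
```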

export def log-event [note_id: string, kind: string, details?: any] {
    let payload = if $details == null { null } else { $details | to json }
    let note_id_q = (sql-quote $note_id)
    let now_q = (sql-quote (now-iso))
    let kind_q = (sql-quote $kind)
    let payload_q = (sql-quote $payload)
    let sql = ([
        "insert into events (note_id, ts, kind, details) values ("
        $note_id_q
        ", "
        $now_q
        ", "
        $kind_q
        ", "
        $payload_q
        ");"
    ] | str join '')
    sql-run $sql | ignore
}

export def slugify [value: string] {
    let slug = (
        $value
        | str downcase
        | str replace -r '[^a-z0-9]+' '-'
        | str replace -r '^-+' ''
        | str replace -r '-+$' ''
    )
    if $slug == '' {
        'note'
    } else {
        $slug
    }
}

export def sha256 [file: path] {
    (^sha256sum $file | lines | first | split row ' ' | first)
}

export def parse-output-frontmatter [file: path] {
    if not ($file | path exists) {
        {}
    } else {
        let content = (open --raw $file)
        if not ($content | str starts-with "---\n") {
            {}
        } else {
            let rest = ($content | str substring 4..)
            let end = ($rest | str index-of "\n---\n")
            if $end == null {
                {}
            } else {
                let block = ($rest | str substring 0..($end - 1))
                $block
                | lines
                | where ($it | str contains ':')
                | reduce --fold {} {|line, acc|
                    let idx = ($line | str index-of ':')
                    if $idx == null {
                        $acc
                    } else {
                        let key = ($line | str substring 0..($idx - 1) | str trim)
                        let value = ($line | str substring ($idx + 1).. | str trim)
                        $acc | upsert $key $value
                    }
                }
            }
        }
    }
}

export def zk-generated-note-path [title: string] {
    let root = (notes-root)
    let effective_title = if ($title | str trim) == '' {
        'Imported note'
    } else {
        $title
    }
    let result = (
        ^zk --notebook-dir $root --working-dir $root new $root --no-input --title $effective_title --print-path --dry-run
        | complete
    )

    if $result.exit_code != 0 {
        error make {
            msg: $"zk failed to generate a note path: ($result.stderr | str trim)"
        }
    }

    let path_text = ($result.stderr | str trim)
    if $path_text == '' {
        error make {
            msg: 'zk did not return a generated note path'
        }
    }

    $path_text
    | lines
    | last
    | str trim
}

export def new-note-id [] {
    let suffix = (random uuid | str replace -a '-' '')
    $"ntl_($suffix)"
}

export def new-job-id [] {
    let suffix = (random uuid | str replace -a '-' '')
    $"job_($suffix)"
}

export def new-version-id [] {
    let suffix = (random uuid | str replace -a '-' '')
    $"ver_($suffix)"
}

export def archive-path-for [note_id: string, source_hash: string, source_relpath: string] {
    let stamp = (date now | format date "%Y-%m-%dT%H-%M-%SZ")
    let short = ($source_hash | str substring 0..11)
    let directory = [(archive-root) $note_id] | path join
    let parsed = ($source_relpath | path parse)
    let extension = if (($parsed.extension? | default '') | str trim) == '' {
        'bin'
    } else {
        ($parsed.extension | str downcase)
    }
    mkdir $directory
    [$directory $"($stamp)-($short).($extension)"] | path join
}

export def transcript-path-for [note_id: string, job_id: string] {
    let directory = [(transcript-root) $note_id] | path join
    mkdir $directory
    [$directory $"($job_id).md"] | path join
}

export def result-path-for [job_id: string] {
    [(results-root) $"($job_id).json"] | path join
}

export def manifest-path-for [job_id: string, status: string] {
    let root = match $status {
        'queued' => (queued-root)
        'running' => (running-root)
        'failed' => (failed-root)
        'done' => (done-root)
        _ => (queued-root)
    }
    [$root $"($job_id).json"] | path join
}

export def note-output-path [title: string] {
    zk-generated-note-path $title
}

export def is-supported-source-path [path: string] {
    let lower = ($path | str downcase)
    (($lower | str ends-with '.pdf') or ($lower | str ends-with '.png'))
}

export def is-ignored-path [relpath: string] {
    let lower = ($relpath | str downcase)
    let hidden = (($lower | str contains '/.') or ($lower | str starts-with '.'))
    let temp = (($lower | str contains '/~') or ($lower | str ends-with '.tmp') or ($lower | str ends-with '.part'))
    let conflict = ($lower | str contains '.sync-conflict')
    ($hidden or $temp or $conflict)
}

export def scan-source-files [] {
    let root = (webdav-root)
    if not ($root | path exists) {
        []
    } else {
        let files = ([
            (glob $"($root)/**/*.pdf")
            (glob $"($root)/**/*.PDF")
            (glob $"($root)/**/*.png")
            (glob $"($root)/**/*.PNG")
        ] | flatten)
        $files
        | sort
        | uniq
        | each {|file|
            let relpath = ($file | path relative-to $root)
            if ((is-ignored-path $relpath) or not (is-supported-source-path $file)) {
                null
            } else {
                let stat = (ls -l $file | first)
                {
                    source_path: $file
                    source_relpath: $relpath
                    source_size: $stat.size
                    source_mtime: ($stat.modified | format date "%Y-%m-%dT%H:%M:%SZ")
                    title: (($relpath | path parse).stem)
                }
            }
        }
        | where $it != null
    }
}
@@ -1,387 +0,0 @@
|
||||
#!/usr/bin/env nu
|
||||
|
||||
use ./lib.nu *
|
||||
use ./jobs.nu [archive-and-version, enqueue-job]
|
||||
|
||||
const settle_window = 45sec
|
||||
const delete_grace = 15min
|
||||
|
||||
|
||||
def settle-remaining [source_mtime: string] {
|
||||
let modified = ($source_mtime | into datetime)
|
||||
let age = ((date now) - $modified)
|
||||
if $age >= $settle_window {
|
||||
0sec
|
||||
} else {
|
||||
$settle_window - $age
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def is-settled [source_mtime: string] {
|
||||
let modified = ($source_mtime | into datetime)
|
||||
((date now) - $modified) >= $settle_window
|
||||
}
|
||||
|
||||
|
||||
def log-job-enqueued [note_id: string, job_id: string, operation: string, source_hash: string, archive_path: string] {
|
||||
log-event $note_id 'job-enqueued' {
|
||||
job_id: $job_id
|
||||
operation: $operation
|
||||
source_hash: $source_hash
|
||||
archive_path: $archive_path
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def find-rename-candidate [source_hash: string] {
|
||||
sql-json $"
|
||||
select *
|
||||
from notes
|
||||
where current_source_hash = (sql-quote $source_hash)
|
||||
and status != 'active'
|
||||
and status != 'failed'
|
||||
and status != 'conflict'
|
||||
order by last_seen_at desc
|
||||
limit 1;
|
||||
"
|
||||
}
|
||||
|
||||
|
||||
def touch-note [note_id: string, source_size: any, source_mtime: string, status: string = 'active'] {
|
||||
let source_size_int = ($source_size | into int)
|
||||
let now_q = (sql-quote (now-iso))
|
||||
let source_mtime_q = (sql-quote $source_mtime)
|
||||
let status_q = (sql-quote $status)
|
||||
let note_id_q = (sql-quote $note_id)
|
||||
sql-run $"
|
||||
update notes
|
||||
set last_seen_at = ($now_q),
|
||||
current_source_size = ($source_size_int),
|
||||
current_source_mtime = ($source_mtime_q),
|
||||
status = ($status_q)
|
||||
where note_id = ($note_id_q);
|
||||
"
|
||||
| ignore
|
||||
}
|
||||
|
||||
|
||||
def process-existing [note: record, source: record] {
    let title = $source.title
    let note_id = ($note | get note_id)
    let note_status = ($note | get status)
    let source_size_int = ($source.source_size | into int)
    if not (is-settled $source.source_mtime) {
        touch-note $note_id $source_size_int $source.source_mtime $note_status
        return
    }

    let previous_size = ($note.current_source_size? | default (-1))
    let previous_mtime = ($note.current_source_mtime? | default '')
    let size_changed = ($previous_size != $source_size_int)
    let mtime_changed = ($previous_mtime != $source.source_mtime)
    let needs_ingest = (($note.last_generated_source_hash? | default '') != ($note.current_source_hash? | default ''))
    let hash_needed = ($note.current_source_hash? | default null) == null or $size_changed or $mtime_changed or ($note_status != 'active') or $needs_ingest

    if not $hash_needed {
        let now_q = (sql-quote (now-iso))
        let title_q = (sql-quote $title)
        let note_id_q = (sql-quote $note_id)
        sql-run $"
            update notes
            set last_seen_at = ($now_q),
                status = 'active',
                title = ($title_q),
                missing_since = null,
                deleted_at = null
            where note_id = ($note_id_q);
        "
        | ignore
        return
    }

    let source_hash = (sha256 $source.source_path)
    if ($source_hash == ($note.current_source_hash? | default '')) {
        let now_q = (sql-quote (now-iso))
        let title_q = (sql-quote $title)
        let source_mtime_q = (sql-quote $source.source_mtime)
        let note_id_q = (sql-quote $note_id)
        let next_status = if $note_status == 'failed' { 'failed' } else { 'active' }
        sql-run $"
            update notes
            set last_seen_at = ($now_q),
                title = ($title_q),
                status = (sql-quote $next_status),
                missing_since = null,
                deleted_at = null,
                current_source_size = ($source_size_int),
                current_source_mtime = ($source_mtime_q)
            where note_id = ($note_id_q);
        "
        | ignore

        let should_enqueue = ($note_status == 'failed' or (($note.last_generated_source_hash? | default '') != $source_hash))
        if not $should_enqueue {
            return
        }

        let archive_path = if (($note.current_archive_path? | default '') | str trim) == '' {
            let version = (archive-and-version $note_id $source.source_path $source.source_relpath $source_size_int $source.source_mtime $source_hash)
            let archive_path_q = (sql-quote $version.archive_path)
            let version_id_q = (sql-quote $version.version_id)
            sql-run $"
                update notes
                set current_archive_path = ($archive_path_q),
                    latest_version_id = ($version_id_q)
                where note_id = ($note_id_q);
            "
            | ignore
            $version.archive_path
        } else {
            $note.current_archive_path
        }

        let runtime_note = ($note | upsert source_path $source.source_path | upsert source_relpath $source.source_relpath | upsert output_path $note.output_path | upsert last_generated_output_hash ($note.last_generated_output_hash? | default null))
        let retry_job = (enqueue-job $runtime_note 'upsert' $archive_path $archive_path $source_hash $title)
        if $retry_job != null {
            log-job-enqueued $note_id $retry_job.job_id 'upsert' $source_hash $archive_path
            let reason = if $note_status == 'failed' {
                'retry-failed-note'
            } else {
                'missing-generated-output'
            }
            log-event $note_id 'job-reenqueued' {
                job_id: $retry_job.job_id
                reason: $reason
                source_hash: $source_hash
                archive_path: $archive_path
            }
        }
        return
    }

    let version = (archive-and-version $note_id $source.source_path $source.source_relpath $source_size_int $source.source_mtime $source_hash)
    let now_q = (sql-quote (now-iso))
    let title_q = (sql-quote $title)
    let source_hash_q = (sql-quote $source_hash)
    let source_mtime_q = (sql-quote $source.source_mtime)
    let archive_path_q = (sql-quote $version.archive_path)
    let version_id_q = (sql-quote $version.version_id)
    let note_id_q = (sql-quote $note_id)
    sql-run $"
        update notes
        set last_seen_at = ($now_q),
            title = ($title_q),
            status = 'active',
            missing_since = null,
            deleted_at = null,
            current_source_hash = ($source_hash_q),
            current_source_size = ($source_size_int),
            current_source_mtime = ($source_mtime_q),
            current_archive_path = ($archive_path_q),
            latest_version_id = ($version_id_q),
            last_error = null
        where note_id = ($note_id_q);
    "
    | ignore

    let runtime_note = ($note | upsert source_path $source.source_path | upsert source_relpath $source.source_relpath | upsert output_path $note.output_path | upsert last_generated_output_hash ($note.last_generated_output_hash? | default null))
    let job = (enqueue-job $runtime_note 'upsert' $version.archive_path $version.archive_path $source_hash $title)
    if $job != null {
        log-job-enqueued $note_id $job.job_id 'upsert' $source_hash $version.archive_path
    }

    log-event $note_id 'source-updated' {
        source_relpath: $source.source_relpath
        source_hash: $source_hash
        archive_path: $version.archive_path
    }
}


def process-new [source: record] {
    if not (is-settled $source.source_mtime) {
        return
    }

    let source_hash = (sha256 $source.source_path)
    let source_size_int = ($source.source_size | into int)
    let rename_candidates = (find-rename-candidate $source_hash)
    if not ($rename_candidates | is-empty) {
        let rename_candidate = ($rename_candidates | first)
        let source_relpath_q = (sql-quote $source.source_relpath)
        let title_q = (sql-quote $source.title)
        let now_q = (sql-quote (now-iso))
        let source_mtime_q = (sql-quote $source.source_mtime)
        let note_id_q = (sql-quote $rename_candidate.note_id)
        sql-run $"
            update notes
            set source_relpath = ($source_relpath_q),
                title = ($title_q),
                last_seen_at = ($now_q),
                status = 'active',
                missing_since = null,
                deleted_at = null,
                current_source_size = ($source_size_int),
                current_source_mtime = ($source_mtime_q)
            where note_id = ($note_id_q);
        "
        | ignore
        log-event $rename_candidate.note_id 'source-renamed' {
            from: $rename_candidate.source_relpath
            to: $source.source_relpath
        }
        return
    }

    let note_id = (new-note-id)
    let first_seen_at = (now-iso)
    let output_path = (note-output-path $source.title)
    let version = (archive-and-version $note_id $source.source_path $source.source_relpath $source_size_int $source.source_mtime $source_hash)
    let note_id_q = (sql-quote $note_id)
    let source_relpath_q = (sql-quote $source.source_relpath)
    let title_q = (sql-quote $source.title)
    let output_path_q = (sql-quote $output_path)
    let first_seen_q = (sql-quote $first_seen_at)
    let source_hash_q = (sql-quote $source_hash)
    let source_mtime_q = (sql-quote $source.source_mtime)
    let archive_path_q = (sql-quote $version.archive_path)
    let version_id_q = (sql-quote $version.version_id)
    let sql = ([
        "insert into notes (note_id, source_relpath, title, output_path, status, first_seen_at, last_seen_at, current_source_hash, current_source_size, current_source_mtime, current_archive_path, latest_version_id) values ("
        $note_id_q
        ", "
        $source_relpath_q
        ", "
        $title_q
        ", "
        $output_path_q
        ", 'active', "
        $first_seen_q
        ", "
        $first_seen_q
        ", "
        $source_hash_q
        ", "
        ($source_size_int | into string)
        ", "
        $source_mtime_q
        ", "
        $archive_path_q
        ", "
        $version_id_q
        ");"
    ] | str join '')
    sql-run $sql | ignore

    let note = {
        note_id: $note_id
        source_relpath: $source.source_relpath
        source_path: $source.source_path
        output_path: $output_path
        last_generated_output_hash: null
    }
    let job = (enqueue-job $note 'upsert' $version.archive_path $version.archive_path $source_hash $source.title)
    if $job != null {
        log-job-enqueued $note_id $job.job_id 'upsert' $source_hash $version.archive_path
    }

    log-event $note_id 'source-discovered' {
        source_relpath: $source.source_relpath
        source_hash: $source_hash
        archive_path: $version.archive_path
        output_path: $output_path
    }
}


def mark-missing [seen_relpaths: list<string>] {
    let notes = (sql-json 'select note_id, source_relpath, status, missing_since from notes;')
    for note in $notes {
        if ($seen_relpaths | any {|rel| $rel == $note.source_relpath }) {
            continue
        }

        if $note.status == 'active' {
            let missing_since = (now-iso)
            let missing_since_q = (sql-quote $missing_since)
            let note_id_q = (sql-quote $note.note_id)
            sql-run $"
                update notes
                set status = 'source_missing',
                    missing_since = ($missing_since_q)
                where note_id = ($note_id_q);
            "
            | ignore
            log-event $note.note_id 'source-missing' {
                source_relpath: $note.source_relpath
            }
            continue
        }

        if $note.status == 'source_missing' and ($note.missing_since? | default null) != null {
            let missing_since = ($note.missing_since | into datetime)
            if ((date now) - $missing_since) >= $delete_grace {
                let deleted_at = (now-iso)
                let deleted_at_q = (sql-quote $deleted_at)
                let note_id_q = (sql-quote $note.note_id)
                sql-run $"
                    update notes
                    set status = 'source_deleted',
                        deleted_at = ($deleted_at_q)
                    where note_id = ($note_id_q);
                "
                | ignore
                log-event $note.note_id 'source-deleted' {
                    source_relpath: $note.source_relpath
                }
            }
        }
    }
}


export def reconcile-run [] {
    ensure-layout
    mut sources = (scan-source-files)

    let unsettled = (
        $sources
        | each {|source|
            {
                source_path: $source.source_path
                remaining: (settle-remaining $source.source_mtime)
            }
        }
        | where remaining > 0sec
    )

    if not ($unsettled | is-empty) {
        let max_remaining = ($unsettled | get remaining | math max)
        print $"Waiting ($max_remaining) for recent Notability uploads to settle"
        sleep ($max_remaining + 2sec)
        $sources = (scan-source-files)
    }

    for source in $sources {
        let existing_rows = (sql-json $"
            select *
            from notes
            where source_relpath = (sql-quote $source.source_relpath)
            limit 1;
        ")
        if (($existing_rows | length) == 0) {
            process-new $source
        } else {
            let existing = ($existing_rows | first)
            process-existing ($existing | upsert source_path $source.source_path) $source
        }
    }

    mark-missing ($sources | get source_relpath)
}


def main [] {
    reconcile-run
}
@@ -1,148 +0,0 @@
#!/usr/bin/env nu

use ./lib.nu *
use ./jobs.nu [archive-and-version, enqueue-job]
use ./worker.nu [worker-run]


def latest-version [note_id: string] {
    sql-json $"
        select *
        from versions
        where note_id = (sql-quote $note_id)
        order by seen_at desc
        limit 1;
    "
    | first
}


def existing-active-job [note_id: string, source_hash: string] {
    sql-json $"
        select job_id
        from jobs
        where note_id = (sql-quote $note_id)
          and source_hash = (sql-quote $source_hash)
          and status != 'done'
          and status != 'failed'
        order by requested_at desc
        limit 1;
    "
    | first
}


def archive-current-source [note: record] {
    if not ($note.source_path | path exists) {
        error make {
            msg: $"Current source path is missing: ($note.source_path)"
        }
    }

    let source_hash = (sha256 $note.source_path)
    let source_size = (((ls -l $note.source_path | first).size) | into int)
    let source_mtime = (((ls -l $note.source_path | first).modified) | format date "%Y-%m-%dT%H:%M:%SZ")
    let version = (archive-and-version $note.note_id $note.source_path $note.source_relpath $source_size $source_mtime $source_hash)

    sql-run $"
        update notes
        set current_source_hash = (sql-quote $source_hash),
            current_source_size = ($source_size),
            current_source_mtime = (sql-quote $source_mtime),
            current_archive_path = (sql-quote $version.archive_path),
            latest_version_id = (sql-quote $version.version_id),
            last_seen_at = (sql-quote (now-iso)),
            status = 'active',
            missing_since = null,
            deleted_at = null
        where note_id = (sql-quote $note.note_id);
    "
    | ignore

    {
        input_path: $version.archive_path
        archive_path: $version.archive_path
        source_hash: $source_hash
    }
}


def enqueue-reingest-job [note: record, source_hash: string, input_path: string, archive_path: string, force_overwrite_generated: bool] {
    let job = (enqueue-job $note 'reingest' $input_path $archive_path $source_hash $note.title $force_overwrite_generated)
    if $job == null {
        let existing = (existing-active-job $note.note_id $source_hash)
        print $"Already queued: ($existing.job_id? | default 'unknown')"
        return
    }

    log-event $note.note_id 'reingest-enqueued' {
        job_id: $job.job_id
        source_hash: $source_hash
        archive_path: $archive_path
        force_overwrite_generated: $force_overwrite_generated
    }

    print $"Enqueued ($job.job_id) for ($note.note_id)"

    try {
        worker-run --drain
    } catch {|error|
        error make {
            msg: (($error.msg? | default ($error | to nuon)) | into string)
        }
    }
}


def main [note_id: string, --latest-source, --latest-archive, --force-overwrite-generated] {
    ensure-layout

    let note_row = (sql-json $"
        select *
        from notes
        where note_id = (sql-quote $note_id)
        limit 1;
    " | first)
    let note = if $note_row == null {
        null
    } else {
        $note_row | upsert source_path ([ (webdav-root) $note_row.source_relpath ] | path join)
    }

    if $note == null {
        error make {
            msg: $"Unknown note id: ($note_id)"
        }
    }

    if $latest_source and $latest_archive {
        error make {
            msg: 'Choose only one of --latest-source or --latest-archive'
        }
    }

    let source_mode = if $latest_source {
        'source'
    } else if $latest_archive {
        'archive'
    } else if ($note.status == 'active' and ($note.source_path | path exists)) {
        'source'
    } else {
        'archive'
    }

    if $source_mode == 'source' {
        let archived = (archive-current-source $note)
        enqueue-reingest-job $note $archived.source_hash $archived.input_path $archived.archive_path $force_overwrite_generated
        return
    }

    let version = (latest-version $note.note_id)
    if $version == null {
        error make {
            msg: $"No archived version found for ($note.note_id)"
        }
    }

    enqueue-reingest-job $note $version.source_hash $version.archive_path $version.archive_path $force_overwrite_generated
}
@@ -1,202 +0,0 @@
#!/usr/bin/env nu

use ./lib.nu *


def format-summary [] {
    let counts = (sql-json '
        select status, count(*) as count
        from notes
        group by status
        order by status;
    ')
    let queue = (sql-json "
        select status, count(*) as count
        from jobs
        where status in ('queued', 'running', 'failed')
        group by status
        order by status;
    ")

    let lines = [
        $"notes db: (db-path)"
        $"webdav root: (webdav-root)"
        $"notes root: (notes-root)"
        ''
        'notes:'
    ]

    let note_statuses = ('active,source_missing,source_deleted,conflict,failed' | split row ',')
    let note_lines = (
        $note_statuses
        | each {|status|
            let row = ($counts | where {|row| ($row | get 'status') == $status } | first)
            let count = ($row.count? | default 0)
            $" ($status): ($count)"
        }
    )

    let job_statuses = ('queued,running,failed' | split row ',')
    let job_lines = (
        $job_statuses
        | each {|status|
            let row = ($queue | where {|row| ($row | get 'status') == $status } | first)
            let count = ($row.count? | default 0)
            $" ($status): ($count)"
        }
    )

    ($lines ++ $note_lines ++ ['' 'jobs:'] ++ $job_lines ++ ['']) | str join "\n"
}


def format-note [note_id: string] {
    let note = (sql-json $"
        select *
        from notes
        where note_id = (sql-quote $note_id)
        limit 1;
    " | first)

    if $note == null {
        error make {
            msg: $"Unknown note id: ($note_id)"
        }
    }

    let jobs = (sql-json $"
        select job_id, operation, status, requested_at, started_at, finished_at, source_hash, error_summary
        from jobs
        where note_id = (sql-quote $note_id)
        order by requested_at desc
        limit 5;
    ")
    let events = (sql-json $"
        select ts, kind, details
        from events
        where note_id = (sql-quote $note_id)
        order by ts desc
        limit 10;
    ")
    let output_exists = ($note.output_path | path exists)
    let frontmatter = (parse-output-frontmatter $note.output_path)

    let lines = [
        $"note_id: ($note.note_id)"
        $"title: ($note.title)"
        $"status: ($note.status)"
        $"source_relpath: ($note.source_relpath)"
        $"output_path: ($note.output_path)"
        $"output_exists: ($output_exists)"
        $"managed_by: ($frontmatter.managed_by? | default '')"
        $"frontmatter_note_id: ($frontmatter.note_id? | default '')"
        $"current_source_hash: ($note.current_source_hash? | default '')"
        $"last_generated_output_hash: ($note.last_generated_output_hash? | default '')"
        $"current_archive_path: ($note.current_archive_path? | default '')"
        $"last_processed_at: ($note.last_processed_at? | default '')"
        $"missing_since: ($note.missing_since? | default '')"
        $"deleted_at: ($note.deleted_at? | default '')"
        $"conflict_path: ($note.conflict_path? | default '')"
        $"last_error: ($note.last_error? | default '')"
        ''
        'recent jobs:'
    ]

    let job_lines = if ($jobs | is-empty) {
        [' (none)']
    } else {
        $jobs | each {|job|
            $" ($job.job_id) [($job.status)] ($job.operation) requested=($job.requested_at) error=($job.error_summary? | default '')"
        }
    }

    let event_lines = if ($events | is-empty) {
        [' (none)']
    } else {
        $events | each {|event|
            $" ($event.ts) ($event.kind) ($event.details? | default '')"
        }
    }

    ($lines ++ $job_lines ++ ['' 'recent events:'] ++ $event_lines ++ ['']) | str join "\n"
}


def format-filtered [status: string, label: string] {
    let notes = (sql-json $"
        select note_id, title, source_relpath, output_path, status, last_error, conflict_path
        from notes
        where status = (sql-quote $status)
        order by last_seen_at desc;
    ")

    let header = [$label]
    let body = if ($notes | is-empty) {
        [' (none)']
    } else {
        $notes | each {|note|
            let extra = if $status == 'conflict' {
                $" conflict_path=($note.conflict_path? | default '')"
            } else if $status == 'failed' {
                $" last_error=($note.last_error? | default '')"
            } else {
                ''
            }
            $" ($note.note_id) ($note.title) [($note.status)] source=($note.source_relpath) output=($note.output_path)($extra)"
        }
    }

    ($header ++ $body ++ ['']) | str join "\n"
}


def format-queue [] {
    let jobs = (sql-json "
        select job_id, note_id, operation, status, requested_at, started_at, error_summary
        from jobs
        where status in ('queued', 'running', 'failed')
        order by requested_at asc;
    ")

    let lines = if ($jobs | is-empty) {
        ['queue' ' (empty)' '']
    } else {
        ['queue'] ++ ($jobs | each {|job|
            $" ($job.job_id) note=($job.note_id) [($job.status)] ($job.operation) requested=($job.requested_at) error=($job.error_summary? | default '')"
        }) ++ ['']
    }

    $lines | str join "\n"
}


def main [note_id?: string, --failed, --queue, --deleted, --conflicts] {
    ensure-layout

    if $queue {
        print (format-queue)
        return
    }

    if $failed {
        print (format-filtered 'failed' 'failed notes')
        return
    }

    if $deleted {
        print (format-filtered 'source_deleted' 'deleted notes')
        return
    }

    if $conflicts {
        print (format-filtered 'conflict' 'conflict notes')
        return
    }

    if $note_id != null {
        print (format-note $note_id)
        return
    }

    print (format-summary)
}
@@ -1,58 +0,0 @@
#!/usr/bin/env nu

use ./lib.nu *
use ./reconcile.nu [reconcile-run]
use ./worker.nu [worker-run]


def error-message [error: any] {
    let msg = (($error.msg? | default '') | into string)
    if $msg == '' {
        $error | to nuon
    } else {
        $msg
    }
}


def run-worker [] {
    try {
        worker-run --drain
    } catch {|error|
        print $"worker failed: (error-message $error)"
    }
}


def run-sync [] {
    run-worker

    try {
        reconcile-run
    } catch {|error|
        print $"reconcile failed: (error-message $error)"
        return
    }

    run-worker
}


def main [] {
    ensure-layout
    let root = (webdav-root)
    print $"Watching ($root) for Notability WebDAV updates"

    run-sync

    ^inotifywait -m -r --format '%w%f' -e create -e close_write -e moved_to -e moved_from -e delete -e attrib $root
    | lines
    | each {|changed_path|
        if not (is-supported-source-path $changed_path) {
            return
        }

        print $"Filesystem event for ($changed_path)"
        run-sync
    }
}
@@ -1,36 +0,0 @@
#!/usr/bin/env nu

use ./lib.nu *


def main [] {
    ensure-layout

    let root = (webdav-root)
    let addr = if ('NOTABILITY_WEBDAV_ADDR' in ($env | columns)) {
        $env.NOTABILITY_WEBDAV_ADDR
    } else {
        '127.0.0.1:9980'
    }
    let user = if ('NOTABILITY_WEBDAV_USER' in ($env | columns)) {
        $env.NOTABILITY_WEBDAV_USER
    } else {
        'notability'
    }
    let baseurl = if ('NOTABILITY_WEBDAV_BASEURL' in ($env | columns)) {
        $env.NOTABILITY_WEBDAV_BASEURL
    } else {
        '/'
    }
    let password_file = if ('NOTABILITY_WEBDAV_PASSWORD_FILE' in ($env | columns)) {
        $env.NOTABILITY_WEBDAV_PASSWORD_FILE
    } else {
        error make {
            msg: 'NOTABILITY_WEBDAV_PASSWORD_FILE is required'
        }
    }
    let password = (open --raw $password_file | str trim)

    print $"Starting WebDAV on ($addr), serving ($root), base URL ($baseurl)"
    run-external rclone 'serve' 'webdav' $root '--addr' $addr '--baseurl' $baseurl '--user' $user '--pass' $password
}
@@ -1,506 +0,0 @@
#!/usr/bin/env nu

use ./lib.nu *

const qmd_debounce = 1min
const idle_sleep = 10sec
const vision_model = 'openai-codex/gpt-5.4'
const transcribe_timeout = '90s'
const normalize_timeout = '60s'


def next-queued-job [] {
    sql-json "
        select job_id, note_id, operation, job_manifest_path, result_path, source_hash
        from jobs
        where status = 'queued'
        order by requested_at asc
        limit 1;
    "
    | first
}


def maybe-update-qmd [] {
    let dirty = (qmd-dirty-file)
    if not ($dirty | path exists) {
        return
    }

    let modified = ((ls -l $dirty | first).modified)
    if ((date now) - $modified) < $qmd_debounce {
        return
    }

    print 'Running qmd update'
    let result = (do {
        cd (notes-root)
        run-external qmd 'update' | complete
    })
    if $result.exit_code != 0 {
        print $"qmd update failed: ($result.stderr | str trim)"
        return
    }

    rm -f $dirty
}


def write-result [result_path: path, payload: record] {
    mkdir ($result_path | path dirname)
    ($payload | to json --indent 2) | save -f $result_path
}


def error-message [error: any] {
    let msg = (($error.msg? | default '') | into string)
    if ($msg == '' or $msg == 'External command failed') {
        $error | to nuon
    } else {
        $msg
    }
}


def unquote [value?: any] {
    if $value == null {
        ''
    } else {
        ($value | into string | str replace -r '^"(.*)"$' '$1' | str replace -r "^'(.*)'$" '$1')
    }
}


def source-format [file: path] {
    (([$file] | path parse | first).extension? | default 'bin' | str downcase)
}


def conflict-path-for [output_path: path] {
    let parsed = ([$output_path] | path parse | first)
    let stamp = ((date now) | format date '%Y-%m-%dT%H-%M-%SZ')
    [$parsed.parent $"($parsed.stem).conflict-($stamp).($parsed.extension)"] | path join
}


def find-managed-outputs [note_id: string] {
    let root = (notes-root)
    if not ($root | path exists) {
        []
    } else {
        (glob $"($root)/**/*.md")
        | where not ($it | str contains '/.')
        | where {|file|
            let parsed = (parse-output-frontmatter $file)
            (unquote ($parsed.managed_by? | default '')) == 'notability-ingest' and (unquote ($parsed.note_id? | default '')) == $note_id
        }
        | sort
    }
}


def resolve-managed-output-path [note_id: string, configured_output_path: path] {
    if ($configured_output_path | path exists) {
        let parsed = (parse-output-frontmatter $configured_output_path)
        let managed_by = (unquote ($parsed.managed_by? | default ''))
        let frontmatter_note_id = (unquote ($parsed.note_id? | default ''))
        if ($managed_by == 'notability-ingest' and $frontmatter_note_id == $note_id) {
            return $configured_output_path
        }
    }

    let discovered = (find-managed-outputs $note_id)
    if ($discovered | is-empty) {
        $configured_output_path
    } else if (($discovered | length) == 1) {
        $discovered | first
    } else {
        error make {
            msg: $"Multiple managed note files found for ($note_id): (($discovered | str join ', '))"
        }
    }
}


def determine-write-target [manifest: record] {
    let output_path = (resolve-managed-output-path $manifest.note_id $manifest.output_path)
    if not ($output_path | path exists) {
        return {
            output_path: $output_path
            write_path: $output_path
            write_mode: 'create'
            updated_main_output: true
        }
    }

    let parsed = (parse-output-frontmatter $output_path)
    let managed_by = (unquote ($parsed.managed_by? | default ''))
    let frontmatter_note_id = (unquote ($parsed.note_id? | default ''))
    if ($managed_by == 'notability-ingest' and $frontmatter_note_id == $manifest.note_id) {
        return {
            output_path: $output_path
            write_path: $output_path
            write_mode: 'overwrite'
            updated_main_output: true
        }
    }

    {
        output_path: $output_path
        write_path: (conflict-path-for $output_path)
        write_mode: 'conflict'
        updated_main_output: false
    }
}


def build-markdown [manifest: record, normalized: string] {
    let body = ($normalized | str trim)
    let output_body = if $body == '' {
        $"# ($manifest.title)"
    } else {
        $body
    }
    let created = ($manifest.requested_at | str substring 0..9)
    let updated = ((date now) | format date '%Y-%m-%d')

    [
        '---'
        $"title: ($manifest.title | to json)"
        $"created: ($created | to json)"
        $"updated: ($updated | to json)"
        'source: "notability"'
        $"source_transport: (($manifest.source_transport? | default 'webdav') | to json)"
        $"source_relpath: ($manifest.source_relpath | to json)"
        $"note_id: ($manifest.note_id | to json)"
        'managed_by: "notability-ingest"'
        $"source_file: ($manifest.archive_path | to json)"
        $"source_file_hash: ($'sha256:($manifest.source_hash)' | to json)"
        $"source_format: ((source-format $manifest.archive_path) | to json)"
        'status: "active"'
        'tags:'
        ' - handwritten'
        ' - notability'
        '---'
        ''
        $output_body
        ''
    ] | str join "\n"
}


def render-pages [input_path: path, job_id: string] {
    let extension = (([$input_path] | path parse | first).extension? | default '' | str downcase)
    if $extension == 'png' {
        [ $input_path ]
    } else if $extension == 'pdf' {
        let render_dir = [(render-root) $job_id] | path join
        mkdir $render_dir
        let prefix = [$render_dir 'page'] | path join
        ^pdftoppm -png -r 200 $input_path $prefix
        let pages = ((glob $"($render_dir)/*.png") | sort)
        if ($pages | is-empty) {
            error make {
                msg: $"No PNG pages rendered from ($input_path)"
            }
        }
        $pages
    } else {
        error make {
            msg: $"Unsupported Notability input format: ($input_path)"
        }
    }
}


def call-pi [timeout_window: string, prompt: string, inputs: list<path>, thinking: string] {
    let prompt_file = (^mktemp --suffix '.md' | str trim)
    $prompt | save -f $prompt_file
    let input_refs = ($inputs | each {|input| $'@($input)' })
    let prompt_ref = $'@($prompt_file)'
    let result = (try {
        ^timeout $timeout_window pi --model $vision_model --thinking $thinking --no-tools --no-session -p ...$input_refs $prompt_ref | complete
    } catch {|error|
        rm -f $prompt_file
        error make {
            msg: (error-message $error)
        }
    })
    rm -f $prompt_file

    let output = ($result.stdout | str trim)
    if $output != '' {
        $output
    } else {
        let stderr = ($result.stderr | str trim)
        if $stderr == '' {
            error make {
                # escape the literal parens so the interpolation does not invoke `exit`
                msg: $"pi returned no output \(exit ($result.exit_code)\)"
            }
        } else {
            error make {
                msg: $"pi returned no output \(exit ($result.exit_code)\): ($stderr)"
            }
        }
    }
}


def ingest-job [manifest: record] {
    mkdir $manifest.session_dir

    let page_paths = (render-pages $manifest.input_path $manifest.job_id)
    let transcribe_prompt = ([
        'Transcribe this note into clean Markdown.'
        ''
        'Read it like a human and reconstruct the intended reading order and structure.'
        ''
        'Do not preserve handwritten layout literally.'
        ''
        'Handwritten line breaks, word stacking, font size changes, and spacing are not semantic structure by default.'
        ''
        'If adjacent handwritten lines clearly belong to one sentence or short phrase, merge them into normal prose with spaces instead of separate Markdown lines.'
        ''
        'Only keep separate lines or blank lines when there is clear evidence of separate paragraphs, headings, list items, checkboxes, or other distinct blocks.'
        ''
        'Keep headings, lists, and paragraphs when they are genuinely present.'
        ''
        'Do not summarize. Do not add commentary. Return Markdown only.'
    ] | str join "\n")
    print $"Transcribing ($manifest.job_id) with page count ($page_paths | length)"
    let transcript = (call-pi $transcribe_timeout $transcribe_prompt $page_paths 'low')
    mkdir ($manifest.transcript_path | path dirname)
    $"($transcript)\n" | save -f $manifest.transcript_path

    let normalize_prompt = ([
        'Rewrite the attached transcription into clean Markdown.'
        ''
        'Preserve the same content and intended structure.'
        ''
        'Collapse layout-only line breaks from handwriting.'
        ''
        'If short adjacent lines are really one sentence or phrase, join them with spaces instead of keeping one line per handwritten row.'
        ''
        'Use separate lines only for real headings, list items, checkboxes, or distinct paragraphs.'
        ''
        'Do not summarize. Return Markdown only.'
    ] | str join "\n")
    print $"Normalizing ($manifest.job_id)"
    let normalized = (call-pi $normalize_timeout $normalize_prompt [ $manifest.transcript_path ] 'off')

    let markdown = (build-markdown $manifest $normalized)
    let target = (determine-write-target $manifest)
    mkdir ($target.write_path | path dirname)
    $markdown | save -f $target.write_path

    {
        success: true
        job_id: $manifest.job_id
        note_id: $manifest.note_id
        archive_path: $manifest.archive_path
        source_hash: $manifest.source_hash
        session_dir: $manifest.session_dir
        output_path: $target.output_path
        output_hash: (if $target.updated_main_output { sha256 $target.write_path } else { null })
        conflict_path: (if $target.write_mode == 'conflict' { $target.write_path } else { null })
        write_mode: $target.write_mode
        updated_main_output: $target.updated_main_output
        transcript_path: $manifest.transcript_path
    }
}

def mark-failure [job: record, running_path: string, error_summary: string, result?: any] {
    let finished_at = (now-iso)
    sql-run $"
        update jobs
        set status = 'failed',
            finished_at = (sql-quote $finished_at),
            error_summary = (sql-quote $error_summary),
            job_manifest_path = (sql-quote (manifest-path-for $job.job_id 'failed'))
        where job_id = (sql-quote $job.job_id);

        update notes
        set status = 'failed',
            last_error = (sql-quote $error_summary)
        where note_id = (sql-quote $job.note_id);
    " | ignore

    if $result != null and ($result.archive_path? | default null) != null {
        sql-run $"
            update versions
            set ingest_result = 'failed',
                session_path = (sql-quote ($result.session_dir? | default ''))
            where archive_path = (sql-quote $result.archive_path);
        " | ignore
    }

    let failed_path = (manifest-path-for $job.job_id 'failed')
    if ($running_path | path exists) {
        mv -f $running_path $failed_path
    }

    log-event $job.note_id 'job-failed' {
        job_id: $job.job_id
        error: $error_summary
    }
}

def mark-success [job: record, running_path: string, result: record] {
    let finished_at = (now-iso)
    let note_status = if ($result.write_mode? | default 'write') == 'conflict' {
        'conflict'
    } else {
        'active'
    }
    let output_path_q = (sql-quote ($result.output_path? | default null))
    let output_hash_update = if ($result.updated_main_output? | default false) {
        sql-quote ($result.output_hash? | default null)
    } else {
        'last_generated_output_hash'
    }
    let source_hash_update = if ($result.updated_main_output? | default false) {
        sql-quote ($result.source_hash? | default null)
    } else {
        'last_generated_source_hash'
    }

    sql-run $"
        update jobs
        set status = 'done',
            finished_at = (sql-quote $finished_at),
            error_summary = null,
            job_manifest_path = (sql-quote (manifest-path-for $job.job_id 'done'))
        where job_id = (sql-quote $job.job_id);

        update notes
        set status = (sql-quote $note_status),
            output_path = ($output_path_q),
            last_processed_at = (sql-quote $finished_at),
            last_generated_output_hash = ($output_hash_update),
            last_generated_source_hash = ($source_hash_update),
            conflict_path = (sql-quote ($result.conflict_path? | default null)),
            last_error = null
        where note_id = (sql-quote $job.note_id);

        update versions
        set ingest_result = 'success',
            session_path = (sql-quote ($result.session_dir? | default ''))
        where archive_path = (sql-quote $result.archive_path);
    " | ignore

    let done_path = (manifest-path-for $job.job_id 'done')
    if ($running_path | path exists) {
        mv -f $running_path $done_path
    }

    ^touch (qmd-dirty-file)

    log-event $job.note_id 'job-finished' {
        job_id: $job.job_id
        write_mode: ($result.write_mode? | default 'write')
        output_path: ($result.output_path? | default '')
        conflict_path: ($result.conflict_path? | default '')
    }
}

def recover-running-jobs [] {
    let jobs = (sql-json "
        select job_id, note_id, job_manifest_path, result_path
        from jobs
        where status = 'running'
        order by started_at asc;
    ")

    for job in $jobs {
        let running_path = (manifest-path-for $job.job_id 'running')
        let result = if ($job.result_path | path exists) {
            open $job.result_path
        } else {
            null
        }
        mark-failure $job $running_path 'worker interrupted before completion' $result
    }
}

def process-job [job: record] {
    let running_path = (manifest-path-for $job.job_id 'running')
    mv -f $job.job_manifest_path $running_path
    sql-run $"
        update jobs
        set status = 'running',
            started_at = (sql-quote (now-iso)),
            job_manifest_path = (sql-quote $running_path)
        where job_id = (sql-quote $job.job_id);
    " | ignore

    print $"Processing ($job.job_id) for ($job.note_id)"

    let manifest = (open $running_path)
    try {
        let result = (ingest-job $manifest)
        write-result $job.result_path $result
        mark-success $job $running_path $result
    } catch {|error|
        let message = (error-message $error)
        let result = {
            success: false
            job_id: $manifest.job_id
            note_id: $manifest.note_id
            archive_path: $manifest.archive_path
            source_hash: $manifest.source_hash
            session_dir: $manifest.session_dir
            error: $message
        }
        write-result $job.result_path $result
        mark-failure $job $running_path $message $result
    }
}

def drain-queued-jobs [] {
    loop {
        let job = (next-queued-job)
        if $job == null {
            maybe-update-qmd
            break
        }

        process-job $job
        maybe-update-qmd
    }
}

export def worker-run [--drain] {
    ensure-layout
    recover-running-jobs
    if $drain {
        drain-queued-jobs
        return
    }

    while true {
        let job = (next-queued-job)
        if $job == null {
            maybe-update-qmd
            sleep $idle_sleep
            continue
        }

        process-job $job
        maybe-update-qmd
    }
}

def main [--drain] {
    worker-run --drain=$drain
}

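The `drain-queued-jobs` / `worker-run` pair above is a plain poll-process loop: fetch the next queued job, process it, and stop (or idle) when the queue is empty. As an illustration only, here is a minimal TypeScript model of the same drain shape; the job type, queue, and function names are hypothetical stand-ins, not part of the actual script:

```typescript
type Job = { id: string };

// Hypothetical in-memory stand-ins for next-queued-job / process-job.
const queue: Job[] = [{ id: "job-1" }, { id: "job-2" }];
const processed: string[] = [];

function nextQueuedJob(): Job | null {
  // Take the oldest queued job, or null when the queue is empty.
  return queue.shift() ?? null;
}

function processJob(job: Job): void {
  processed.push(job.id);
}

// Same shape as drain-queued-jobs: loop until nextQueuedJob returns null.
function drainQueuedJobs(): void {
  for (;;) {
    const job = nextQueuedJob();
    if (job === null) break;
    processJob(job);
  }
}

drainQueuedJobs();
console.log(processed.join(",")); // → job-1,job-2
```

The real worker differs only in the non-drain branch, which sleeps and polls again instead of breaking.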
@@ -6,6 +6,7 @@
- `jj tug` is an alias for `jj bookmark move --from closest_bookmark(@-) --to @-`.
- Never attempt historically destructive Git commands.
- Make small, frequent commits.
- "Commit" means `jj commit`, not `jj desc`; `desc` stays on the same working copy.

## Scripting

85
modules/_opencode/agent/moonshot.md
Normal file
@@ -0,0 +1,85 @@
---
description: Autonomous deep worker — explores thoroughly, acts decisively, finishes the job
mode: primary
model: openai/gpt-5.4
temperature: 0.3
color: "#D97706"
reasoningEffort: xhigh
---

You are an autonomous deep worker for software engineering.

Build context by examining the codebase first. Do not assume. Think through the nuances of the code you encounter. Complete tasks end-to-end within the current turn. Persevere when tool calls fail. Only end your turn when the problem is solved and verified.

When blocked: try a different approach, decompose the problem, challenge assumptions, explore how others solved it. Asking the user is the last resort after exhausting alternatives.

## Do Not Ask — Just Do

FORBIDDEN:
- Asking permission ("Should I proceed?", "Would you like me to...?") — JUST DO IT
- "Do you want me to run tests?" — RUN THEM
- "I noticed Y, should I fix it?" — FIX IT
- Stopping after partial implementation — finish or don't start
- Answering a question then stopping — questions imply action, DO THE ACTION
- "I'll do X" then ending turn — you committed to X, DO X NOW
- Explaining findings without acting on them — ACT immediately

CORRECT:
- Keep going until COMPLETELY done
- Run verification without asking
- Make decisions; course-correct on concrete failure
- Note assumptions in your final message, not as questions mid-work

## Intent Extraction

Every message has surface form and true intent. Extract true intent BEFORE doing anything:

- "Did you do X?" (and you didn't) → Acknowledge, DO X immediately
- "How does X work?" → Explore, then implement/fix
- "Can you look into Y?" → Investigate AND resolve
- "What's the best way to do Z?" → Decide, then implement
- "Why is A broken?" → Diagnose, then fix

A message is pure question ONLY when the user explicitly says "just explain" or "don't change anything". Default: message implies action.

## Task Classification

Classify before acting:

- **Trivial**: Single file, known location, <10 lines — use tools directly, no exploration needed
- **Explicit**: Specific file/line given, clear instruction — execute directly
- **Exploratory**: "How does X work?", "Find Y" — fire parallel searches, then act on findings
- **Open-ended**: "Improve", "Refactor", "Add feature" — full execution loop required
- **Ambiguous**: Unclear scope, multiple interpretations — explore first (search, read, grep), ask only if exploration fails

Default bias: explore before asking. Exhaust tools before asking a clarifying question.

## Execution

1. **EXPLORE**: Search the codebase in parallel — fire multiple reads and searches simultaneously
2. **PLAN**: Identify files to modify, specific changes, dependencies
3. **EXECUTE**: Make the changes
4. **VERIFY**: Check diagnostics on all modified files, run tests, run build

If verification fails, return to step 1. After 3 failed approaches, stop edits, revert to last working state, and explain what you tried.

## Verification Is Mandatory

Before ending your turn, you MUST have:
- All requested functionality fully implemented
- Diagnostics clean on all modified files
- Build passing (if applicable)
- Tests passing (or pre-existing failures documented)
- Evidence for each verification step — "it should work" is not evidence

## Progress

Report what you're doing every ~30 seconds. One or two sentences with a concrete detail — a file path, a pattern found, a decision made.

## Self-Check Before Ending

1. Did the user's message imply action you haven't taken?
2. Did you commit to something ("I'll do X") without doing it?
3. Did you offer to do something instead of doing it?
4. Did you answer a question and stop when work was implied?

If any of these are true, you are not done. Continue working.
49
modules/_opencode/command/albanian-lesson.md
Normal file
@@ -0,0 +1,49 @@
---
description: Turn pasted Albanian lesson into translated notes and solved exercises in zk
---

Process the pasted Albanian lesson content and create two `zk` notes: one for lesson material and one for exercises.

<lesson-material>
$ARGUMENTS
</lesson-material>

Requirements:

1. Parse the lesson content and produce two markdown outputs:
   - `material` output: lesson material only.
   - `exercises` output: exercises and solutions.
2. Use today's date in both notes (date in title and inside content).
3. In the `material` output:
   - Keep clean markdown structure with headings and bullet points.
   - Do not add a top-level title heading (no `# ...`) because `zk new --title` already sets the note title.
   - Translate examples, dialogues, and all lesson texts into English when not already translated.
   - For bigger reading passages, include a word-by-word breakdown.
   - For declension/conjugation/grammar tables, provide a complete table of possibilities relevant to the topic.
   - Spell out numbers only when the source token is Albanian; do not spell out English numbers.
4. In the `exercises` output:
   - Include every exercise in markdown.
   - Do not add a top-level title heading (no `# ...`) because `zk new --title` already sets the note title.
   - Translate each exercise to English.
   - Solve all non-free-writing tasks (multiple choice, fill in the blanks, etc.) and include example solutions.
   - For free-writing tasks, provide expanded examples using basic vocabulary from the lesson (if prompted for 3, provide 10).
   - Translate free-writing example answers into English.
   - Spell out numbers only when the source token is Albanian; do not spell out English numbers.

Execution steps:

1. Generate two markdown contents in memory (do not create temporary files):
   - `MATERIAL_CONTENT`
   - `EXERCISES_CONTENT`
2. Set `TODAY="$(date +%F)"` once and reuse it for both notes.
3. Create note 1 with `zk` by piping markdown directly to stdin:
   - Title format: `Albanian Lesson Material - YYYY-MM-DD`
   - Command pattern:
     - `printf "%s\n" "$MATERIAL_CONTENT" | zk new --interactive --title "Albanian Lesson Material - $TODAY" --date "$TODAY" --print-path`
4. Create note 2 with `zk` by piping markdown directly to stdin:
   - Title format: `Albanian Lesson Exercises - YYYY-MM-DD`
   - Command pattern:
     - `printf "%s\n" "$EXERCISES_CONTENT" | zk new --interactive --title "Albanian Lesson Exercises - $TODAY" --date "$TODAY" --print-path`
5. Print both created note paths and a short checklist of what was included.

If no lesson material was provided in `$ARGUMENTS`, stop and ask the user to paste it.
108
modules/_opencode/command/inbox-triage.md
Normal file
@@ -0,0 +1,108 @@
---
description: Triage inbox one message at a time with himalaya only
---

Process email with strict manual triage using Himalaya only.

Hard requirements:
- Use `himalaya` for every mailbox interaction (folders, listing, reading, moving, deleting, attachments).
- Process exactly one message ID at a time. Never run bulk actions on multiple IDs.
- Do not use pattern-matching commands or searches (`grep`, `rg`, `awk`, `sed`, `himalaya envelope list` query filters, etc.).
- Always inspect current folders first, then triage.
- Treat this as a single deterministic run over a snapshot of message IDs discovered during this run.
- Ingest valuable document attachments into Paperless (see Document Ingestion section below).

Workflow:
1. Run `himalaya folder list` first and use those folders as the primary taxonomy.
2. Use this existing folder set as defaults when it fits:
   - `INBOX`
   - `Correspondence`
   - `Orders and Invoices`
   - `Payments`
   - `Outgoing Shipments`
   - `Newsletters and Marketing`
   - `Junk`
   - `Deleted Messages`
3. Determine source folder:
   - If `$ARGUMENTS` is a single known folder name (matches a folder from step 1), use that as source.
   - Otherwise use `INBOX`.
4. Build a run scope safely:
   - List with fixed page size `20` and JSON output: `himalaya envelope list -f "<source>" -p 1 -s 20 --output json`.
   - Start at page `1`. Enumerate IDs in returned order.
   - Process each ID fully before touching the next ID.
   - Keep an in-memory reviewed set for this run to avoid reprocessing IDs already handled or intentionally left untouched.
   - When all IDs on the current page are in the reviewed set, advance to the next page.
   - Stop when a page returns fewer results than the page size (end of folder) and all its IDs are in the reviewed set.
5. For each single envelope ID, do all checks before any move/delete:
   - Check envelope flags from the JSON listing (seen/answered/flagged) before reading.
   - Read the message: `himalaya message read -f "<source>" <id>`.
   - If needed for classification or ingestion, download attachments: `himalaya attachment download -f "<source>" <id> --dir /tmp/himalaya-triage`.
   - If the message qualifies for document ingestion (see Document Ingestion below), copy eligible attachments to the Paperless consume directory before cleanup.
   - Always `rm` downloaded files from `/tmp/himalaya-triage` after processing (whether ingested or not).
   - Move: `himalaya message move -f "<source>" "<destination>" <id>`.
   - Delete: `himalaya message delete -f "<source>" <id>`.
6. Classification precedence (higher rule wins on conflict):
   - **Actionable and unhandled** — if the message needs a reply, requires manual payment, needs a confirmation, or demands any human action, AND has NOT been replied to (no `answered` flag), leave it in the source folder untouched. This is the highest-priority rule: anything that still needs attention stays in `INBOX`.
   - Human correspondence already handled — freeform natural-language messages written by a human that have been replied to (`answered` flag set): move to `Correspondence`.
   - Human communication not yet replied to but not clearly actionable — when in doubt whether a human message requires action, leave it untouched.
   - Clearly ephemeral automated/system message (alerts, bot/status updates, OTP/2FA, password reset codes, login codes) with no archival value: move to `Deleted Messages`.
   - Automatic payment transaction notifications (charge/payment confirmations, receipts, failed-payment notices, provider payment events such as Klarna/PayPal/Stripe) that are purely informational and require no action: move to `Payments`.
   - Subscription renewal notifications (auto-renew reminders, "will renew soon", price-change notices without a concrete transaction) are operational alerts, not payment records: move to `Deleted Messages`.
   - Installment plan activation notifications (e.g. Barclays installment purchase confirmations) are operational confirmations, not payment records: move to `Deleted Messages`.
   - "Kontoauszug verfügbar/ist online" ("account statement available/is online") notifications are availability alerts, not payment records: move to `Deleted Messages`.
   - Orders/invoices/business records: move to `Orders and Invoices`.
   - Shipping/tracking notifications (dispatch confirmations, carrier updates, delivery ETAs) without invoice or order-document value: move to `Deleted Messages`.
   - Marketing/newsletters: move to `Newsletters and Marketing`.
   - Delivery/submission confirmations for items you shipped outbound: move to `Outgoing Shipments`.
   - Long-term but uncategorized messages: create a concise new folder and move there.
7. Folder creation rule:
   - Create a new folder only if no existing folder fits and the message should be kept.
   - Naming constraints: concise topic name, avoid duplicates, and avoid broad catch-all names.
   - Command: `himalaya folder add "<new-folder>"`.

Document Ingestion (Paperless):
- **Purpose**: Automatically archive valuable document attachments into Paperless via its consumption directory.
- **Ingestion path**: `/var/lib/paperless/consume/inbox-triage/`
- **When to ingest**: Only for messages whose attachments have long-term archival value. Eligible categories:
  - Invoices, receipts, and billing statements (messages going to `Orders and Invoices` or `Payments`)
  - Contracts, agreements, and legal documents
  - Tax documents, account statements, and financial summaries
  - Insurance documents and policy papers
  - Official correspondence with document attachments (government, institutions)
- **When NOT to ingest**:
  - Marketing emails, newsletters, promotional material
  - Shipping/tracking notifications without invoice attachments
  - OTP codes, login alerts, password resets, ephemeral notifications
  - Subscription renewal reminders without actual invoices
  - Duplicate documents already seen in this run
  - Inline images, email signatures, logos, and non-document attachments
- **Eligible file types**: PDF, PNG, JPG/JPEG, TIFF, WEBP (documents and scans only). Skip archive files (ZIP, etc.), calendar invites (ICS), and other non-document formats.
- **Procedure**:
  1. After downloading attachments to `/tmp/himalaya-triage`, check if any are eligible documents.
  2. Copy eligible files: `cp /tmp/himalaya-triage/<filename> /var/lib/paperless/consume/inbox-triage/`
  3. If multiple messages could produce filename collisions, prefix the filename with the message ID: `<id>-<filename>`.
  4. Log each ingested file in the action log at the end of the run.
- **Conservative rule**: When in doubt whether an attachment is worth archiving, skip it. Paperless storage is cheap, but noise degrades searchability. Prefer false negatives over false positives for marketing material, but prefer false positives over false negatives for anything that looks like a financial or legal document.

Execution rules:
- Never perform bulk operations. One message ID per `read`, `move`, `delete`, and attachment command.
- Always use page size 20 for envelope listing (`-s 20`).
- If any single-ID command fails, log the error and continue with the next unreviewed ID.
- Never skip reading message content before deciding.
- Keep decisions conservative: when in doubt about whether something needs action, leave it in `INBOX`.
- Never move or delete unhandled actionable messages.
- Never move human communications that haven't been replied to, unless clearly non-actionable.
- Define "processed" as "reviewed once in this run" (including intentionally untouched human messages).
- Include only messages observed during this run's listings; if new mail arrives mid-run, leave it for the next run.
- Report a compact action log at the end with:
  - source folder,
  - total reviewed IDs,
  - counts by action (untouched/moved-to-folder/deleted),
  - per-destination-folder counts,
  - created folders,
  - documents ingested to Paperless (count and filenames),
  - short rationale for non-obvious classifications.

<user-request>
$ARGUMENTS
</user-request>
17
modules/_opencode/command/session-export.md
Normal file
@@ -0,0 +1,17 @@
---
description: Add AI session summary to GitHub PR or GitLab MR description
---

Update the PR/MR description with an AI session export summary.

First, invoke the skill tool to load the session-export skill:

```
skill({ name: 'session-export' })
```

Then follow the skill instructions to export the session summary.

<user-request>
$ARGUMENTS
</user-request>
49
modules/_opencode/plugin/block-git.ts
Normal file
@@ -0,0 +1,49 @@
import type { Plugin } from "@opencode-ai/plugin";

const COMMAND_PREFIXES = new Set([
  "env",
  "command",
  "builtin",
  "time",
  "sudo",
  "nohup",
  "nice",
]);

function findCommandWord(words: string[]): string | undefined {
  for (const word of words) {
    if (COMMAND_PREFIXES.has(word)) continue;
    if (/^[A-Za-z_][A-Za-z0-9_]*=/.test(word)) continue;
    return word;
  }
  return undefined;
}

function segmentHasGit(words: string[]): boolean {
  const cmd = findCommandWord(words);
  return cmd === "git";
}

function containsBlockedGit(command: string): boolean {
  const segments = command.split(/\s*(?:&&|\|\||[;&|]|\$\(|`)\s*/);
  for (const segment of segments) {
    const words = segment.trim().split(/\s+/).filter(Boolean);
    if (segmentHasGit(words)) return true;
  }
  return false;
}

export const BlockGitPlugin: Plugin = async () => {
  return {
    "tool.execute.before": async (input, output) => {
      if (input.tool === "bash") {
        const command = output.args.command as string;
        if (containsBlockedGit(command)) {
          throw new Error(
            "This project uses jj, only use `jj` commands, not `git`.",
          );
        }
      }
    },
  };
};
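As a rough standalone check (not part of the plugin itself), the detection logic above can be exercised by re-declaring the same helpers and probing a few commands. It shows why the prefix set and assignment regex matter: `sudo` and `VAR=value` words are skipped so the real command word is inspected.

```typescript
// Re-implementation of the plugin's detection logic, for illustration only.
const COMMAND_PREFIXES = new Set([
  "env", "command", "builtin", "time", "sudo", "nohup", "nice",
]);

function findCommandWord(words: string[]): string | undefined {
  for (const word of words) {
    if (COMMAND_PREFIXES.has(word)) continue; // skip wrappers like sudo/env
    if (/^[A-Za-z_][A-Za-z0-9_]*=/.test(word)) continue; // skip VAR=value assignments
    return word;
  }
  return undefined;
}

function containsBlockedGit(command: string): boolean {
  // Split on &&, ||, ;, &, |, $( and backticks so each simple command is checked.
  const segments = command.split(/\s*(?:&&|\|\||[;&|]|\$\(|`)\s*/);
  return segments.some(
    (segment) =>
      findCommandWord(segment.trim().split(/\s+/).filter(Boolean)) === "git",
  );
}

console.log(containsBlockedGit("git status")); // → true
console.log(containsBlockedGit("sudo GIT_PAGER=cat git log")); // → true: wrapper and env var are skipped
console.log(containsBlockedGit("jj diff && echo done")); // → false
```

Note the word-based match means an argument such as `echo git` in a later segment would also trip the check; that trade-off keeps the blocker simple.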
19
modules/_opencode/plugin/block-scripting.ts
Normal file
@@ -0,0 +1,19 @@
import type { Plugin } from "@opencode-ai/plugin";

const SCRIPTING_PATTERN =
  /(?:^|[;&|]\s*|&&\s*|\|\|\s*|\$\(\s*|`\s*)(?:python[23]?|perl|ruby|php|lua|node\s+-e|bash\s+-c|sh\s+-c)\s/;

export const BlockScriptingPlugin: Plugin = async () => {
  return {
    "tool.execute.before": async (input, output) => {
      if (input.tool === "bash") {
        const command = output.args.command as string;
        if (SCRIPTING_PATTERN.test(command)) {
          throw new Error(
            "Do not use python, perl, ruby, php, lua, or inline bash/sh for scripting. Use `nu -c` instead.",
          );
        }
      }
    },
  };
};
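A quick sketch of how `SCRIPTING_PATTERN` behaves (the regex is reproduced here verbatim so the check is standalone): the first group anchors the match at the start of a command segment, the second names the blocked interpreters, and the trailing `\s` requires an argument to follow.

```typescript
// Same pattern as in the plugin above, reproduced for a standalone check.
const SCRIPTING_PATTERN =
  /(?:^|[;&|]\s*|&&\s*|\|\|\s*|\$\(\s*|`\s*)(?:python[23]?|perl|ruby|php|lua|node\s+-e|bash\s+-c|sh\s+-c)\s/;

console.log(SCRIPTING_PATTERN.test("python3 script.py")); // → true
console.log(SCRIPTING_PATTERN.test("echo hi && perl -e 1")); // → true: interpreter after a chain operator
console.log(SCRIPTING_PATTERN.test("nu -c 'ls | length'")); // → false: nu is the allowed alternative
```

One consequence of segment anchoring: a bare `node script.js` passes (only `node -e` inline evaluation is blocked).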
18
modules/_opencode/plugin/direnv.ts
Normal file
@@ -0,0 +1,18 @@
import type { Plugin } from "@opencode-ai/plugin";

export const DirenvPlugin: Plugin = async ({ $ }) => {
  return {
    "shell.env": async (input, output) => {
      try {
        const exported = await $`direnv export json`
          .cwd(input.cwd)
          .quiet()
          .json();

        Object.assign(output.env, exported);
      } catch (error) {
        console.warn("[direnv] failed to export env:", error);
      }
    },
  };
};
1268
modules/_opencode/plugin/review.ts
Normal file
File diff suppressed because it is too large
41
modules/_opencode/skill/frontend-design/SKILL.md
Normal file
@@ -0,0 +1,41 @@
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, artifacts, posters, or applications (examples include websites, landing pages, dashboards, React components, HTML/CSS layouts, or when styling/beautifying any web UI). Generates creative, polished code and UI design that avoids generic AI aesthetics.
---

This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.

The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints.

## Design Thinking

Before coding, understand the context and commit to a BOLD aesthetic direction:
- **Purpose**: What problem does this interface solve? Who uses it?
- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, etc. There are many flavors to choose from; use these for inspiration, but design one that is true to the aesthetic direction.
- **Constraints**: Technical requirements (framework, performance, accessibility).
- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember?

**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work - the key is intentionality, not intensity.

Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is:
- Production-grade and functional
- Visually striking and memorable
- Cohesive with a clear aesthetic point-of-view
- Meticulously refined in every detail

## Frontend Aesthetics Guidelines

Focus on:
- **Typography**: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive, characterful choices that elevate the frontend's aesthetics. Pair a distinctive display font with a refined body font.
- **Color & Theme**: Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes.
- **Motion**: Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use the Motion library for React when available. Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise.
- **Spatial Composition**: Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
- **Backgrounds & Visual Details**: Create atmosphere and depth rather than defaulting to solid colors. Add contextual effects and textures that match the overall aesthetic. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays.

NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.

Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.

**IMPORTANT**: Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details. Elegance comes from executing the vision well.

Remember: Claude is capable of extraordinary creative work. Don't hold back, show what can truly be created when thinking outside the box and committing fully to a distinctive vision.
123
modules/_opencode/skill/librarian/SKILL.md
Normal file
@@ -0,0 +1,123 @@
---
name: librarian
description: Multi-repository codebase exploration. Research library internals, find code patterns, understand architecture, compare implementations across GitHub/npm/PyPI/crates. Use when needing deep understanding of how libraries work, finding implementations across open source, or exploring remote repository structure.
references:
  - references/tool-routing.md
  - references/opensrc-api.md
  - references/opensrc-examples.md
  - references/linking.md
  - references/diagrams.md
---

# Librarian Skill

Deep codebase exploration across remote repositories.

## How to Use This Skill

### Reference Structure

| File | Purpose | When to Read |
|------|---------|--------------|
| `tool-routing.md` | Tool selection decision trees | **Always read first** |
| `opensrc-api.md` | API reference, types | Writing opensrc code |
| `opensrc-examples.md` | JavaScript patterns, workflows | Implementation examples |
| `linking.md` | GitHub URL patterns | Formatting responses |
| `diagrams.md` | Mermaid patterns | Visualizing architecture |

### Reading Order

1. **Start** with `tool-routing.md` → choose tool strategy
2. **If using opensrc:**
   - Read `opensrc-api.md` for API details
   - Read `opensrc-examples.md` for patterns
3. **Before responding:** `linking.md` + `diagrams.md` for output formatting

## Tool Arsenal

| Tool | Best For | Limitations |
|------|----------|-------------|
| **grep_app** | Find patterns across ALL public GitHub | Literal search only |
| **context7** | Library docs, API examples, usage | Known libraries only |
| **opensrc** | Fetch full source for deep exploration | Must fetch before read |

## Quick Decision Trees

### "How does X work?"

```
Known library?
├─ Yes → context7.resolve-library-id → context7.query-docs
│        └─ Need internals? → opensrc.fetch → read source
└─ No → grep_app search → opensrc.fetch top result
```

### "Find pattern X"

```
Specific repo?
├─ Yes → opensrc.fetch → opensrc.grep → read matches
└─ No → grep_app (broad) → opensrc.fetch interesting repos
```

### "Explore repo structure"

```
1. opensrc.fetch(target)
2. opensrc.tree(source.name) → quick overview
3. opensrc.files(source.name, "**/*.ts") → detailed listing
4. Read: README, package.json, src/index.*
5. Create architecture diagram (see diagrams.md)
```

### "Compare X vs Y"

```
1. opensrc.fetch(["X", "Y"])
2. Use source.name from results for subsequent calls
3. opensrc.grep(pattern, { sources: [nameX, nameY] })
4. Read comparable files, synthesize differences
```

## Critical: Source Naming Convention

**After fetching, always use `source.name` for subsequent calls:**

```javascript
const [{ source }] = await opensrc.fetch("vercel/ai");
const files = await opensrc.files(source.name, "**/*.ts");
```

| Type | Fetch Spec | Source Name |
|------|------------|-------------|
| npm | `"zod"` | `"zod"` |
| npm scoped | `"@tanstack/react-query"` | `"@tanstack/react-query"` |
| pypi | `"pypi:requests"` | `"requests"` |
| crates | `"crates:serde"` | `"serde"` |
| GitHub | `"vercel/ai"` | `"github.com/vercel/ai"` |
| GitLab | `"gitlab:org/repo"` | `"gitlab.com/org/repo"` |

## When NOT to Use opensrc

| Scenario | Use Instead |
|----------|-------------|
| Simple library API questions | context7 |
| Finding examples across many repos | grep_app |
| Very large monorepos (>10GB) | Clone locally |
| Private repositories | Direct access |

## Output Guidelines

1. **Comprehensive final message** - only the last message returns to the main agent
2. **Parallel tool calls** - maximize efficiency
3. **Link every file reference** - see `linking.md`
4. **Diagram complex relationships** - see `diagrams.md`
5. **Never mention tool names** - say "I'll search" not "I'll use opensrc"

## References

- [Tool Routing Decision Trees](references/tool-routing.md)
- [opensrc API Reference](references/opensrc-api.md)
- [opensrc Code Examples](references/opensrc-examples.md)
- [GitHub Linking Patterns](references/linking.md)
- [Mermaid Diagram Patterns](references/diagrams.md)
51
modules/_opencode/skill/librarian/references/diagrams.md
Normal file
@@ -0,0 +1,51 @@
# Mermaid Diagram Patterns

Create diagrams for:
- Architecture (component relationships)
- Data flow (request → response)
- Dependencies (import graph)
- Sequences (step-by-step processes)

## Architecture

```mermaid
graph TD
    A[Client] --> B[API Gateway]
    B --> C[Auth Service]
    B --> D[Data Service]
    D --> E[(Database)]
```

## Flow

```mermaid
flowchart LR
    Input --> Parse --> Validate --> Transform --> Output
```

## Sequence

```mermaid
sequenceDiagram
    Client->>+Server: Request
    Server->>+DB: Query
    DB-->>-Server: Result
    Server-->>-Client: Response
```

## When to Use

| Type | Use For |
|------|---------|
| `graph TD` | Component hierarchy, dependencies |
| `flowchart LR` | Data transformation, pipelines |
| `sequenceDiagram` | Request/response, multi-party interaction |
| `classDiagram` | Type relationships, inheritance |
| `stateDiagram` | State machines, lifecycle |

## Tips

- Keep nodes short (3-4 words max)
- Use subgraphs for grouping related components
- Arrow labels for relationship types
- Prefer LR (left-right) for flows, TD (top-down) for hierarchies
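One of the tips above (grouping with subgraphs) applied to the earlier architecture example; the group names here are illustrative:

```mermaid
graph TD
    subgraph Edge
        A[Client] --> B[API Gateway]
    end
    subgraph Services
        C[Auth Service]
        D[Data Service]
    end
    B --> C
    B --> D
    D --> E[(Database)]
```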
61
modules/_opencode/skill/librarian/references/linking.md
Normal file
@@ -0,0 +1,61 @@
# GitHub Linking Patterns

All file/dir/code refs → fluent markdown links. Never raw URLs.

## URL Formats

### File
```
https://github.com/{owner}/{repo}/blob/{ref}/{path}
```

### File + Lines
```
https://github.com/{owner}/{repo}/blob/{ref}/{path}#L{start}-L{end}
```

### Directory
```
https://github.com/{owner}/{repo}/tree/{ref}/{path}
```

### GitLab (note `/-/blob/`)
```
https://gitlab.com/{owner}/{repo}/-/blob/{ref}/{path}
```

## Ref Resolution

| Source | Use as ref |
|--------|------------|
| Known version | `v{version}` |
| Default branch | `main` or `master` |
| opensrc fetch | ref from result |
| Specific commit | full SHA |

## Examples

### Correct
```markdown
The [`parseAsync`](https://github.com/colinhacks/zod/blob/main/src/types.ts#L450-L480) method handles...
```

### Wrong
```markdown
See https://github.com/colinhacks/zod/blob/main/src/types.ts#L100
The parseAsync method in src/types.ts handles...
```

## Line Numbers

- Single: `#L42`
- Range: `#L42-L50`
- Prefer ranges for context (2-5 lines around key code)

## Registry → GitHub

| Registry | Find repo in |
|----------|--------------|
| npm | `package.json` → `repository` |
| PyPI | `pyproject.toml` or `setup.py` |
| crates | `Cargo.toml` |
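For npm, turning the `repository` field from the table above into a browsable GitHub URL can be sketched as follows. This is a hypothetical helper, not part of the opensrc API; the parsed `package.json` would come from reading the fetched source:

```javascript
// Hypothetical helper: normalize a package.json "repository" field
// (string or object form) to a browsable GitHub URL.
const repoUrl = (pkg) => {
  const r = pkg.repository;
  const raw = typeof r === "string" ? r : (r && r.url) || "";
  return raw
    .replace(/^git\+/, "")      // "git+https://..." → "https://..."
    .replace(/^git:\/\//, "https://")
    .replace(/\.git$/, "");     // drop trailing ".git"
};

repoUrl({ repository: "git+https://github.com/colinhacks/zod.git" });
// → "https://github.com/colinhacks/zod"
```

Note that some packages use the shorthand `"owner/repo"` string form, which this sketch does not handle.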
235
modules/_opencode/skill/librarian/references/opensrc-api.md
Normal file
@@ -0,0 +1,235 @@
# opensrc API Reference

## Tool

Use the **opensrc MCP server** via a single tool:

| Tool | Purpose |
|------|---------|
| `opensrc_execute` | All operations (fetch, read, grep, files, remove, etc.) |

Takes a `code` parameter: a JavaScript async arrow function executed server-side. Source trees stay on the server; only results return.
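A minimal `code` payload might look like this (illustrative; it runs inside the server sandbox where the `opensrc` global exists, so it is not runnable standalone):

```javascript
// Evaluated server-side by opensrc_execute; only the returned
// value is sent back to the client.
async () => {
  const sources = opensrc.list();
  return sources.map(s => ({ name: s.name, type: s.type }));
}
```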
## API Surface

### Read Operations

```typescript
// List all fetched sources
opensrc.list(): Source[]

// Check if source exists
opensrc.has(name: string, version?: string): boolean

// Get source metadata
opensrc.get(name: string): Source | undefined

// List files with optional glob
opensrc.files(sourceName: string, glob?: string): Promise<FileEntry[]>

// Get directory tree structure (default depth: 3)
opensrc.tree(sourceName: string, options?: { depth?: number }): Promise<TreeNode>

// Regex search file contents
opensrc.grep(pattern: string, options?: GrepOptions): Promise<GrepResult[]>

// AST-based semantic code search
opensrc.astGrep(sourceName: string, pattern: string, options?: AstGrepOptions): Promise<AstGrepMatch[]>

// Read single file
opensrc.read(sourceName: string, filePath: string): Promise<string>

// Batch read multiple files (supports globs!)
opensrc.readMany(sourceName: string, paths: string[]): Promise<Record<string, string>>

// Parse fetch spec
opensrc.resolve(spec: string): Promise<ParsedSpec>
```

### Mutation Operations

```typescript
// Fetch packages/repos
opensrc.fetch(specs: string | string[], options?: { modify?: boolean }): Promise<FetchedSource[]>

// Remove sources
opensrc.remove(names: string[]): Promise<RemoveResult>

// Clean by type
opensrc.clean(options?: CleanOptions): Promise<RemoveResult>
```

## Types

### Source

```typescript
interface Source {
  type: "npm" | "pypi" | "crates" | "repo";
  name: string; // Use this for all subsequent calls
  version?: string;
  ref?: string;
  path: string;
  fetchedAt: string;
  repository: string;
}
```

### FetchedSource

```typescript
interface FetchedSource {
  source: Source; // IMPORTANT: use source.name for subsequent calls
  alreadyExists: boolean;
}
```

### GrepOptions

```typescript
interface GrepOptions {
  sources?: string[]; // Filter to specific sources
  include?: string; // File glob pattern (e.g., "*.ts")
  maxResults?: number; // Limit results (default: 100)
}
```

### GrepResult

```typescript
interface GrepResult {
  source: string;
  file: string;
  line: number;
  content: string;
}
```

### AstGrepOptions

```typescript
interface AstGrepOptions {
  glob?: string; // File glob pattern (e.g., "**/*.ts")
  lang?: string | string[]; // Language(s): "js", "ts", "tsx", "html", "css"
  limit?: number; // Max results (default: 1000)
}
```

### AstGrepMatch

```typescript
interface AstGrepMatch {
  file: string;
  line: number;
  column: number;
  endLine: number;
  endColumn: number;
  text: string; // Matched code text
  metavars: Record<string, string>; // Captured $VAR → text
}
```

#### AST Pattern Syntax

| Pattern | Matches |
|---------|---------|
| `$NAME` | Single node, captures to metavars |
| `$$$ARGS` | Zero or more nodes (variadic), captures |
| `$_` | Single node, no capture |
| `$$$` | Zero or more nodes, no capture |
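The pattern syntax above composes; for instance, capturing import statements (assumes a source named `zod` was fetched earlier, and runs inside the `opensrc_execute` sandbox):

```javascript
async () => {
  // $MOD and $$$IMPORTS are captured into metavars for each match.
  const matches = await opensrc.astGrep("zod", 'import { $$$IMPORTS } from "$MOD"', {
    lang: "ts",
    limit: 20
  });
  return matches.map(m => ({ file: m.file, module: m.metavars.MOD }));
}
```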
### FileEntry

```typescript
interface FileEntry {
  path: string;
  size: number;
  isDirectory: boolean;
}
```

### TreeNode

```typescript
interface TreeNode {
  name: string;
  type: "file" | "dir";
  children?: TreeNode[]; // only for dirs
}
```

### CleanOptions

```typescript
interface CleanOptions {
  packages?: boolean;
  repos?: boolean;
  npm?: boolean;
  pypi?: boolean;
  crates?: boolean;
}
```

### RemoveResult

```typescript
interface RemoveResult {
  success: boolean;
  removed: string[];
}
```

## Error Handling

Operations throw on errors. Wrap in try/catch if needed:

```javascript
async () => {
  try {
    const content = await opensrc.read("zod", "missing.ts");
    return content;
  } catch (e) {
    return { error: e.message };
  }
}
```

`readMany` returns errors as string values prefixed with `[Error:`:

```javascript
const files = await opensrc.readMany("zod", ["exists.ts", "missing.ts"]);
// { "exists.ts": "content...", "missing.ts": "[Error: ENOENT...]" }

// Filter successful reads
const successful = Object.entries(files)
  .filter(([_, content]) => !content.startsWith("[Error:"));
```

## Package Spec Formats

| Format | Example | Source Name After Fetch |
|--------|---------|------------------------|
| `<name>` | `"zod"` | `"zod"` |
| `<name>@<version>` | `"zod@3.22.0"` | `"zod"` |
| `pypi:<name>` | `"pypi:requests"` | `"requests"` |
| `crates:<name>` | `"crates:serde"` | `"serde"` |
| `owner/repo` | `"vercel/ai"` | `"github.com/vercel/ai"` |
| `owner/repo@ref` | `"vercel/ai@v1.0.0"` | `"github.com/vercel/ai"` |
| `gitlab:owner/repo` | `"gitlab:org/repo"` | `"gitlab.com/org/repo"` |

## Critical Pattern

**Always capture `source.name` from fetch results:**

```javascript
async () => {
  const [{ source }] = await opensrc.fetch("vercel/ai");

  // GitHub repos: "vercel/ai" → "github.com/vercel/ai"
  const sourceName = source.name;

  // Use sourceName for ALL subsequent calls
  const files = await opensrc.files(sourceName, "src/**/*.ts");
  return files;
}
```
336
modules/_opencode/skill/librarian/references/opensrc-examples.md
Normal file
@@ -0,0 +1,336 @@
# opensrc Code Examples

## Workflow: Fetch → Explore

### Basic Fetch and Explore with tree()

```javascript
async () => {
  const [{ source }] = await opensrc.fetch("vercel/ai");
  // Get directory structure first
  const tree = await opensrc.tree(source.name, { depth: 2 });
  return tree;
}
```

### Fetch and Read Key Files

```javascript
async () => {
  const [{ source }] = await opensrc.fetch("vercel/ai");
  const sourceName = source.name; // "github.com/vercel/ai"

  const files = await opensrc.readMany(sourceName, [
    "package.json",
    "README.md",
    "src/index.ts"
  ]);

  return { sourceName, files };
}
```

### readMany with Globs

```javascript
async () => {
  const [{ source }] = await opensrc.fetch("zod");
  // Read all package.json files in monorepo
  const files = await opensrc.readMany(source.name, [
    "packages/*/package.json" // globs supported!
  ]);
  return Object.keys(files);
}
```

### Batch Fetch Multiple Packages

```javascript
async () => {
  const results = await opensrc.fetch(["zod", "valibot", "yup"]);
  const names = results.map(r => r.source.name);

  // Compare how each handles string validation
  const comparisons = {};
  for (const name of names) {
    const matches = await opensrc.grep("string.*validate|validateString", {
      sources: [name],
      include: "*.ts",
      maxResults: 10
    });
    comparisons[name] = matches.map(m => `${m.file}:${m.line}`);
  }
  return comparisons;
}
```

## Search Patterns

### Grep → Read Context

```javascript
async () => {
  const matches = await opensrc.grep("export function parse\\(", {
    sources: ["zod"],
    include: "*.ts"
  });

  if (matches.length === 0) return "No matches";

  const match = matches[0];
  const content = await opensrc.read(match.source, match.file);
  const lines = content.split("\n");

  // Return 40 lines starting from match
  return {
    file: match.file,
    code: lines.slice(match.line - 1, match.line + 39).join("\n")
  };
}
```

### Search Across All Fetched Sources

```javascript
async () => {
  const sources = opensrc.list();
  const results = {};

  for (const source of sources) {
    const errorHandling = await opensrc.grep("throw new|catch \\(|\\.catch\\(", {
      sources: [source.name],
      include: "*.ts",
      maxResults: 20
    });
    results[source.name] = {
      type: source.type,
      errorPatterns: errorHandling.length
    };
  }

  return results;
}
```

## AST-Based Search

Use `astGrep` for semantic code search with pattern matching.

### Find Function Declarations

```javascript
async () => {
  const [{ source }] = await opensrc.fetch("lodash");

  const fns = await opensrc.astGrep(source.name, "function $NAME($$$ARGS) { $$$BODY }", {
    lang: "js",
    limit: 20
  });

  return fns.map(m => ({
    file: m.file,
    line: m.line,
    name: m.metavars.NAME
  }));
}
```

### Find React Hooks Usage

```javascript
async () => {
  const [{ source }] = await opensrc.fetch("vercel/ai");

  const stateHooks = await opensrc.astGrep(
    source.name,
    "const [$STATE, $SETTER] = useState($$$INIT)",
    { lang: ["ts", "tsx"], limit: 50 }
  );

  return stateHooks.map(m => ({
    file: m.file,
    state: m.metavars.STATE,
    setter: m.metavars.SETTER
  }));
}
```

### Find Class Definitions with Context

```javascript
async () => {
  const [{ source }] = await opensrc.fetch("zod");

  const classes = await opensrc.astGrep(source.name, "class $NAME", {
    glob: "**/*.ts"
  });

  const details = [];
  for (const cls of classes.slice(0, 5)) {
    const content = await opensrc.read(source.name, cls.file);
    const lines = content.split("\n");
    details.push({
      name: cls.metavars.NAME,
      file: cls.file,
      preview: lines.slice(cls.line - 1, cls.line + 9).join("\n")
    });
  }
  return details;
}
```

### Compare Export Patterns Across Libraries

```javascript
async () => {
  const results = await opensrc.fetch(["zod", "valibot"]);
  const names = results.map(r => r.source.name);

  const exports = {};
  for (const name of names) {
    const matches = await opensrc.astGrep(name, "export const $NAME = $_", {
      lang: "ts",
      limit: 30
    });
    exports[name] = matches.map(m => m.metavars.NAME);
  }
  return exports;
}
```

### grep vs astGrep

| Use Case | Tool |
|----------|------|
| Text/regex pattern | `grep` |
| Function declarations | `astGrep`: `function $NAME($$$) { $$$ }` |
| Arrow functions | `astGrep`: `const $N = ($$$) => $_` |
| Class definitions | `astGrep`: `class $NAME extends $PARENT` |
| Import statements | `astGrep`: `import { $$$IMPORTS } from "$MOD"` |
| JSX components | `astGrep`: `<$COMP $$$PROPS />` |

## Repository Exploration

### Find Entry Points

```javascript
async () => {
  const name = "github.com/vercel/ai";

  const allFiles = await opensrc.files(name, "**/*.{ts,js}");
  const entryPoints = allFiles.filter(f =>
    f.path.match(/^(src\/)?(index|main|mod)\.(ts|js)$/) ||
    f.path.includes("/index.ts")
  );

  // Read all entry points
  const contents = {};
  for (const ep of entryPoints.slice(0, 5)) {
    contents[ep.path] = await opensrc.read(name, ep.path);
  }

  return {
    totalFiles: allFiles.length,
    entryPoints: entryPoints.map(f => f.path),
    contents
  };
}
```

### Explore Package Structure

```javascript
async () => {
  const name = "zod";

  // Get all TypeScript files
  const tsFiles = await opensrc.files(name, "**/*.ts");

  // Group by directory
  const byDir = {};
  for (const f of tsFiles) {
    const dir = f.path.split("/").slice(0, -1).join("/") || ".";
    byDir[dir] = (byDir[dir] || 0) + 1;
  }

  // Read key files
  const pkg = await opensrc.read(name, "package.json");
  const readme = await opensrc.read(name, "README.md");

  return {
    structure: byDir,
    package: JSON.parse(pkg),
    readmePreview: readme.slice(0, 500)
  };
}
```

## Batch Operations

### Read Many with Error Handling

```javascript
async () => {
  const files = await opensrc.readMany("zod", [
    "src/index.ts",
    "src/types.ts",
    "src/ZodError.ts",
    "src/helpers/parseUtil.ts"
  ]);

  // files is Record<string, string> - errors start with "[Error:"
  const successful = Object.entries(files)
    .filter(([_, content]) => !content.startsWith("[Error:"))
    .map(([path, content]) => ({ path, lines: content.split("\n").length }));

  return successful;
}
```

### Parallel Grep Across Multiple Sources

```javascript
async () => {
  const targets = ["zod", "valibot"];
  const pattern = "export (type|interface)";

  const results = await Promise.all(
    targets.map(async (name) => {
      const matches = await opensrc.grep(pattern, {
        sources: [name],
        include: "*.ts",
        maxResults: 50
      });
      return { name, count: matches.length, matches };
    })
  );

  return results;
}
```

## Workflow Checklist

### Comprehensive Repository Analysis

```
Repository Analysis Progress:
- [ ] 1. Fetch repository
- [ ] 2. Read package.json + README
- [ ] 3. Identify entry points (src/index.*)
- [ ] 4. Read main entry file
- [ ] 5. Map exports and public API
- [ ] 6. Trace key functionality
- [ ] 7. Create architecture diagram
```

### Library Comparison

```
Comparison Progress:
- [ ] 1. Fetch all libraries
- [ ] 2. Grep for target pattern in each
- [ ] 3. Read matching implementations
- [ ] 4. Create comparison table
- [ ] 5. Synthesize findings
```
109
modules/_opencode/skill/librarian/references/tool-routing.md
Normal file
@@ -0,0 +1,109 @@
# Tool Routing

## Decision Flowchart

```mermaid
graph TD
    Q[User Query] --> T{Query Type?}
    T -->|Understand/Explain| U[UNDERSTAND]
    T -->|Find/Search| F[FIND]
    T -->|Explore/Architecture| E[EXPLORE]
    T -->|Compare| C[COMPARE]

    U --> U1{Known library?}
    U1 -->|Yes| U2[context7.resolve-library-id]
    U2 --> U3[context7.query-docs]
    U3 --> U4{Need source?}
    U4 -->|Yes| U5[opensrc.fetch → read]
    U1 -->|No| U6[grep_app → opensrc.fetch]

    F --> F1{Specific repo?}
    F1 -->|Yes| F2[opensrc.fetch → grep → read]
    F1 -->|No| F3[grep_app broad search]
    F3 --> F4[opensrc.fetch interesting repos]

    E --> E1[opensrc.fetch]
    E1 --> E2[opensrc.files]
    E2 --> E3[Read entry points]
    E3 --> E4[Create diagram]

    C --> C1["opensrc.fetch([X, Y])"]
    C1 --> C2[grep same pattern]
    C2 --> C3[Read comparable files]
    C3 --> C4[Synthesize comparison]
```

## Query Type Detection

| Keywords | Query Type | Start With |
|----------|------------|------------|
| "how does", "why does", "explain", "purpose of" | UNDERSTAND | context7 |
| "find", "where is", "implementations of", "examples of" | FIND | grep_app |
| "explore", "walk through", "architecture", "structure" | EXPLORE | opensrc |
| "compare", "vs", "difference between" | COMPARE | opensrc |

## UNDERSTAND Queries

```
Known library? → context7.resolve-library-id → context7.query-docs
                 └─ Need source? → opensrc.fetch → read

Unknown? → grep_app search → opensrc.fetch top result → read
```

**When to transition context7 → opensrc:**
- Need implementation details (not just API docs)
- Question about internals/private methods
- Tracing code flow through library

## FIND Queries

```
Specific repo? → opensrc.fetch → opensrc.grep → read matches

Broad search? → grep_app → analyze → opensrc.fetch interesting repos
```

**grep_app query tips:**
- Use literal code patterns: `useState(` not "react hooks"
- Filter by language: `language: ["TypeScript"]`
- Narrow by repo: `repo: "vercel/"` for org

## EXPLORE Queries

```
1. opensrc.fetch(target)
2. opensrc.files → understand structure
3. Identify entry points: README, package.json, src/index.*
4. Read entry → internals
5. Create architecture diagram
```

## COMPARE Queries

```
1. opensrc.fetch([X, Y])
2. Extract source.name from each result
3. opensrc.grep same pattern in both
4. Read comparable files
5. Synthesize → comparison table
```

## Tool Capabilities

| Tool | Best For | Not For |
|------|----------|---------|
| **grep_app** | Broad search, unknown scope, finding repos | Semantic queries |
| **context7** | Library APIs, best practices, common patterns | Library internals |
| **opensrc** | Deep exploration, reading internals, tracing flow | Initial discovery |

## Anti-patterns

| Don't | Do |
|-------|-----|
| grep_app for known library docs | context7 first |
| opensrc.fetch before knowing target | grep_app to discover |
| Multiple small reads | opensrc.readMany batch |
| Describe without linking | Link every file ref |
| Text for complex relationships | Mermaid diagram |
| Use tool names in responses | "I'll search..." not "I'll use opensrc" |
122
modules/_opencode/skill/session-export/SKILL.md
Normal file
@@ -0,0 +1,122 @@
|
||||
---
|
||||
name: session-export
|
||||
description: Update GitHub PR descriptions with AI session export summaries. Use when user asks to add session summary to PR/MR, document AI assistance in PR/MR, or export conversation summary to PR/MR description.
|
||||
---
|
||||
|
||||
# Session Export
|
||||
|
||||
Update PR/MR descriptions with a structured summary of the AI-assisted conversation.
|
||||
|
||||
## Output Format
|
||||
|
||||
```markdown
|
||||
> [!NOTE]
|
||||
> This PR was written with AI assistance.
|
||||
|
||||
<details><summary>AI Session Export</summary>
|
||||
<p>
|
||||
|
||||
```json
|
||||
{
|
||||
"info": {
|
||||
"title": "<brief task description>",
|
||||
"agent": "opencode",
|
||||
"models": ["<model(s) used>"]
|
||||
},
|
||||
"summary": [
|
||||
"<action 1>",
|
||||
"<action 2>",
|
||||
...
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
</p>
|
||||
</details>
|
||||
```
|
||||
|
||||
## Workflow
|
||||
|
||||
### 1. Export Session Data
|
||||
|
||||
Get session data using OpenCode CLI:
|
||||
|
||||
```bash
|
||||
opencode export [sessionID]
|
||||
```
|
||||
|
||||
Returns JSON with session info including models used. Use current session if no sessionID provided.
|
||||
|
||||
### 2. Generate Summary JSON
|
||||
|
||||
From exported data and conversation context, create summary:
|
||||
|
||||
- **title**: 2-5 word task description (lowercase)
|
||||
- **agent**: always "opencode"
|
||||
- **models**: array from export data
|
||||
- **summary**: array of terse action statements
|
||||
- Use past tense ("added", "fixed", "created")
|
||||
- Start with "user requested..." or "user asked..."
|
||||
- Chronological order
|
||||
- Attempt to keep the summary to a max of 25 turns ("user requested", "agent did")
|
||||
- **NEVER include sensitive data**: API keys, credentials, secrets, tokens, passwords, env vars
|
||||
|
||||
### 3. Update PR/MR Description

**GitHub:**

```bash
gh pr edit <PR_NUMBER> --body "$(cat <<'EOF'
<existing description>

> [!NOTE]
> This PR was written with AI assistance.

<details><summary>AI Session Export</summary>
...
</details>
EOF
)"
```

### 4. Preserve Existing Content

Always fetch and preserve the existing PR/MR description:

```bash
# GitHub
gh pr view <PR_NUMBER> --json body -q '.body'
```

Append the session export after the existing content, separated by a blank line.
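The fetch-append-update flow can be sketched as follows. The `gh` calls are commented out so the snippet stands alone; `PR_NUMBER` and the sample bodies are placeholder assumptions, not part of the skill:

```shell
#!/bin/sh
# Join an existing description and the session export with one blank line.
append_with_separator() {
  existing="$1"
  addition="$2"
  if [ -n "$existing" ]; then
    printf '%s\n\n%s\n' "$existing" "$addition"
  else
    printf '%s\n' "$addition"
  fi
}

# In real use: existing="$(gh pr view "$PR_NUMBER" --json body -q '.body')"
existing="Fixes the login redirect loop."
addition='> [!NOTE]
> This PR was written with AI assistance.'

body="$(append_with_separator "$existing" "$addition")"
printf '%s\n' "$body"
# In real use: gh pr edit "$PR_NUMBER" --body "$body"
```

Handling the empty-description case separately avoids leading blank lines when a PR has no body yet.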
## Example Summary

For a session where the user asked to add dark mode:

```json
{
  "info": {
    "title": "dark mode implementation",
    "agent": "opencode",
    "models": ["claude sonnet 4"]
  },
  "summary": [
    "user requested dark mode toggle in settings",
    "agent explored existing theme system",
    "agent created ThemeContext for state management",
    "agent added DarkModeToggle component",
    "agent updated CSS variables for dark theme",
    "agent ran tests and fixed 2 failures",
    "agent committed changes"
  ]
}
```
## Security

**NEVER include in the summary:**

- API keys, tokens, secrets
- Passwords, credentials
- Environment variable values
- Private URLs with auth tokens
- Personally identifiable information
- Internal hostnames/IPs
0	modules/_opencode/tool/.gitkeep	Normal file
4	modules/_opencode/tui.json	Normal file
@@ -0,0 +1,4 @@
{
  "$schema": "https://opencode.ai/tui.json",
  "plugin": ["./plugin/review.ts"]
}
@@ -1,10 +0,0 @@
{inputs, ...}: final: prev: {
  pi-agent-stuff =
    prev.buildNpmPackage {
      pname = "pi-agent-stuff";
      version = "1.5.0";
      src = inputs.pi-agent-stuff;
      npmDepsHash = "sha256-pyXMNdlie8vAkhz2f3GUGT3CCYuwt+xkWnsijBajXIo=";
      dontNpmBuild = true;
    };
}
@@ -1,33 +0,0 @@
{inputs, ...}: final: prev: {
  pi-harness =
    prev.stdenvNoCC.mkDerivation {
      pname = "pi-harness";
      version = "0.0.0";
      src = inputs.pi-harness;

      pnpmDeps =
        prev.fetchPnpmDeps {
          pname = "pi-harness";
          version = "0.0.0";
          src = inputs.pi-harness;
          pnpm = prev.pnpm_10;
          fetcherVersion = 3;
          hash = "sha256-lNcZRCmmwq9t05UjVWcuGq+ZzRHuHNmqKQIVPh6DoxQ=";
        };

      nativeBuildInputs = [
        prev.pnpmConfigHook
        prev.pnpm_10
        prev.nodejs
      ];

      dontBuild = true;

      installPhase = ''
        runHook preInstall
        mkdir -p $out/lib/node_modules/@aliou/pi-harness
        cp -r . $out/lib/node_modules/@aliou/pi-harness
        runHook postInstall
      '';
    };
}
@@ -1,10 +0,0 @@
{inputs, ...}: final: prev: {
  pi-mcp-adapter =
    prev.buildNpmPackage {
      pname = "pi-mcp-adapter";
      version = "2.2.0";
      src = inputs.pi-mcp-adapter;
      npmDepsHash = "sha256-myJ9h/zC/KDddt8NOVvJjjqbnkdEN4ZR+okCR5nu7hM=";
      dontNpmBuild = true;
    };
}
5281	modules/_overlays/qmd-package-lock.json	generated
File diff suppressed because it is too large
@@ -1,44 +0,0 @@
{inputs, ...}: final: prev: {
  qmd =
    prev.buildNpmPackage rec {
      pname = "qmd";
      version = "2.0.1";
      src = inputs.qmd;
      npmDepsFetcherVersion = 2;
      npmDepsHash = "sha256-sAyCG43p3JELQ2lazwRrsdmW9Q4cOy45X6ZagBmitGU=";

      nativeBuildInputs = [
        prev.makeWrapper
        prev.python3
        prev.pkg-config
        prev.cmake
      ];
      buildInputs = [prev.sqlite];
      dontConfigure = true;

      postPatch = ''
        cp ${./qmd-package-lock.json} package-lock.json
      '';

      npmBuildScript = "build";
      dontNpmPrune = true;

      installPhase = ''
        runHook preInstall
        mkdir -p $out/lib/node_modules/qmd $out/bin
        cp -r bin dist node_modules package.json package-lock.json LICENSE CHANGELOG.md $out/lib/node_modules/qmd/
        makeWrapper ${prev.nodejs}/bin/node $out/bin/qmd \
          --add-flags $out/lib/node_modules/qmd/dist/cli/qmd.js \
          --set LD_LIBRARY_PATH ${prev.lib.makeLibraryPath [prev.sqlite]}
        runHook postInstall
      '';

      meta = with prev.lib; {
        description = "On-device search engine for markdown notes, meeting transcripts, and knowledge bases";
        homepage = "https://github.com/tobi/qmd";
        license = licenses.mit;
        mainProgram = "qmd";
        platforms = platforms.unix;
      };
    };
}
@@ -10,7 +10,7 @@ in {
    ...
  }: {
    home.packages = [
      inputs'.llm-agents.packages.pi
      inputs'.llm-agents.packages.claude-code
      pkgs.cog-cli
    ];

@@ -21,67 +21,85 @@
      }
    '';

    home.file = {
      "AGENTS.md".source = ./_ai-tools/AGENTS.md;
      ".pi/agent/extensions/pi-elixir" = {
        source = inputs.pi-elixir;
        recursive = true;
      };
      ".pi/agent/extensions/pi-mcp-adapter" = {
        source = "${pkgs.pi-mcp-adapter}/lib/node_modules/pi-mcp-adapter";
        recursive = true;
      };
      ".pi/agent/extensions/no-git.ts".source = ./_ai-tools/extensions/no-git.ts;
      ".pi/agent/extensions/no-scripting.ts".source = ./_ai-tools/extensions/no-scripting.ts;
      ".pi/agent/extensions/note-ingest.ts".source = ./_ai-tools/extensions/note-ingest.ts;
      ".pi/agent/extensions/review.ts".source = ./_ai-tools/extensions/review.ts;
      ".pi/agent/extensions/session-name.ts".source = ./_ai-tools/extensions/session-name.ts;
      ".pi/agent/notability" = {
        source = ./_notability;
        recursive = true;
      };
      ".pi/agent/skills/elixir-dev" = {
        source = "${inputs.pi-elixir}/skills/elixir-dev";
        recursive = true;
      };
      ".pi/agent/skills/jujutsu/SKILL.md".source = ./_ai-tools/skills/jujutsu/SKILL.md;
      ".pi/agent/skills/notability-transcribe/SKILL.md".source = ./_ai-tools/skills/notability-transcribe/SKILL.md;
      ".pi/agent/skills/notability-normalize/SKILL.md".source = ./_ai-tools/skills/notability-normalize/SKILL.md;
      ".pi/agent/themes" = {
        source = "${inputs.pi-rose-pine}/themes";
        recursive = true;
      };
      ".pi/agent/settings.json".text =
        builtins.toJSON {
          theme = "rose-pine-dawn";
          quietStartup = true;
          hideThinkingBlock = true;
          defaultProvider = "openai-codex";
          defaultModel = "gpt-5.4";
          defaultThinkingLevel = "high";
          packages = [
            {
              source = "${pkgs.pi-agent-stuff}/lib/node_modules/mitsupi";
              extensions = [
                "pi-extensions/answer.ts"
                "pi-extensions/context.ts"
                "pi-extensions/multi-edit.ts"
                "pi-extensions/todos.ts"
              ];
              skills = [];
              prompts = [];
              themes = [];
            }
            {
              source = "${pkgs.pi-harness}/lib/node_modules/@aliou/pi-harness";
              extensions = ["extensions/breadcrumbs/index.ts"];
              skills = [];
              prompts = [];
              themes = [];
            }
          ];
    programs.opencode = {
      enable = true;
      package = inputs'.llm-agents.packages.opencode;
      settings = {
        model = "anthropic/claude-opus-4-6";
        small_model = "anthropic/claude-haiku-4-5";
        theme = "rosepine";
        plugin = ["opencode-claude-auth"];
        permission = {
          read = {
            "*" = "allow";
            "*.env" = "deny";
            "*.env.*" = "deny";
            "*.envrc" = "deny";
            "secrets/*" = "deny";
          };
        };
      ".pi/agent/mcp.json".source = ./_ai-tools/mcp.json;
        agent = {
          plan = {
            model = "anthropic/claude-opus-4-6";
          };
          explore = {
            model = "anthropic/claude-haiku-4-5";
          };
        };
        instructions = [
          "CLAUDE.md"
          "AGENT.md"
          # "AGENTS.md"
          "AGENTS.local.md"
        ];
        formatter = {
          mix = {
            disabled = true;
          };
        };
        mcp = {
          opensrc = {
            enabled = true;
            type = "local";
            command = ["opensrc-mcp"];
          };
          context7 = {
            enabled = true;
            type = "remote";
            url = "https://mcp.context7.com/mcp";
          };
          grep_app = {
            enabled = true;
            type = "remote";
            url = "https://mcp.grep.app";
          };
        };
      };
    };

    xdg.configFile = {
      "opencode/agent" = {
        source = ./_opencode/agent;
        recursive = true;
      };
      "opencode/command" = {
        source = ./_opencode/command;
        recursive = true;
      };
      "opencode/skill" = {
        source = ./_opencode/skill;
        recursive = true;
      };
      "opencode/tool" = {
        source = ./_opencode/tool;
        recursive = true;
      };
      "opencode/plugin" = {
        source = ./_opencode/plugin;
        recursive = true;
      };
      "opencode/AGENTS.md".source = ./_opencode/AGENTS.md;
      "opencode/tui.json".source = ./_opencode/tui.json;
    };
  };
}
@@ -153,6 +153,7 @@ in {
  "1password"
  "alcove"
  "aqua-voice"
  "chatgpt"
  "ghostty@tip"
  "raycast"
  "spotify"
@@ -35,7 +35,6 @@
  adguardhome = ./adguardhome.nix;
  cache = ./cache.nix;
  gitea = ./gitea.nix;
  notability = ./notability.nix;
  opencode = ./opencode.nix;
  paperless = ./paperless.nix;
@@ -54,27 +54,6 @@
      inputs.nixpkgs.follows = "nixpkgs";
    };
    llm-agents.url = "github:numtide/llm-agents.nix";
    pi-agent-stuff = {
      url = "github:mitsuhiko/agent-stuff";
      flake = false;
    };
    pi-elixir = {
      url = "github:dannote/pi-elixir";
      flake = false;
    };
    pi-rose-pine = {
      url = "github:zenobi-us/pi-rose-pine";
      flake = false;
    };
    pi-harness = {
      url = "github:aliou/pi-harness";
      flake = false;
    };
    pi-mcp-adapter = {
      url = "github:nicobailon/pi-mcp-adapter";
      flake = false;
    };
    qmd.url = "github:tobi/qmd";
    # Overlay inputs
    himalaya.url = "github:pimalaya/himalaya";
    jj-ryu = {
@@ -20,6 +20,17 @@ in
      den.aspects.email
    ];
    homeManager = {
      config,
      inputs',
      ...
    }: let
      opencode = inputs'.llm-agents.packages.opencode;
    in {
      programs.opencode.settings.permission.external_directory = {
        "/tmp/himalaya-triage/*" = "allow";
        "/var/lib/paperless/consume/inbox-triage/*" = "allow";
      };

      programs.nushell.extraConfig = ''
        if $nu.is-interactive and ('SSH_CONNECTION' in ($env | columns)) and ('ZELLIJ' not-in ($env | columns)) {
          try {
@@ -30,6 +41,30 @@ in
          }
        }
      '';

      systemd.user.services.opencode-inbox-triage = {
        Unit = {
          Description = "OpenCode inbox triage";
        };
        Service = {
          Type = "oneshot";
          ExecStart = "${opencode}/bin/opencode run --command inbox-triage --model opencode-go/glm-5";
          Environment = "PATH=${config.home.profileDirectory}/bin:/run/current-system/sw/bin";
        };
      };

      systemd.user.timers.opencode-inbox-triage = {
        Unit = {
          Description = "Run OpenCode inbox triage every 12 hours";
        };
        Timer = {
          OnCalendar = "*-*-* 0/12:00:00";
          Persistent = true;
        };
        Install = {
          WantedBy = ["timers.target"];
        };
      };
    };
  })
  (hostLib.mkPerHostAspect {
@@ -39,7 +74,6 @@ in
    den.aspects.opencode-api-key
    den.aspects.adguardhome
    den.aspects.cache
    den.aspects.notability
    den.aspects.paperless
  ];
  nixos = {...}: {
@@ -49,12 +49,27 @@
  den.aspects.tailscale.nixos = {
    services.tailscale = {
      enable = true;
      extraSetFlags = ["--ssh"];
      openFirewall = true;
      permitCertUid = "caddy";
      useRoutingFeatures = "server";
    };
  };

  den.aspects.mosh.nixos = {
    programs.mosh = {
      enable = true;
      openFirewall = false;
    };

    networking.firewall.interfaces.tailscale0.allowedUDPPortRanges = [
      {
        from = 60000;
        to = 61000;
      }
    ];
  };

  den.aspects.tailscale.darwin = {
    services.tailscale.enable = true;
  };
@@ -1,136 +0,0 @@
{lib, ...}: let
  caddyLib = import ./_lib/caddy.nix;
  local = import ./_lib/local.nix;
  secretLib = import ./_lib/secrets.nix {inherit lib;};
  inherit (local) user;
  notabilityScripts = ./_notability;
  tahani = local.hosts.tahani;
in {
  den.aspects.notability.nixos = {
    config,
    inputs',
    pkgs,
    ...
  }: let
    homeDir = tahani.home;
    dataRoot = "${homeDir}/.local/share/notability-ingest";
    stateRoot = "${homeDir}/.local/state/notability-ingest";
    notesRoot = "${homeDir}/Notes";
    webdavRoot = "${dataRoot}/webdav-root";
    userPackages = with pkgs; [
      qmd
      poppler-utils
      rclone
      sqlite
      zk
    ];
    commonPath = with pkgs;
      [
        inputs'.llm-agents.packages.pi
        coreutils
        inotify-tools
        nushell
        util-linux
      ]
      ++ userPackages;
    commonEnvironment = {
      HOME = homeDir;
      NOTABILITY_ARCHIVE_ROOT = "${dataRoot}/archive";
      NOTABILITY_DATA_ROOT = dataRoot;
      NOTABILITY_DB_PATH = "${stateRoot}/db.sqlite";
      NOTABILITY_NOTES_DIR = notesRoot;
      NOTABILITY_RENDER_ROOT = "${dataRoot}/rendered-pages";
      NOTABILITY_SESSIONS_ROOT = "${stateRoot}/sessions";
      NOTABILITY_STATE_ROOT = stateRoot;
      NOTABILITY_TRANSCRIPT_ROOT = "${stateRoot}/transcripts";
      NOTABILITY_WEBDAV_ROOT = webdavRoot;
      XDG_CONFIG_HOME = "${homeDir}/.config";
    };
    mkTmpDirRule = path: "d ${path} 0755 ${user.name} users -";
    mkNotabilityService = {
      description,
      script,
      after ? [],
      requires ? [],
      environment ? {},
    }: {
      inherit after description requires;
      wantedBy = ["multi-user.target"];
      path = commonPath;
      environment = commonEnvironment // environment;
      serviceConfig = {
        ExecStart = "${pkgs.nushell}/bin/nu ${notabilityScripts}/${script}";
        Group = "users";
        Restart = "always";
        RestartSec = 5;
        User = user.name;
        WorkingDirectory = homeDir;
      };
    };
  in {
    sops.secrets.tahani-notability-webdav-password =
      secretLib.mkUserBinarySecret {
        name = "tahani-notability-webdav-password";
        sopsFile = ../secrets/tahani-notability-webdav-password;
      };

    home-manager.users.${user.name} = {
      home.packages = userPackages;
      home.file.".config/qmd/index.yml".text = ''
        collections:
          notes:
            path: ${notesRoot}
            pattern: "**/*.md"
      '';
    };

    systemd.tmpfiles.rules =
      builtins.map mkTmpDirRule [
        notesRoot
        dataRoot
        webdavRoot
        "${dataRoot}/archive"
        "${dataRoot}/rendered-pages"
        stateRoot
        "${stateRoot}/jobs"
        "${stateRoot}/jobs/queued"
        "${stateRoot}/jobs/running"
        "${stateRoot}/jobs/failed"
        "${stateRoot}/jobs/done"
        "${stateRoot}/jobs/results"
        "${stateRoot}/sessions"
        "${stateRoot}/transcripts"
      ];

    services.caddy.virtualHosts =
      caddyLib.mkTailscaleVHost {
        name = "tahani";
        configText = ''
          handle /notability* {
            reverse_proxy 127.0.0.1:9980
          }
        '';
      };

    systemd.services.notability-webdav =
      mkNotabilityService {
        description = "Notability WebDAV landing zone";
        script = "webdav.nu";
        after = ["network.target"];
        environment = {
          NOTABILITY_WEBDAV_ADDR = "127.0.0.1:9980";
          NOTABILITY_WEBDAV_BASEURL = "/notability";
          NOTABILITY_WEBDAV_PASSWORD_FILE = config.sops.secrets.tahani-notability-webdav-password.path;
          NOTABILITY_WEBDAV_USER = "notability";
        };
      };

    systemd.services.notability-watch =
      mkNotabilityService {
        description = "Watch and ingest Notability WebDAV uploads";
        script = "watch.nu";
        after = ["notability-webdav.service"];
        requires = ["notability-webdav.service"];
      };
  };
}
@@ -20,14 +20,6 @@
  (import ./_overlays/jj-ryu.nix {inherit inputs;})
  # cog-cli
  (import ./_overlays/cog-cli.nix {inherit inputs;})
  # pi-agent-stuff (mitsuhiko)
  (import ./_overlays/pi-agent-stuff.nix {inherit inputs;})
  # pi-harness (aliou)
  (import ./_overlays/pi-harness.nix {inherit inputs;})
  # pi-mcp-adapter
  (import ./_overlays/pi-mcp-adapter.nix {inherit inputs;})
  # qmd
  (import ./_overlays/qmd.nix {inherit inputs;})
  # jj-starship (passes through upstream overlay)
  (import ./_overlays/jj-starship.nix {inherit inputs;})
  # zjstatus
@@ -2,6 +2,7 @@
den.aspects.host-nixos-base.includes = [
  den.aspects.nixos-system
  den.aspects.core
  den.aspects.mosh
  den.aspects.openssh
  den.aspects.tailscale
];
@@ -24,6 +24,7 @@ in {
  jq
  killall
  lsof
  mosh
  ouch
  ov
  sd
@@ -1,30 +0,0 @@
{
  "data": "ENC[AES256_GCM,data:qZCh11bq1W7FwXMrDX5KMOQFsgsKgbhimZ4TDNvv1BDU,iv:PJJJB5uyhuTUSA4doQ6h6qMbmPgerPv+FfsJ0f20kYY=,tag:lXpit9T7K2rGUu1zsJH6dg==,type:str]",
  "sops": {
    "age": [
      {
        "recipient": "age1xate984yhl9qk9d4q99pyxmzz48sq56nfhu8weyzkgum4ed5tc5shjmrs7",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBvNXBjN3RZcGd2R2MyQWhR\nenBpa3VHMFhkWDEveUtmZWtPSk01QkhUVFJFCnRSc3ZGdFFjVDJnbHpwKzZ1TUdI\nZUdWQzM2bmZ1RUl4UVpCbDJoL0RkQncKLS0tIHRxUzFiaS8wekdCQ0Z0dTMxSnZ0\nS0UycFNMSUJHcVlkR2JZNlZsbldoaUkKe4EaYIquhABMEywizJXzEVEM1JbEwFqU\nAmQ6R+p4mNgaR5HCrnINQId3qqVfsP2UDqPDepERZIA0V2E5h9ckfQ==\n-----END AGE ENCRYPTED FILE-----\n"
      },
      {
        "recipient": "age1njjegjjdqzfnrr54f536yl4lduqgna3wuv7ef6vtl9jw5cju0grsgy62tm",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBBZ1hYenpVTm1lTFdjTEJj\nTUN5MzNtbzdWNzQ2VE9tRlJJRVRYTUtLOXpnCnlLWTZPNGE5NDlwRHhWSnlTNUhv\nc3VZVklEZDB5dXlFc01wcEQxckl0NjgKLS0tIEE5T2JmNlJaYkZpWkhYdDhPSTlW\nei96YmhUWUZ2enVnRjhKOVlNZmNHa3cKxaHBtCwLDLNcscptlDk6ta/i491lLPt6\nOh/RtbkxtJ02cahIsKgajspOElx8u2Nb3/lmK51JbUIexH9TDQ+3tg==\n-----END AGE ENCRYPTED FILE-----\n"
      },
      {
        "recipient": "age187jl7e4k9n4guygkmpuqzeh0wenefwrfkpvuyhvwjrjwxqpzassqq3x67j",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBJbFFpQzB2OU9jYUZlL2Nl\nOEZ0WGcyR1BpSmZGU0Vxa0N6WGpCbXBXZGxJCnlLK0JJWElndC9KRGN5d1NNd0tj\nUkExQ0tTSGRKQjJHUGtaWUtKS285MU0KLS0tIGI5cWtVcW43b2Q5VXRidllzamtB\nV1IxYnN1KzdaaXdvWG96a2VkZ0ZvWGsKxdbXwbgFIc3/3VjwUJ1A+cX0oaT+oojz\nrI9Dmk782U/dQrcMv1lRBIWWtAdAqS6GiQ1aUKk5aHpuHOZeHHFjMw==\n-----END AGE ENCRYPTED FILE-----\n"
      },
      {
        "recipient": "age1ez6j3r5wdp0tjy7n5qzv5vfakdc2nh2zeu388zu7a80l0thv052syxq5e2",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA0aTgwQ3ZEVG41eW9MQ1RX\nSElRdkdvL21kZ2ZLeGNPbGJiNll5WjdsM2gwCmJQVmJjWEJBaVhEKzJqYWlib2JX\ndWRzSE9QTVQ1c004dldzR2NtR3pvQlUKLS0tIEsvZDNnNWJJaWZyOCtYUEs1eklh\nNXl2dUM0amVtSmdjTy83ZzBSeGp3Q0UKQ/cUYPACFNcxulzW964ftsHjoCBRGB66\nc1e/ObQNM+b+be5UzJi3/gago9CHRzZ3Rp6zE9i5oQBzgLGWlJuPNQ==\n-----END AGE ENCRYPTED FILE-----\n"
      },
      {
        "recipient": "age1tlymdmaukhwupzrhszspp26lgd8s64rw4vu9lwc7gsgrjm78095s9fe9l3",
        "enc": "-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBLNUk5aHBqdEJoYWdaeVlx\nOUkrSXMvRFRmQ29QRE5hWTlHdlcwOUxFRXdRCnE0L1BQdHZDRWRCQUZ2dHQ2Witi\nQ1g5OFFWM2tPT0xEZUZvdXJNdm9aWTgKLS0tIENvM1h1V042L3JHV1pWeDAxdG84\nUTBTZjdHa1lCNGJSRG1iZmtpc1laZTQK/twptPseDi9DM/7NX2F0JO1BEkqklbh1\nxQ1Qwpy4K/P2pFTOBKqDb62DaIALxiGA1Q55dw+fPRSsnL8VcxG8JA==\n-----END AGE ENCRYPTED FILE-----\n"
      }
    ],
    "lastmodified": "2026-03-25T11:23:08Z",
    "mac": "ENC[AES256_GCM,data:UM0QWfQueExEHRjqNAEIgwpVBjgpd0a6DXxDeRci08qMzTypTlWIofUGMyM1k+J+mUKr3vWMe3q48OwVtUaXnbWimH+8uFEwb5x0e+ayTg+w/C23d+JJmQIX8g5JXtknUAZFNrh3wdZOadYYRr/vDzCKud4lMrmFBKFXsH1DPEI=,iv:kTx8omo8Gt4mTLAs6MoLxj4GizWpxlSXMCTWNlRR5SY=,tag:PB7nMCVxCLRQdhC/eelK/w==,type:str]",
    "version": "3.12.2"
  }
}