Compare commits


107 Commits

Author SHA1 Message Date
09cbb308d2 up
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-09 17:31:21 +00:00
c120b3d4d2 opencode: add Cog memory skill 2026-02-09 17:28:35 +00:00
3381945cea opencode: add Cog MCP server 2026-02-09 17:18:36 +00:00
d12aabdccc opencode: disable oh-my-opencode commit footer and co-author 2026-02-09 17:16:58 +00:00
d38a348a06 add openusage
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-09 17:15:10 +00:00
42873f4d2d opencode: add oh-my-opencode config, remove custom oracle agent 2026-02-09 15:00:24 +00:00
c1bb006292 Remove mindy host and orphaned derek secret 2026-02-09 09:26:34 +00:00
9bdaaeb295 flake
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-09 09:17:45 +00:00
6596ec2d9b opencode: add oracle research tools, spec-planner question tool, deny mcp-auth read 2026-02-08 17:46:29 +00:00
0103aa8c16 Remove watchman package and file watcher config 2026-02-08 08:45:17 +00:00
37b13cfd6a flake
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-08 08:41:54 +00:00
29d27dccfb nushell: Remove deprecated use_ls_colors config option 2026-02-08 08:39:13 +00:00
cca27aa971 Replace fish with nushell 2026-02-08 08:37:09 +00:00
75bbb322d3 up
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-06 16:11:06 +00:00
1a79b5fa9f Update Claude model to opus 4-6 2026-02-05 18:37:52 +00:00
9288aef5c7 flake
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-05 18:26:06 +00:00
29a2dfc606 solidjs
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-05 17:08:21 +00:00
2999325de9 up
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-05 14:39:06 +00:00
06584ffedc rm appsignal 2026-02-05 08:46:42 +00:00
90f91bd017 rm custom profile
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-05 08:40:06 +00:00
2b880be833 flake
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-05 08:23:11 +00:00
64a5a29809 Add overseer host package for UI support 2026-02-04 20:30:26 +00:00
c1bae690b3 Use local overseer binary for MCP server 2026-02-04 20:20:01 +00:00
f8e912e201 Add overseer CLI for task management 2026-02-04 20:17:49 +00:00
ff8650bedf oc
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-04 20:04:32 +00:00
13586f5c44 up
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-04 19:42:17 +00:00
ead1e8d57c Remove Zed editor configuration 2026-02-04 10:40:59 +00:00
87d3044959 Update zed.nix 2026-02-04 10:37:58 +00:00
cc2dc49511 up
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-04 10:34:12 +00:00
79e72505f8 Add Zed remote development setup with vim keybindings 2026-02-04 10:17:34 +00:00
21c8f95f86 flake
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-04 10:13:45 +00:00
6a402795b9 -jj
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-03 19:51:59 +00:00
f342d99650 starship: fix indentation 2026-02-03 16:57:59 +00:00
9d476ee209 Add claude-code profile from llm-agents input 2026-02-03 16:53:06 +00:00
88ff7d0077 starship: use readable labels for git status symbols 2026-02-03 10:38:49 +00:00
d91ea80bef update flake
Signed-off-by: Christoph Schmatzler <christoph@schmatzler.com>
2026-02-03 10:31:10 +00:00
682889f878 Replace jj with git, use lazygit in neovim 2026-02-03 10:26:22 +00:00
f07e0be31d flake 2026-02-03 07:09:59 +00:00
6ec2bbe02d aerospace: assign apps to workspaces and monitors 2026-02-02 20:47:33 +00:00
da68435673 Add nono profile for OpenCode 2026-02-02 20:47:33 +00:00
b6fdd922ba Add nono sandbox for AI agents 2026-02-02 20:47:33 +00:00
5648ea6c54 flake 2026-02-02 12:28:20 +00:00
d672ccd433 flake 2026-02-02 08:58:27 +00:00
a0614609a2 flake deps 2026-02-02 08:58:27 +00:00
0ca29a894a flake 2026-01-31 08:07:21 +00:00
5537385dad darwin: hide desktop widgets 2026-01-30 09:22:47 +00:00
dbe9193d21 flake 2026-01-30 09:22:47 +00:00
1a3559f72b flake 2026-01-29 19:38:31 +00:00
871bc28a19 lol 2026-01-29 19:38:31 +00:00
2e719ca06d flake 2026-01-29 19:38:31 +00:00
b9e1c9546f flake 2026-01-28 11:40:18 +00:00
4672e75bcf oc 2026-01-28 11:40:18 +00:00
d33e943dd4 adguard 2026-01-28 11:40:18 +00:00
c4eaabaddc fix networking 2026-01-28 11:40:18 +00:00
3dd7840b06 up 2026-01-28 11:40:18 +00:00
37cd721066 fix 2026-01-27 18:36:18 +00:00
797dd1044b fix 2026-01-27 18:34:15 +00:00
95aef784e1 flake + oc 2026-01-27 18:12:57 +00:00
9c5ee08284 oc experimental 2026-01-27 09:44:14 +01:00
8b047db9cc aerospace 2026-01-27 09:44:14 +01:00
2351d799a7 opencode plan 2026-01-26 19:36:53 +00:00
8be92eda71 moar grammars 2026-01-26 19:36:53 +00:00
9f68bcffb5 flake 2026-01-26 16:43:03 +00:00
c3e06350dd disable mix fmt 2026-01-26 16:43:03 +00:00
b509eff5b3 flake 2026-01-26 16:43:03 +00:00
dacedb417f up 2026-01-25 19:57:06 +00:00
8874fad520 rm pearcleaner 2026-01-25 09:46:40 +00:00
493eb495a2 darwin: add alcove package 2026-01-25 09:46:40 +00:00
e3a27e7779 flake 2026-01-25 09:36:16 +00:00
566557f8b0 email skill 2026-01-24 20:22:18 +00:00
4e602e1783 fmt 2026-01-24 13:50:19 +00:00
081b8ae6ff derek 2026-01-24 13:49:19 +00:00
a0d959bdce flake 2026-01-24 13:48:52 +00:00
aa322301fb harden 2026-01-24 13:48:52 +00:00
139b1defe7 flake 2026-01-24 13:48:52 +00:00
d499727050 more skills 2026-01-23 21:45:57 +00:00
647943abbc opencode stuff 2026-01-23 21:45:57 +00:00
e0b317cdf3 glm-4.7 2026-01-23 09:32:10 +00:00
f1a4fa002b declutter michael 2026-01-23 09:29:52 +00:00
385f92458f fix(neovim): update jj-diffconflicts hash 2026-01-23 09:29:52 +00:00
ed9f98493f flake 2026-01-23 09:29:52 +00:00
eaa68c0355 gh auth 2026-01-22 16:22:09 +00:00
1d24b113fd export overlays 2026-01-22 10:35:51 +00:00
3c0a2f0a11 flake 2026-01-22 09:16:56 +00:00
ce5b8a19ee enable AS 2026-01-22 09:16:56 +00:00
837d1c6a5d profiles/aerospace: add Ghostty tiling workaround for macOS 2026-01-21 15:09:38 +00:00
0822bc9eac agents.md 2026-01-21 14:33:59 +00:00
70e7817f33 mise: rename settings to globalConfig.settings and remove zsh integration 2026-01-21 14:33:59 +00:00
439e8bd489 aerospace 2026-01-21 14:06:56 +00:00
7f1cfa3c98 hide spotlight 2026-01-21 14:06:56 +00:00
8e46dfb3ac flake 2026-01-21 14:06:56 +00:00
83d99ba809 flake 2026-01-20 18:04:00 +00:00
5cbb6906a1 fix author 2026-01-20 18:04:00 +00:00
d84646800c refactor 2026-01-20 17:41:15 +00:00
73f8184b05 jj-starship 2026-01-20 17:41:15 +00:00
183f0b9fd3 as mcp 2026-01-20 17:41:15 +00:00
94127fdae4 flake 2026-01-20 08:27:26 +00:00
f90fa7dbf8 change explore model 2026-01-20 08:27:26 +00:00
6002e48a44 rm permissions 2026-01-20 08:27:26 +00:00
f534782978 update models 2026-01-20 08:27:26 +00:00
2e030ded6c flake 2026-01-20 08:27:26 +00:00
975fa533ca flake 2026-01-20 08:27:26 +00:00
0bbf852776 up 2026-01-17 13:14:45 +00:00
91512af825 flake 2026-01-16 10:42:04 +00:00
380e81014a rm derek 2026-01-16 10:42:04 +00:00
bccee6dd51 up 2026-01-16 10:42:04 +00:00
93a3c88852 up 2026-01-14 21:49:28 +00:00
158 changed files with 15098 additions and 590 deletions


@@ -1,17 +1,13 @@
 keys:
   - &host_tahani age1njjegjjdqzfnrr54f536yl4lduqgna3wuv7ef6vtl9jw5cju0grsgy62tm
   - &host_michael age187jl7e4k9n4guygkmpuqzeh0wenefwrfkpvuyhvwjrjwxqpzassqq3x67j
-  - &host_mindy age1dqt3znmzcgghsjjzzax0pf0eyu95h0p7kaf5v988ysjv7fl7lumsatl048
   - &host_jason age1ez6j3r5wdp0tjy7n5qzv5vfakdc2nh2zeu388zu7a80l0thv052syxq5e2
   - &host_chidi age1tlymdmaukhwupzrhszspp26lgd8s64rw4vu9lwc7gsgrjm78095s9fe9l3
-  - &host_derek age1h537hhl5qgew5sswjp7xf7d4j4aq0gg9s5flnr8twm2smnqyudhqmum8uy
 creation_rules:
   - path_regex: secrets/[^/]+$
     key_groups:
       - age:
           - *host_tahani
           - *host_michael
-          - *host_mindy
           - *host_jason
           - *host_chidi
-          - *host_derek


@@ -1,2 +0,0 @@
.jj/
.git/

AGENTS.md

@@ -1,31 +1,132 @@
 # AGENTS.md
-## ⚠️ VERSION CONTROL: JUJUTSU (jj) ONLY
-**NEVER run git commands.** This repo uses Jujutsu (`jj`). Use `jj status`, `jj diff`, `jj commit`, etc.
 ## Build Commands
+### Local Development
 ```bash
 nix run .#build                # Build current host config
-nix run .#build -- <hostname>  # Build specific host (chidi, jason, michael, mindy, tahani)
+nix run .#build -- <hostname>  # Build specific host (chidi, jason, michael, tahani)
 nix run .#apply                # Build and apply locally (darwin-rebuild/nixos-rebuild switch)
 nix flake check                # Validate flake
+```
-# Remote NixOS deployment (colmena)
+### Remote Deployment (NixOS only)
+```bash
 colmena build                  # Build all NixOS hosts
-colmena apply --on <host>      # Deploy to specific NixOS host (michael, mindy, tahani)
+colmena apply --on <host>      # Deploy to specific NixOS host (michael, tahani)
 colmena apply                  # Deploy to all NixOS hosts
 ```
-## Code Style
-- **Formatter**: Alejandra with tabs (run `alejandra .` to format)
-- **Function args**: Destructure on separate lines `{inputs, pkgs, ...}:`
-- **Imports**: Use relative paths from file location (`../../profiles/foo.nix`)
-- **Attribute sets**: One attribute per line, trailing semicolons
-- **Lists**: `with pkgs; [...]` for packages, one item per line for long lists
-## Structure
-- `hosts/<name>/` - Per-machine configs (darwin: chidi, jason | nixos: michael, mindy, tahani)
-- `profiles/` - Reusable program/service configs (imported by hosts)
-- `modules/` - Custom NixOS/darwin modules
-- `lib/` - Shared constants and utilities
-- `secrets/` - SOPS-encrypted secrets (`.sops.yaml` for config)
+### Formatting
+```bash
+alejandra .                    # Format all Nix files
+```
+## Code Style
+### Formatter
+- **Tool**: Alejandra
+- **Config**: `alejandra.toml` specifies tabs for indentation
+- **Command**: Run `alejandra .` before committing
+### File Structure
+- **Hosts**: `hosts/<hostname>/` - Per-machine configurations
+  - Darwin: `chidi`, `jason`
+  - NixOS: `michael`, `tahani`
+- **Profiles**: `profiles/` - Reusable program/service configurations (imported by hosts)
+- **Modules**: `modules/` - Custom NixOS/darwin modules
+- **Lib**: `lib/` - Shared constants and utilities
+- **Secrets**: `secrets/` - SOPS-encrypted secrets (`.sops.yaml` for config)
+### Nix Language Conventions
+**Function Arguments**:
+```nix
+{inputs, pkgs, lib, ...}:
+```
+Destructure arguments on separate lines. Use `...` to capture remaining args.
+**Imports**:
+```nix
+../../profiles/foo.nix
+```
+Use relative paths from file location, not absolute paths.
+**Attribute Sets**:
+```nix
+options.my.gitea = {
+  enable = lib.mkEnableOption "Gitea git hosting service";
+  bucket = lib.mkOption {
+    type = lib.types.str;
+    description = "S3 bucket name";
+  };
+};
+```
+One attribute per line with trailing semicolons.
+**Lists with Packages**:
+```nix
+with pkgs;
+  [
+    age
+    alejandra
+    ast-grep
+  ]
+```
+Use `with pkgs;` for package lists, one item per line.
+**Modules**:
+```nix
+{
+  config,
+  lib,
+  pkgs,
+  ...
+}:
+with lib; let
+  cfg = config.my.feature;
+in {
+  options.my.feature = {
+    enable = mkEnableOption "Feature description";
+  };
+  config = mkIf cfg.enable {
+    # configuration
+  };
+}
+```
+- Destructure args on separate lines
+- Use `with lib;` for brevity with NixOS lib functions
+- Define `cfg` for config options
+- Use `mkIf`, `mkForce`, `mkDefault` appropriately
+**Conditional Platform-Specific Code**:
+```nix
+++ lib.optionals stdenv.isDarwin [
+  _1password-gui
+  dockutil
+]
+++ lib.optionals stdenv.isLinux [
+  lm_sensors
+]
+```
+### Naming Conventions
+- **Option names**: `my.<feature>.<option>` for custom modules
+- **Hostnames**: Lowercase, descriptive (e.g., `michael`, `tahani`)
+- **Profile files**: Descriptive, lowercase with hyphens (e.g., `homebrew.nix`)
+### Secrets Management
+- Use SOPS for secrets (see `.sops.yaml`)
+- Never commit unencrypted secrets
+- Secrets files in `hosts/<host>/secrets.nix` import SOPS-generated files
+### Imports Pattern
+Host configs import:
+1. System modules (`modulesPath + "/..."`)
+2. Host-specific files (`./disk-config.nix`, `./hardware-configuration.nix`)
+3. SOPS secrets (`./secrets.nix`)
+4. Custom modules (`../../modules/*.nix`)
+5. Base profiles (`../../profiles/*.nix`)
+6. Input modules (`inputs.<module>.xxxModules.module`)
+Home-manager users import profiles in a similar manner.
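In practice, the imports pattern documented above reads roughly like this. A minimal sketch assembled from the conventions in AGENTS.md and the host files elsewhere in this diff; the hostname and exact module list are illustrative:

```nix
# hosts/example/default.nix (hypothetical host following the documented pattern)
{
  inputs,
  user,
  hostname,
  modulesPath,
  ...
}: {
  imports = [
    (modulesPath + "/installer/scan/not-detected.nix") # 1. system modules
    ./disk-config.nix # 2. host-specific files
    ./hardware-configuration.nix
    ./secrets.nix # 3. SOPS secrets
    ../../modules/pgbackrest.nix # 4. custom modules
    ../../profiles/core.nix # 5. base profiles
    inputs.disko.nixosModules.disko # 6. input modules
    inputs.sops-nix.nixosModules.sops
  ];
  networking.hostName = hostname;
  # Home-manager users import profiles in the same relative-path style.
  home-manager.users.${user}.imports = [
    ../../profiles/home.nix
    ../../profiles/nushell.nix
  ];
}
```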

flake.lock (generated)

@@ -9,11 +9,11 @@
         "systems": "systems"
       },
       "locked": {
-        "lastModified": 1767386128,
-        "narHash": "sha256-BJDu7dIMauO2nYRSL4aI8wDNtEm2KOb7lDKP3hxdrpo=",
+        "lastModified": 1769353768,
+        "narHash": "sha256-zI+7cbMI4wMIR57jMjDSEsVb3grapTnURDxxJPYFIW0=",
         "owner": "numtide",
         "repo": "blueprint",
-        "rev": "0ed984d51a3031065925ab08812a5434f40b93d4",
+        "rev": "c7da5c70ad1c9b60b6f5d4f674fbe205d48d8f6c",
         "type": "github"
       },
       "original": {
@@ -25,16 +25,16 @@
     "brew-src": {
       "flake": false,
       "locked": {
-        "lastModified": 1763638478,
-        "narHash": "sha256-n/IMowE9S23ovmTkKX7KhxXC2Yq41EAVFR2FBIXPcT8=",
+        "lastModified": 1769363988,
+        "narHash": "sha256-BiGPeulrDVetXP+tjxhMcGLUROZAtZIhU5m4MqawCfM=",
         "owner": "Homebrew",
         "repo": "brew",
-        "rev": "fbfdbaba008189499958a7aeb1e2c36ab10c067d",
+        "rev": "d01011cac6d72032c75fd2cd9489909e95d9faf2",
         "type": "github"
       },
       "original": {
         "owner": "Homebrew",
-        "ref": "5.0.3",
+        "ref": "5.0.12",
         "repo": "brew",
         "type": "github"
       }
@@ -85,11 +85,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1768220509,
-        "narHash": "sha256-8wMrJP/Xk5Dkm0TxzaERLt3eGFEhHTWaJKUpK3AoL4o=",
+        "lastModified": 1770184146,
+        "narHash": "sha256-DsqnN6LvXmohTRaal7tVZO/AKBuZ02kPBiZKSU4qa/k=",
         "owner": "LnL7",
         "repo": "nix-darwin",
-        "rev": "7b1d394e7d9112d4060e12ef3271b38a7c43e83b",
+        "rev": "0d7874ef7e3ba02d58bebb871e6e29da36fa1b37",
         "type": "github"
       },
       "original": {
@@ -106,11 +106,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1766150702,
-        "narHash": "sha256-P0kM+5o+DKnB6raXgFEk3azw8Wqg5FL6wyl9jD+G5a4=",
+        "lastModified": 1769524058,
+        "narHash": "sha256-zygdD6X1PcVNR2PsyK4ptzrVEiAdbMqLos7utrMDEWE=",
         "owner": "nix-community",
         "repo": "disko",
-        "rev": "916506443ecd0d0b4a0f4cf9d40a3c22ce39b378",
+        "rev": "71a3fc97d80881e91710fe721f1158d3b96ae14d",
         "type": "github"
       },
       "original": {
@@ -128,11 +128,11 @@
         "rust-analyzer-src": "rust-analyzer-src"
       },
       "locked": {
-        "lastModified": 1767941162,
-        "narHash": "sha256-7qJDycrXto4xrQWHbj5BkrRWt/hcfZtjlCstEJTyfJ8=",
+        "lastModified": 1768113825,
+        "narHash": "sha256-f09fAifGPEuRrz1DFY910jexq0DaBuQBbq7WcxQIUgs=",
         "owner": "nix-community",
         "repo": "fenix",
-        "rev": "80b1a19a713e2558c411f3259fecb1edd4b5b327",
+        "rev": "55106e04d905c6a7726d0f6be77ed39a99f66a61",
         "type": "github"
       },
       "original": {
@@ -162,11 +162,11 @@
         "nixpkgs-lib": "nixpkgs-lib"
       },
       "locked": {
-        "lastModified": 1768135262,
-        "narHash": "sha256-PVvu7OqHBGWN16zSi6tEmPwwHQ4rLPU9Plvs8/1TUBY=",
+        "lastModified": 1769996383,
+        "narHash": "sha256-AnYjnFWgS49RlqX7LrC4uA+sCCDBj0Ry/WOJ5XWAsa0=",
         "owner": "hercules-ci",
         "repo": "flake-parts",
-        "rev": "80daad04eddbbf5a4d883996a73f3f542fa437ac",
+        "rev": "57928607ea566b5db3ad13af0e57e921e6b12381",
         "type": "github"
       },
       "original": {
@@ -183,11 +183,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1765835352,
-        "narHash": "sha256-XswHlK/Qtjasvhd1nOa1e8MgZ8GS//jBoTqWtrS1Giw=",
+        "lastModified": 1769996383,
+        "narHash": "sha256-AnYjnFWgS49RlqX7LrC4uA+sCCDBj0Ry/WOJ5XWAsa0=",
         "owner": "hercules-ci",
         "repo": "flake-parts",
-        "rev": "a34fae9c08a15ad73f295041fec82323541400a9",
+        "rev": "57928607ea566b5db3ad13af0e57e921e6b12381",
         "type": "github"
       },
       "original": {
@@ -254,11 +254,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1768366276,
-        "narHash": "sha256-NUdsaB6H1wvbOC7oh1UZ7Ojg1I+mYBQv8ovlMB6FbHk=",
+        "lastModified": 1770586272,
+        "narHash": "sha256-Ucci8mu8QfxwzyfER2DQDbvW9t1BnTUJhBmY7ybralo=",
         "owner": "nix-community",
         "repo": "home-manager",
-        "rev": "4e235a8746b195e335306d898f0cc93ad6c4564c",
+        "rev": "b1f916ba052341edc1f80d4b2399f1092a4873ca",
         "type": "github"
       },
       "original": {
@@ -270,11 +270,11 @@
     "homebrew-cask": {
       "flake": false,
       "locked": {
-        "lastModified": 1768377904,
-        "narHash": "sha256-e3iYl1dxSuNFaRpFCBEGROh5i9PRhZGxwqWZN47ejtU=",
+        "lastModified": 1770623639,
+        "narHash": "sha256-LNLzbnhp5IEizTMMapF2FtLVD21sFzBfVgXcwNz7fKU=",
         "owner": "homebrew",
         "repo": "homebrew-cask",
-        "rev": "e6ce2fb4e105e8736c8df83bd58aa1c79f1c7e13",
+        "rev": "c3bb7aedf0881187cbeb55ad2873240feba21603",
         "type": "github"
       },
       "original": {
@@ -286,11 +286,11 @@
     "homebrew-core": {
       "flake": false,
       "locked": {
-        "lastModified": 1768381952,
-        "narHash": "sha256-Jv9ZOq8PRLfXZ7VDCMJoPVYZvLjJDzgaiKflU0fj6Qk=",
+        "lastModified": 1770627860,
+        "narHash": "sha256-ihOndNFECGtZhkrtynP8nDJ8fbSxhNd2zWcq3CLDnQA=",
         "owner": "homebrew",
         "repo": "homebrew-core",
-        "rev": "ba0786407a5cb72d3adad8431af343d32882c31e",
+        "rev": "a12e59e6d202fc64aee013f8574c043a4c00a271",
         "type": "github"
       },
       "original": {
@@ -299,22 +299,6 @@
         "type": "github"
       }
     },
-    "jj-ryu": {
-      "flake": false,
-      "locked": {
-        "lastModified": 1768252399,
-        "narHash": "sha256-jafGP3gseSTHI20TqWsbTKLxqNKIpamopwA+0hQtnSs=",
-        "owner": "dmmulroy",
-        "repo": "jj-ryu",
-        "rev": "f4266e2e67cd34e50c552709f87e1506ad27e278",
-        "type": "github"
-      },
-      "original": {
-        "owner": "dmmulroy",
-        "repo": "jj-ryu",
-        "type": "github"
-      }
-    },
     "llm-agents": {
       "inputs": {
         "blueprint": "blueprint",
@@ -322,11 +306,11 @@
         "treefmt-nix": "treefmt-nix"
       },
       "locked": {
-        "lastModified": 1768370489,
-        "narHash": "sha256-/tZo3ePuv6gbJ+OUAtn/vIL/NHwXmVdmTqwpRKKYuW4=",
+        "lastModified": 1770616720,
+        "narHash": "sha256-NY7yFg3ZG0fzseC4SK/TQjgaODczuvCDtJZNsBmN2QU=",
         "owner": "numtide",
         "repo": "llm-agents.nix",
-        "rev": "41130668102a77795069d950e001926dd7542c99",
+        "rev": "09019dadd541051fc11f5008b56f4e8a14d2df4c",
         "type": "github"
       },
       "original": {
@@ -344,11 +328,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1768073227,
-        "narHash": "sha256-tmr6CNYSa0qoNe+5z39+as3Z0baKmF9pe485Z3DVVNU=",
+        "lastModified": 1769947964,
+        "narHash": "sha256-DElM5gwipT82puD7w5KMxG3PGiwozJ2VVXtwwPbwV5g=",
         "owner": "jnsahaj",
         "repo": "lumen",
-        "rev": "dd570ede2d65052ebedb265127c01b1423a67827",
+        "rev": "af5fa88eba126dc4508ddd307fd0a2c78f77c898",
         "type": "github"
       },
       "original": {
@@ -383,11 +367,11 @@
         "brew-src": "brew-src"
       },
       "locked": {
-        "lastModified": 1764473698,
-        "narHash": "sha256-C91gPgv6udN5WuIZWNehp8qdLqlrzX6iF/YyboOj6XI=",
+        "lastModified": 1769437432,
+        "narHash": "sha256-8d7KnCpT2LweRvSzZYEGd9IM3eFX+A78opcnDM0+ndk=",
         "owner": "zhaofengli-wip",
         "repo": "nix-homebrew",
-        "rev": "6a8ab60bfd66154feeaa1021fc3b32684814a62a",
+        "rev": "a5409abd0d5013d79775d3419bcac10eacb9d8c5",
         "type": "github"
       },
       "original": {
@@ -398,11 +382,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1768302833,
-        "narHash": "sha256-h5bRFy9bco+8QcK7rGoOiqMxMbmn21moTACofNLRMP4=",
+        "lastModified": 1770537093,
+        "narHash": "sha256-pF1quXG5wsgtyuPOHcLfYg/ft/QMr8NnX0i6tW2187s=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "61db79b0c6b838d9894923920b612048e1201926",
+        "rev": "fef9403a3e4d31b0a23f0bacebbec52c248fbb51",
         "type": "github"
       },
       "original": {
@@ -414,11 +398,11 @@
     },
     "nixpkgs-lib": {
      "locked": {
-        "lastModified": 1765674936,
-        "narHash": "sha256-k00uTP4JNfmejrCLJOwdObYC9jHRrr/5M/a/8L2EIdo=",
+        "lastModified": 1769909678,
+        "narHash": "sha256-cBEymOf4/o3FD5AZnzC3J9hLbiZ+QDT/KDuyHXVJOpM=",
         "owner": "nix-community",
         "repo": "nixpkgs.lib",
-        "rev": "2075416fcb47225d9b68ac469a5c4801a9c4dd85",
+        "rev": "72716169fe93074c333e8d0173151350670b824c",
         "type": "github"
       },
       "original": {
@@ -429,11 +413,11 @@
     },
     "nixpkgs_2": {
       "locked": {
-        "lastModified": 1768381560,
-        "narHash": "sha256-iBGGNRRhSRUwk3YXVTqV1yo9OIo77GMXvH24JXPRQ8s=",
+        "lastModified": 1770627848,
+        "narHash": "sha256-pWVT4wjh+HKIdvGhph0vU1Kh48OSaSutPGpXxGNxSxw=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "afcce51e9741862bb9381853a94f7580a4ad1978",
+        "rev": "fe776c9fe2c37f51546bb50ced285ea2a365e7d9",
         "type": "github"
       },
       "original": {
@@ -445,11 +429,11 @@
     },
     "nixpkgs_3": {
       "locked": {
-        "lastModified": 1767026758,
-        "narHash": "sha256-7fsac/f7nh/VaKJ/qm3I338+wAJa/3J57cOGpXi0Sbg=",
+        "lastModified": 1770380644,
+        "narHash": "sha256-P7dWMHRUWG5m4G+06jDyThXO7kwSk46C1kgjEWcybkE=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "346dd96ad74dc4457a9db9de4f4f57dab2e5731d",
+        "rev": "ae67888ff7ef9dff69b3cf0cc0fbfbcd3a722abe",
         "type": "github"
       },
       "original": {
@@ -482,11 +466,11 @@
         "systems": "systems_3"
       },
       "locked": {
-        "lastModified": 1767906546,
-        "narHash": "sha256-AoSWS8+P+7hQ/jIdv0wBjgH1MvnerdWBFXO4GV3JoQs=",
+        "lastModified": 1770627083,
+        "narHash": "sha256-Js8WrUwQ3lLRjWb8jGGE5npRN96E4mtPwyuNDuCDkcg=",
         "owner": "nix-community",
         "repo": "nixvim",
-        "rev": "7eb8f36f085b85a2aeff929aff52d0f6aa14e000",
+        "rev": "d354487c4692de3d0918170c45bde05175b12e30",
         "type": "github"
       },
       "original": {
@@ -495,6 +479,55 @@
         "type": "github"
       }
     },
+    "nono": {
+      "flake": false,
+      "locked": {
+        "lastModified": 1770553882,
+        "narHash": "sha256-yEXw+rtuhoZvx1eO2Q+qPeGpvVbyASh7D9YEVAteoo8=",
+        "owner": "lukehinds",
+        "repo": "nono",
+        "rev": "e80983bb6a4058335e96e02eeabe17314f771a9c",
+        "type": "github"
+      },
+      "original": {
+        "owner": "lukehinds",
+        "repo": "nono",
+        "type": "github"
+      }
+    },
+    "openusage": {
+      "flake": false,
+      "locked": {
+        "lastModified": 1770543295,
+        "narHash": "sha256-DvgEPZhFm06igalUPgnQ8VLkl0gk/3rm+lbEJ2/s7gM=",
+        "owner": "robinebers",
+        "repo": "openusage",
+        "rev": "22a7bd5f7856397400e60dd787ad82b23c763969",
+        "type": "github"
+      },
+      "original": {
+        "owner": "robinebers",
+        "ref": "v0.5.1",
+        "repo": "openusage",
+        "type": "github"
+      }
+    },
+    "overseer": {
+      "flake": false,
+      "locked": {
+        "lastModified": 1770303305,
+        "narHash": "sha256-NM1haQAk1mWdmewgIv6tzApaIQxWKrIrri0+uXHY3Zc=",
+        "owner": "dmmulroy",
+        "repo": "overseer",
+        "rev": "5880d97939744ff72eb552c671da2fae1789041e",
+        "type": "github"
+      },
+      "original": {
+        "owner": "dmmulroy",
+        "repo": "overseer",
+        "type": "github"
+      }
+    },
     "root": {
       "inputs": {
         "colmena": "colmena",
@@ -504,12 +537,14 @@
         "home-manager": "home-manager",
         "homebrew-cask": "homebrew-cask",
         "homebrew-core": "homebrew-core",
-        "jj-ryu": "jj-ryu",
         "llm-agents": "llm-agents",
         "lumen": "lumen",
         "nix-homebrew": "nix-homebrew",
         "nixpkgs": "nixpkgs_2",
         "nixvim": "nixvim",
+        "nono": "nono",
+        "openusage": "openusage",
+        "overseer": "overseer",
         "sops-nix": "sops-nix",
         "zjstatus": "zjstatus"
       }
@@ -517,11 +552,11 @@
     "rust-analyzer-src": {
       "flake": false,
       "locked": {
-        "lastModified": 1767905519,
-        "narHash": "sha256-mRU9VEhGQE9dnOU3pu1Rx3dZO4NpZO+cnC0rPMFcCqE=",
+        "lastModified": 1768083390,
+        "narHash": "sha256-TGWPJq2mXwxfAe83iZ18DIqXC4sOSj7RkW9b59h6Ox4=",
         "owner": "rust-lang",
         "repo": "rust-analyzer",
-        "rev": "ff9a2e88b14907562294838f83963e5966f717de",
+        "rev": "e42e8ff582ba12a88b6845525d08b6428e6d0fb9",
         "type": "github"
       },
       "original": {
@@ -559,11 +594,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1768271704,
-        "narHash": "sha256-jJqlW8A3OZ5tYbXphF7U8P8g/3Cn8PPwPa4YlJ/9agg=",
+        "lastModified": 1770526836,
+        "narHash": "sha256-xbvX5Ik+0inJcLJtJ/AajAt7xCk6FOCrm5ogpwwvVDg=",
         "owner": "Mic92",
         "repo": "sops-nix",
-        "rev": "691b8b6713855d0fe463993867291c158472fc6f",
+        "rev": "d6e0e666048a5395d6ea4283143b7c9ac704720d",
         "type": "github"
       },
       "original": {
@@ -656,11 +691,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1768158989,
-        "narHash": "sha256-67vyT1+xClLldnumAzCTBvU0jLZ1YBcf4vANRWP3+Ak=",
+        "lastModified": 1770228511,
+        "narHash": "sha256-wQ6NJSuFqAEmIg2VMnLdCnUc0b7vslUohqqGGD+Fyxk=",
         "owner": "numtide",
         "repo": "treefmt-nix",
-        "rev": "e96d59dff5c0d7fddb9d113ba108f03c3ef99eca",
+        "rev": "337a4fe074be1042a35086f15481d763b8ddc0e7",
         "type": "github"
       },
       "original": {

@@ -40,8 +40,16 @@
       url = "github:jnsahaj/lumen";
       inputs.nixpkgs.follows = "nixpkgs";
     };
-    jj-ryu = {
-      url = "github:dmmulroy/jj-ryu";
+    nono = {
+      url = "github:lukehinds/nono";
+      flake = false;
+    };
+    overseer = {
+      url = "github:dmmulroy/overseer";
+      flake = false;
+    };
+    openusage = {
+      url = "github:robinebers/openusage/v0.5.1";
       flake = false;
     };
   };
@@ -54,7 +62,7 @@
     inherit (constants) user;
     darwinHosts = ["chidi" "jason"];
-    nixosHosts = ["derek" "michael" "tahani"];
+    nixosHosts = ["michael" "tahani"];
     overlays = import ./overlays {inherit inputs;};
     nixpkgsConfig = hostPlatform: {
@@ -130,6 +138,11 @@
     pgbackrest = ./modules/pgbackrest.nix;
   };
+  flake.overlays = {
+    default = lib.composeManyExtensions overlays;
+    list = overlays;
+  };
   flake.lib = {inherit constants;};
   perSystem = {
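The new `flake.overlays` output follows the standard overlay-export convention: `default` is every overlay composed into one via `lib.composeManyExtensions`, while `list` keeps them separable. A minimal sketch of how a downstream flake could consume it, assuming this repo is pulled in as an input named `dotfiles` (the input URL and system are illustrative):

```nix
{
  inputs.dotfiles.url = "github:example/dotfiles"; # hypothetical flake URL

  outputs = {nixpkgs, dotfiles, ...}: let
    pkgs = import nixpkgs {
      system = "aarch64-darwin";
      # One entry suffices because `default` is the composed overlay;
      # `dotfiles.overlays.list` would allow cherry-picking instead.
      overlays = [dotfiles.overlays.default];
    };
  in {
    # lumen, nono, openusage, and overseer (defined in overlays/ below)
    # now resolve through the exported overlay set.
    packages.aarch64-darwin.lumen = pkgs.lumen;
  };
}
```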

@@ -21,22 +21,22 @@
   home-manager.users.${user} = {
     imports = [
       ../../profiles/atuin.nix
+      ../../profiles/aerospace.nix
       ../../profiles/bash.nix
       ../../profiles/bat.nix
       ../../profiles/direnv.nix
-      ../../profiles/eza.nix
-      ../../profiles/fish.nix
+      ../../profiles/nushell.nix
       ../../profiles/fzf.nix
       ../../profiles/ghostty.nix
       ../../profiles/git.nix
       ../../profiles/home.nix
-      ../../profiles/jjui.nix
-      ../../profiles/jujutsu.nix
       ../../profiles/lazygit.nix
       ../../profiles/lumen.nix
       ../../profiles/mise.nix
+      ../../profiles/nono.nix
       ../../profiles/neovim
       ../../profiles/opencode.nix
+      ../../profiles/claude-code.nix
       ../../profiles/ripgrep.nix
       ../../profiles/ssh.nix
       ../../profiles/starship.nix

@@ -1,54 +0,0 @@
{...}: {
  programs.vdirsyncer = {
    enable = true;
  };
  programs.khal = {
    enable = true;
    locale = {
      local_timezone = "Europe/Berlin";
      default_timezone = "Europe/Berlin";
      timeformat = "%H:%M";
      dateformat = "%d/%m/%Y";
      longdateformat = "%d/%m/%Y";
      datetimeformat = "%d/%m/%Y %H:%M";
      longdatetimeformat = "%d/%m/%Y %H:%M";
    };
  };
  accounts.calendar = {
    basePath = ".local/share/calendars";
    accounts.icloud = {
      primary = true;
      primaryCollection = "home";
      remote = {
        type = "caldav";
        url = "https://caldav.icloud.com/";
        userName = "christoph@schmatzler.com";
        passwordCommand = ["cat" "/run/secrets/derek-icloud-password"];
      };
      local = {
        type = "filesystem";
        fileExt = ".ics";
      };
      vdirsyncer = {
        enable = true;
        collections = ["from a" "from b"];
        metadata = ["color" "displayname"];
      };
      khal = {
        enable = true;
        type = "discover";
      };
    };
  };
  services.vdirsyncer = {
    enable = true;
    frequency = "*:0/15";
  };
}

@@ -1,53 +0,0 @@
{
  pkgs,
  inputs,
  user,
  hostname,
  modulesPath,
  ...
}: {
  imports = [
    (modulesPath + "/installer/scan/not-detected.nix")
    (modulesPath + "/profiles/qemu-guest.nix")
    ./disk-config.nix
    ./hardware-configuration.nix
    ./secrets.nix
    ../../profiles/core.nix
    ../../profiles/fail2ban.nix
    ../../profiles/nixos.nix
    ../../profiles/openssh.nix
    ../../profiles/tailscale.nix
    inputs.disko.nixosModules.disko
    inputs.sops-nix.nixosModules.sops
  ];
  networking.hostName = hostname;
  environment.systemPackages = with pkgs; [
    chromium
    playwright-driver.browsers
  ];
  home-manager.users.${user} = {
    imports = [
      ../../profiles/bash.nix
      ../../profiles/bat.nix
      ../../profiles/direnv.nix
      ../../profiles/eza.nix
      ../../profiles/fish.nix
      ../../profiles/fzf.nix
      ../../profiles/git.nix
      ../../profiles/home.nix
      ../../profiles/jjui.nix
      ../../profiles/jujutsu.nix
      ../../profiles/lazygit.nix
      ../../profiles/neovim
      ../../profiles/ripgrep.nix
      ../../profiles/ssh.nix
      ../../profiles/starship.nix
      ../../profiles/zoxide.nix
      ./calendar.nix
      inputs.nixvim.homeModules.nixvim
    ];
  };
}

@@ -1,37 +0,0 @@
{
  disko.devices = {
    disk = {
      main = {
        type = "disk";
        device = "/dev/sda";
        content = {
          type = "gpt";
          partitions = {
            boot = {
              size = "1M";
              type = "EF02";
            };
            ESP = {
              size = "512M";
              type = "EF00";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
                mountOptions = ["umask=0077"];
              };
            };
            root = {
              size = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
    };
  };
}

@@ -1,18 +0,0 @@
{
  lib,
  modulesPath,
  ...
}: {
  imports = [
    (modulesPath + "/profiles/qemu-guest.nix")
  ];
  boot.initrd.availableKernelModules = ["ahci" "xhci_pci" "virtio_pci" "virtio_scsi" "sd_mod" "sr_mod"];
  boot.initrd.kernelModules = [];
  boot.kernelModules = [];
  boot.extraModulePackages = [];
  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
  networking.useDHCP = lib.mkDefault true;
}

@@ -1,9 +0,0 @@
{user, ...}: {
  sops.secrets = {
    derek-icloud-password = {
      sopsFile = ../../secrets/derek-icloud-password;
      format = "binary";
      owner = user;
    };
  };
}

@@ -20,22 +20,22 @@
   home-manager.users.${user} = {
     imports = [
       ../../profiles/atuin.nix
+      ../../profiles/aerospace.nix
       ../../profiles/bash.nix
       ../../profiles/bat.nix
       ../../profiles/direnv.nix
-      ../../profiles/eza.nix
-      ../../profiles/fish.nix
+      ../../profiles/nushell.nix
       ../../profiles/fzf.nix
       ../../profiles/ghostty.nix
       ../../profiles/git.nix
       ../../profiles/home.nix
-      ../../profiles/jjui.nix
-      ../../profiles/jujutsu.nix
       ../../profiles/lazygit.nix
       ../../profiles/lumen.nix
       ../../profiles/mise.nix
+      ../../profiles/nono.nix
       ../../profiles/neovim
       ../../profiles/opencode.nix
+      ../../profiles/claude-code.nix
       ../../profiles/ripgrep.nix
       ../../profiles/ssh.nix
       ../../profiles/starship.nix

@@ -39,22 +39,9 @@
   home-manager.users.${user} = {
     imports = [
-      ../../profiles/bash.nix
-      ../../profiles/bat.nix
-      ../../profiles/direnv.nix
-      ../../profiles/eza.nix
-      ../../profiles/fish.nix
-      ../../profiles/fzf.nix
-      ../../profiles/git.nix
+      ../../profiles/nushell.nix
       ../../profiles/home.nix
-      ../../profiles/jjui.nix
-      ../../profiles/jujutsu.nix
-      ../../profiles/lazygit.nix
-      ../../profiles/neovim
-      ../../profiles/ripgrep.nix
       ../../profiles/ssh.nix
-      ../../profiles/starship.nix
-      ../../profiles/zoxide.nix
       inputs.nixvim.homeModules.nixvim
     ];
   };

@@ -1,6 +1,7 @@
-{...}: {
+{
   services.adguardhome = {
     enable = true;
+    host = "0.0.0.0";
     port = 10000;
     settings = {
       dns = {
@@ -15,7 +16,42 @@
         safe_search = {
           enabled = false;
         };
+        safebrowsing_enabled = true;
+        blocked_response_ttl = 10;
+        filters_update_interval = 24;
+        blocked_services = {
+          ids = [
+            "reddit"
+            "twitter"
+          ];
+        };
       };
+      filters = [
+        {
+          enabled = true;
+          url = "https://cdn.jsdelivr.net/gh/hagezi/dns-blocklists@latest/adblock/pro.txt";
+          name = "HaGeZi Multi PRO";
+          id = 1;
+        }
+        {
+          enabled = true;
+          url = "https://cdn.jsdelivr.net/gh/hagezi/dns-blocklists@latest/adblock/tif.txt";
+          name = "HaGeZi Threat Intelligence Feeds";
+          id = 2;
+        }
+        {
+          enabled = true;
+          url = "https://cdn.jsdelivr.net/gh/hagezi/dns-blocklists@latest/adblock/gambling.txt";
+          name = "HaGeZi Gambling";
+          id = 3;
+        }
+        {
+          enabled = true;
+          url = "https://cdn.jsdelivr.net/gh/hagezi/dns-blocklists@latest/adblock/nsfw.txt";
+          name = "HaGeZi NSFW";
+          id = 4;
+        }
+      ];
     };
   };
 }

@@ -26,18 +26,18 @@
       ../../profiles/bash.nix
       ../../profiles/bat.nix
       ../../profiles/direnv.nix
-      ../../profiles/eza.nix
-      ../../profiles/fish.nix
+      ../../profiles/nushell.nix
       ../../profiles/fzf.nix
       ../../profiles/git.nix
       ../../profiles/home.nix
-      ../../profiles/jjui.nix
-      ../../profiles/jujutsu.nix
       ../../profiles/lazygit.nix
       ../../profiles/lumen.nix
       ../../profiles/mise.nix
+      ../../profiles/nono.nix
       ../../profiles/neovim
       ../../profiles/opencode.nix
+      ../../profiles/overseer.nix
+      ../../profiles/claude-code.nix
       ../../profiles/ripgrep.nix
       ../../profiles/ssh.nix
       ../../profiles/starship.nix
@@ -52,6 +52,8 @@
   virtualisation.docker.enable = true;
+  users.users.${user}.extraGroups = ["docker"];
   swapDevices = [
     {
       device = "/swapfile";

@@ -1,4 +1,6 @@
 {config, ...}: {
+  services.tailscale.extraSetFlags = ["--accept-routes=false"];
   networking = {
     useDHCP = false;
     interfaces.eno1.ipv4.addresses = [
@@ -12,8 +14,14 @@
     firewall = {
       enable = true;
       trustedInterfaces = ["eno1" "tailscale0"];
-      allowedUDPPorts = [config.services.tailscale.port];
-      allowedTCPPorts = [22];
+      allowedUDPPorts = [
+        53
+        config.services.tailscale.port
+      ];
+      allowedTCPPorts = [
+        22
+        53
+      ];
       checkReversePath = "loose";
     };
   };

@@ -0,0 +1,20 @@
{
  input,
  prev,
}: let
  manifest = (prev.lib.importTOML "${input}/Cargo.toml").package;
in
  prev.rustPlatform.buildRustPackage {
    pname = manifest.name;
    version = manifest.version;
    cargoLock.lockFile = "${input}/Cargo.lock";
    src = input;
    nativeBuildInputs = [prev.pkg-config];
    buildInputs = [prev.openssl];
    OPENSSL_NO_VENDOR = 1;
    doCheck = false;
  }
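This helper reads `pname` and `version` straight out of the input's `Cargo.toml`, so a consuming overlay only supplies `prev` and the flake input; the refactored lumen overlay further down does exactly this. Sketched generically (the `mytool` name and input are hypothetical):

```nix
{inputs}: final: prev: {
  mytool =
    import ../lib/build-rust-package.nix {
      inherit prev;
      # hypothetical flake input with Cargo.toml/Cargo.lock at its root
      input = inputs.mytool;
    };
}
```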

@@ -1,18 +0,0 @@
{inputs}: final: prev: let
  manifest = (prev.lib.importTOML "${inputs.jj-ryu}/Cargo.toml").package;
in {
  jj-ryu = prev.rustPlatform.buildRustPackage {
    pname = manifest.name;
    version = manifest.version;
    cargoLock.lockFile = "${inputs.jj-ryu}/Cargo.lock";
    src = inputs.jj-ryu;
    nativeBuildInputs = [prev.pkg-config];
    buildInputs = [prev.openssl];
    OPENSSL_NO_VENDOR = 1;
    doCheck = false;
  };
}

@@ -1,18 +1,7 @@
-{inputs}: final: prev: let
-  manifest = (prev.lib.importTOML "${inputs.lumen}/Cargo.toml").package;
-in {
-  lumen = prev.rustPlatform.buildRustPackage {
-    pname = manifest.name;
-    version = manifest.version;
-    cargoLock.lockFile = "${inputs.lumen}/Cargo.lock";
-    src = inputs.lumen;
-    nativeBuildInputs = [prev.pkg-config];
-    buildInputs = [prev.openssl];
-    OPENSSL_NO_VENDOR = 1;
-    doCheck = false;
-  };
+{inputs}: final: prev: {
+  lumen =
+    import ../lib/build-rust-package.nix {
+      inherit prev;
+      input = inputs.lumen;
+    };
 }

overlays/nono.nix (new file)

@@ -0,0 +1,19 @@
{inputs}: final: prev: let
  manifest = (prev.lib.importTOML "${inputs.nono}/Cargo.toml").package;
in {
  nono =
    prev.rustPlatform.buildRustPackage {
      pname = manifest.name;
      version = manifest.version;
      cargoLock.lockFile = "${inputs.nono}/Cargo.lock";
      src = inputs.nono;
      nativeBuildInputs = with prev; [pkg-config];
      buildInputs = with prev; [openssl dbus];
      OPENSSL_NO_VENDOR = 1;
      doCheck = false;
    };
}

overlays/openusage.nix (new file)

@@ -0,0 +1,132 @@
{inputs}: final: prev: let
  version = "0.5.1";
in {
  openusage =
    prev.rustPlatform.buildRustPackage (finalAttrs: {
      pname = "openusage";
      inherit version;
      src = inputs.openusage;
      cargoRoot = "src-tauri";
      cargoLock = {
        lockFile = "${inputs.openusage}/src-tauri/Cargo.lock";
        outputHashes = {
          "tauri-nspanel-2.1.0" = "sha256-PLACEHOLDER";
          "tauri-plugin-aptabase-1.0.0" = "sha256-PLACEHOLDER";
        };
      };
      buildAndTestSubdir = finalAttrs.cargoRoot;
      node_modules =
        prev.stdenv.mkDerivation {
          inherit (finalAttrs) src version;
          pname = "${finalAttrs.pname}-node_modules";
          impureEnvVars =
            prev.lib.fetchers.proxyImpureEnvVars
            ++ [
              "GIT_PROXY_COMMAND"
              "SOCKS_SERVER"
            ];
          nativeBuildInputs = [
            prev.bun
            prev.writableTmpDirAsHomeHook
          ];
          dontConfigure = true;
          dontFixup = true;
          dontPatchShebangs = true;
          buildPhase = ''
            runHook preBuild
            export BUN_INSTALL_CACHE_DIR=$(mktemp -d)
            bun install \
              --no-progress \
              --frozen-lockfile \
              --ignore-scripts
            runHook postBuild
          '';
          installPhase = ''
            runHook preInstall
            cp -R ./node_modules $out
            runHook postInstall
          '';
          outputHash = "sha256-PLACEHOLDER";
          outputHashMode = "recursive";
        };
      nativeBuildInputs = [
        prev.cargo-tauri.hook
        prev.rustPlatform.bindgenHook
        prev.bun
        prev.nodejs
        prev.pkg-config
        prev.makeBinaryWrapper
      ];
      buildInputs =
        prev.lib.optionals prev.stdenv.isDarwin (
          with prev.darwin.apple_sdk.frameworks; [
            AppKit
            CoreFoundation
            CoreServices
            Security
            WebKit
          ]
        );
      # Disable updater artifact generation — we don't have signing keys.
      tauriConf = builtins.toJSON {bundle.createUpdaterArtifacts = false;};
      passAsFile = ["tauriConf"];
      preBuild = ''
        tauriBuildFlags+=(
          "--config"
          "$tauriConfPath"
        )
      '';
      configurePhase = ''
        runHook preConfigure
        # Copy pre-fetched node_modules
        cp -R ${finalAttrs.node_modules} node_modules/
        chmod -R u+rw node_modules
        chmod -R u+x node_modules/.bin
        patchShebangs node_modules
        export HOME=$TMPDIR
        export PATH="$PWD/node_modules/.bin:$PATH"
        # Bundle plugins (copy from plugins/ to src-tauri/resources/bundled_plugins/)
        ${prev.nodejs}/bin/node copy-bundled.cjs
        runHook postConfigure
      '';
      env = {
        OPENSSL_NO_VENDOR = true;
      };
      doCheck = false;
      postInstall =
        prev.lib.optionalString prev.stdenv.isDarwin ''
          makeWrapper $out/Applications/OpenUsage.app/Contents/MacOS/OpenUsage $out/bin/openusage
        '';
      meta = {
        description = "Track all your AI coding subscriptions in one place";
        homepage = "https://github.com/robinebers/openusage";
        license = prev.lib.licenses.mit;
        platforms = prev.lib.platforms.darwin;
        mainProgram = "openusage";
      };
    });
}
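The `sha256-PLACEHOLDER` values above (left unresolved in the source) follow the usual fixed-output bring-up trick: build once with a dummy hash and copy the real value out of the resulting hash-mismatch error. nixpkgs ships a canonical dummy for this; a minimal sketch:

```nix
# During bring-up, substitute lib.fakeHash and rebuild; Nix fails with
# "hash mismatch ... got: sha256-..." and that value is pasted back in.
outputHash = prev.lib.fakeHash;
```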

overlays/overseer.nix (new file)

@@ -0,0 +1,101 @@
{inputs}: final: prev: let
  manifest = (prev.lib.importTOML "${inputs.overseer}/overseer/Cargo.toml").package;
  overseer-cli =
    prev.rustPlatform.buildRustPackage {
      pname = "overseer-cli";
      version = manifest.version;
      cargoLock.lockFile = "${inputs.overseer}/overseer/Cargo.lock";
      src = "${inputs.overseer}/overseer";
      nativeBuildInputs = with prev; [
        pkg-config
      ];
      buildInputs = with prev; [
        openssl
      ];
      OPENSSL_NO_VENDOR = 1;
      doCheck = false;
    };
  overseer-host =
    prev.buildNpmPackage {
      pname = "overseer-host";
      version = manifest.version;
      src = "${inputs.overseer}/host";
      npmDepsHash = "sha256-WIjx6N8vnH3C6Kxn4tiryi3bM0xnov5ok2k9XrndIS0=";
      buildPhase = ''
        runHook preBuild
        npm run build
        runHook postBuild
      '';
      installPhase = ''
        runHook preInstall
        mkdir -p $out
        cp -r dist $out/
        cp -r node_modules $out/
        cp package.json $out/
        runHook postInstall
      '';
    };
  overseer-ui =
    prev.buildNpmPackage {
      pname = "overseer-ui";
      version = manifest.version;
      src = "${inputs.overseer}/ui";
      npmDepsHash = "sha256-krOsSd8OAPsdCOCf1bcz9c/Myj6jpHOkaD/l+R7PQpY=";
      buildPhase = ''
        runHook preBuild
        npm run build
        runHook postBuild
      '';
      installPhase = ''
        runHook preInstall
        mkdir -p $out
        cp -r dist $out/
        runHook postInstall
      '';
    };
in {
  # The CLI looks for host/dist/index.js and ui/dist relative to the binary
  # Using paths like: exe_dir.join("../@dmmulroy/overseer/host/dist/index.js")
  # So we create: bin/os and @dmmulroy/overseer/host/dist/index.js
  overseer =
    prev.runCommand "overseer-${manifest.version}" {
      nativeBuildInputs = [prev.makeWrapper];
    } ''
      # Create npm-like structure that the CLI expects
      mkdir -p $out/bin
      mkdir -p $out/@dmmulroy/overseer/host
      mkdir -p $out/@dmmulroy/overseer/ui
      # Copy host files
      cp -r ${overseer-host}/dist $out/@dmmulroy/overseer/host/
      cp -r ${overseer-host}/node_modules $out/@dmmulroy/overseer/host/
      cp ${overseer-host}/package.json $out/@dmmulroy/overseer/host/
      # Copy UI files
      cp -r ${overseer-ui}/dist $out/@dmmulroy/overseer/ui/
      # Copy CLI binary
      cp ${overseer-cli}/bin/os $out/bin/os
      # Make wrapper that ensures node is available
      wrapProgram $out/bin/os \
        --prefix PATH : ${prev.nodejs}/bin
    '';
}
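The `runCommand` stitches the three derivations into the npm-style layout the CLI probes for at runtime. Derived from the mkdir/cp commands above, the resulting store path looks roughly like this:

```
overseer-<version>/
├── bin/
│   └── os                      # wrapped CLI, node prefixed onto PATH
└── @dmmulroy/overseer/
    ├── host/
    │   ├── dist/index.js
    │   ├── node_modules/
    │   └── package.json
    └── ui/
        └── dist/
```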

profiles/aerospace.nix (new file)

@@ -0,0 +1,142 @@
{
  programs.aerospace = {
    enable = true;
    launchd.enable = true;
    settings = {
      start-at-login = true;
      accordion-padding = 30;
      default-root-container-layout = "tiles";
      default-root-container-orientation = "auto";
      on-focused-monitor-changed = [
        "move-mouse monitor-lazy-center"
      ];
      workspace-to-monitor-force-assignment = {
        "1" = "secondary";
        "2" = "secondary";
        "3" = "secondary";
        "4" = "secondary";
        "5" = "secondary";
        "6" = "secondary";
        "7" = "secondary";
        "8" = "secondary";
        "9" = "main";
      };
      gaps = {
        inner = {
          horizontal = 8;
          vertical = 8;
        };
        outer = {
          left = 8;
          right = 8;
          top = 8;
          bottom = 8;
        };
      };
      on-window-detected = [
        {
          "if" = {
            "app-id" = "com.apple.systempreferences";
          };
          run = "layout floating";
        }
        {
          "if" = {
            "app-id" = "com.mitchellh.ghostty";
          };
          run = ["layout tiling" "move-node-to-workspace 3"];
        }
        {
          "if" = {
            "app-id" = "net.imput.helium";
          };
          run = "move-node-to-workspace 2";
        }
        {
          "if" = {
            "app-id" = "com.tinyspeck.slackmacgap";
          };
          run = "move-node-to-workspace 5";
        }
        {
          "if" = {
            "app-id" = "net.whatsapp.WhatsApp";
          };
          run = "move-node-to-workspace 5";
        }
        {
          "if" = {
            "app-id" = "com.tidal.desktop";
          };
          run = "move-node-to-workspace 6";
        }
      ];
      mode = {
        main.binding = {
          "alt-enter" = "exec-and-forget open -a Ghostty";
          "alt-h" = "focus left";
          "alt-j" = "focus down";
          "alt-k" = "focus up";
          "alt-l" = "focus right";
          "alt-shift-h" = "move left";
          "alt-shift-j" = "move down";
          "alt-shift-k" = "move up";
          "alt-shift-l" = "move right";
          "alt-ctrl-h" = "focus-monitor --wrap-around left";
          "alt-ctrl-j" = "focus-monitor --wrap-around down";
          "alt-ctrl-k" = "focus-monitor --wrap-around up";
          "alt-ctrl-l" = "focus-monitor --wrap-around right";
          "alt-ctrl-shift-h" = "move-node-to-monitor --focus-follows-window --wrap-around left";
          "alt-ctrl-shift-j" = "move-node-to-monitor --focus-follows-window --wrap-around down";
          "alt-ctrl-shift-k" = "move-node-to-monitor --focus-follows-window --wrap-around up";
          "alt-ctrl-shift-l" = "move-node-to-monitor --focus-follows-window --wrap-around right";
          "alt-space" = "layout tiles accordion";
          "alt-shift-space" = "layout floating tiling";
          "alt-slash" = "layout horizontal vertical";
          "alt-f" = "fullscreen";
          "alt-tab" = "workspace-back-and-forth";
          "alt-shift-tab" = "move-workspace-to-monitor --wrap-around next";
          "alt-r" = "mode resize";
          "alt-shift-semicolon" = "mode service";
          "alt-1" = "workspace 1";
          "alt-2" = "workspace 2";
          "alt-3" = "workspace 3";
          "alt-4" = "workspace 4";
          "alt-5" = "workspace 5";
          "alt-6" = "workspace 6";
          "alt-7" = "workspace 7";
          "alt-8" = "workspace 8";
          "alt-9" = "workspace 9";
          "alt-shift-1" = "move-node-to-workspace --focus-follows-window 1";
          "alt-shift-2" = "move-node-to-workspace --focus-follows-window 2";
          "alt-shift-3" = "move-node-to-workspace --focus-follows-window 3";
          "alt-shift-4" = "move-node-to-workspace --focus-follows-window 4";
          "alt-shift-5" = "move-node-to-workspace --focus-follows-window 5";
          "alt-shift-6" = "move-node-to-workspace --focus-follows-window 6";
          "alt-shift-7" = "move-node-to-workspace --focus-follows-window 7";
          "alt-shift-8" = "move-node-to-workspace --focus-follows-window 8";
          "alt-shift-9" = "move-node-to-workspace --focus-follows-window 9";
        };
        resize.binding = {
          "h" = "resize width -50";
          "j" = "resize height +50";
          "k" = "resize height -50";
          "l" = "resize width +50";
          "enter" = "mode main";
          "esc" = "mode main";
        };
        service.binding = {
          "esc" = "mode main";
          "r" = ["reload-config" "mode main"];
          "b" = ["balance-sizes" "mode main"];
          "f" = ["layout floating tiling" "mode main"];
          "backspace" = ["close-all-windows-but-current" "mode main"];
        };
      };
    };
  };
}

@@ -1,7 +1,7 @@
 {
   programs.atuin = {
     enable = true;
-    enableFishIntegration = true;
+    enableNushellIntegration = true;
     flags = [
       "--disable-up-arrow"
     ];

profiles/claude-code.nix (new file)

@@ -0,0 +1,9 @@
{
  inputs,
  pkgs,
  ...
}: {
  home.packages = [
    inputs.llm-agents.packages.${pkgs.stdenv.hostPlatform.system}.claude-code
  ];
}

@@ -1,5 +1,6 @@
 {pkgs, ...}: {
   programs.fish.enable = true;
+  environment.shells = [pkgs.nushell];
   nixpkgs = {
     config = {

@@ -81,6 +81,8 @@
       spaces.spans-displays = false;
+      WindowManager.StandardHideWidgets = true;
       menuExtraClock = {
         Show24Hour = true;
         ShowDate = 1;
@@ -96,6 +98,9 @@
       "com.apple.AdLib" = {
         allowApplePersonalizedAdvertising = false;
       };
+      "com.apple.Spotlight" = {
+        MenuItemHidden = true;
+      };
     };
   };
 };
@@ -113,7 +118,7 @@
     name = user;
     home = "/Users/${user}";
     isHidden = false;
-    shell = pkgs.fish;
+    shell = pkgs.nushell;
   };
   home-manager.useGlobalPkgs = true;

@@ -1,6 +0,0 @@
{
  programs.eza = {
    enable = true;
    enableFishIntegration = true;
  };
}

@@ -1,54 +0,0 @@
{
  programs.fish = {
    enable = true;
    functions = {
      open_project = ''
        set -l base "$HOME/Projects"
        set -l choice (fd -t d -d 1 -a . "$base/Personal" "$base/Work" \
          | string replace -r -- "^$base/" "" \
          | fzf --prompt "project > ")
        test -n "$choice"; and cd "$base/$choice"
      '';
    };
    interactiveShellInit = ''
      set fish_greeting
      set fish_color_normal 4c4f69
      set fish_color_command 1e66f5
      set fish_color_param dd7878
      set fish_color_keyword d20f39
      set fish_color_quote 40a02b
      set fish_color_redirection ea76cb
      set fish_color_end fe640b
      set fish_color_comment 8c8fa1
      set fish_color_error d20f39
      set fish_color_gray 9ca0b0
      set fish_color_selection --background=ccd0da
      set fish_color_search_match --background=ccd0da
      set fish_color_option 40a02b
      set fish_color_operator ea76cb
      set fish_color_escape e64553
      set fish_color_autosuggestion 9ca0b0
      set fish_color_cancel d20f39
      set fish_color_cwd df8e1d
      set fish_color_user 179299
      set fish_color_host 1e66f5
      set fish_color_host_remote 40a02b
      set fish_color_status d20f39
      set fish_pager_color_progress 9ca0b0
      set fish_pager_color_prefix ea76cb
      set fish_pager_color_completion 4c4f69
      set fish_pager_color_description 9ca0b0
      set -gx LS_COLORS "$(vivid generate catppuccin-latte)"
      set -gx COLORTERM truecolor
      set -gx COLORFGBG "15;0"
      set -gx TERM_BACKGROUND light
      for mode in default insert
        bind --mode $mode \cp open_project
      end
    '';
  };
}

@@ -1,7 +1,6 @@
 {
   programs.fzf = {
     enable = true;
-    enableFishIntegration = true;
   };
   home.sessionVariables = {

@@ -1,6 +1,6 @@
 {pkgs, ...}: {
   xdg.configFile."ghostty/config".text = ''
-    command = ${pkgs.fish}/bin/fish
+    command = ${pkgs.nushell}/bin/nu
     theme = Catppuccin Latte
     window-padding-x = 12
     window-padding-y = 3
@@ -10,7 +10,7 @@
     cursor-style = block
     mouse-hide-while-typing = true
     mouse-scroll-multiplier = 1.25
-    shell-integration = detect
+    shell-integration = none
     shell-integration-features = no-cursor
     clipboard-read = allow
     clipboard-write = allow

@@ -12,6 +12,11 @@ in {
       autocrlf = "input";
       pager = "delta";
     };
+    credential = {
+      helper = "!gh auth git-credential";
+      "https://github.com".useHttpPath = true;
+      "https://gist.github.com".useHttpPath = true;
+    };
     pull.rebase = true;
     rebase.autoStash = true;
     interactive.diffFilter = "delta --color-only";
@@ -90,15 +95,10 @@
     gf = "git fetch";
     gfa = "git fetch --all --tags --prune";
     gfo = "git fetch origin";
-    gfg = "git ls-files | grep";
     gg = "git gui citool";
     gga = "git gui citool --amend";
-    ggpull = "git pull origin \"$(git branch --show-current)\"";
-    ggpush = "git push origin \"$(git branch --show-current)\"";
-    ggsup = "git branch --set-upstream-to=origin/$(git branch --show-current)";
     ghh = "git help";
     gignore = "git update-index --assume-unchanged";
-    gignored = "git ls-files -v | grep \"^[[:lower:]]\"";
     gl = "git pull";
     glg = "git log --stat";
     glgp = "git log --stat --patch";
@@ -113,7 +113,6 @@
     glols = "git log --graph --pretty=\"%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%ar) %C(bold blue)<%an>%Creset\" --stat";
     glod = "git log --graph --pretty=\"%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset\"";
     glods = "git log --graph --pretty=\"%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%ad) %C(bold blue)<%an>%Creset\" --date=short";
-    gluc = "git pull upstream $(git branch --show-current)";
     glum = "git pull upstream main";
     gm = "git merge";
     gma = "git merge --abort";
@@ -128,7 +127,6 @@
     gpd = "git push --dry-run";
     gpf = "git push --force-with-lease";
     gpod = "git push origin --delete";
-    gpoat = "git push origin --all && git push origin --tags";
     gpr = "git pull --rebase";
     gpra = "git pull --rebase --autostash";
     gprav = "git pull --rebase --autostash -v";
@@ -137,8 +135,6 @@
     gprv = "git pull --rebase -v";
     gprum = "git pull --rebase upstream main";
     gprumi = "git pull --rebase=interactive upstream main";
-    gpsup = "git push --set-upstream origin $(git branch --show-current)";
-    gpsupf = "git push --set-upstream origin $(git branch --show-current) --force-with-lease";
     gpv = "git push --verbose";
     gpu = "git push upstream";
     gr = "git remote";
@@ -164,13 +160,11 @@
     grm = "git rm";
     grmc = "git rm --cached";
     grmv = "git remote rename";
-    groh = "git reset origin/$(git branch --show-current) --hard";
     grrm = "git remote remove";
     grs = "git restore";
     grset = "git remote set-url";
     grss = "git restore --source";
     grst = "git restore --staged";
-    grt = "cd \"$(git rev-parse --show-toplevel || echo .)\"";
     gru = "git reset --";
     grup = "git remote update";
     grv = "git remote --verbose";
@@ -196,16 +190,43 @@
     gswm = "git switch main";
     gta = "git tag --annotate";
     gts = "git tag --sign";
-    gtv = "git tag | sort -V";
     gunignore = "git update-index --no-assume-unchanged";
-    gunwip = "git rev-list --max-count=1 --format=\"%s\" HEAD | grep -q \"\\--wip--\" && git reset HEAD~1";
     gwch = "git whatchanged -p --abbrev-commit --pretty=medium";
-    gwipe = "git reset --hard && git clean --force -df";
     gwt = "git worktree";
     gwta = "git worktree add";
     gwtls = "git worktree list";
     gwtmv = "git worktree move";
     gwtrm = "git worktree remove";
-    gwip = "git add -A; git rm $(git ls-files --deleted) 2> /dev/null; git commit --no-verify --no-gpg-sign --message \"--wip-- [skip ci]\"";
   };
+  # Complex git aliases that require pipes/subshells — nushell `alias` can't
+  # handle these, so they're defined as custom commands instead.
+  programs.nushell.extraConfig = ''
+    def ggpull [] { git pull origin (git branch --show-current | str trim) }
+    def ggpush [] { git push origin (git branch --show-current | str trim) }
+    def ggsup [] { git branch $"--set-upstream-to=origin/(git branch --show-current | str trim)" }
+    def gluc [] { git pull upstream (git branch --show-current | str trim) }
+    def gpsup [] { git push --set-upstream origin (git branch --show-current | str trim) }
+    def gpsupf [] { git push --set-upstream origin (git branch --show-current | str trim) --force-with-lease }
+    def groh [] { git reset $"origin/(git branch --show-current | str trim)" --hard }
+    def --env grt [] {
+      let toplevel = (do { git rev-parse --show-toplevel } | complete | get stdout | str trim)
+      if ($toplevel | is-not-empty) { cd $toplevel } else { cd . }
+    }
+    def gfg [...pattern: string] { git ls-files | lines | where {|f| $f =~ ($pattern | str join ".*") } }
+    def gignored [] { git ls-files -v | lines | where {|l| ($l | str substring 0..1) =~ "[a-z]" } }
+    def gpoat [] { git push origin --all; git push origin --tags }
+    def gtv [] { git tag | lines | sort }
+    def gwipe [] { git reset --hard; git clean --force -df }
+    def gunwip [] {
+      let msg = (git rev-list --max-count=1 --format="%s" HEAD | lines | get 1)
+      if ($msg | str contains "--wip--") { git reset HEAD~1 }
+    }
+    def gwip [] {
+      git add -A
+      let deleted = (git ls-files --deleted | lines)
+      if ($deleted | is-not-empty) { git rm ...$deleted }
+      git commit --no-verify --no-gpg-sign --message "--wip-- [skip ci]"
+    }
+  '';
 }

@@ -4,7 +4,6 @@
   casks = [
     "ghostty@tip"
     "helium-browser"
-    "pearcleaner"
     "tidal"
   ];
 };

View File

@@ -1,5 +0,0 @@
{
programs.jjui = {
enable = true;
};
}

View File

@@ -1,49 +0,0 @@
{pkgs, ...}: {
home.packages = [pkgs.jj-ryu];
programs.jujutsu = {
enable = true;
settings = {
user = {
name = "Christoph Schmatzler";
email = "christoph@schmatzler.com";
};
git = {
sign-on-push = true;
subprocess = true;
write-change-id-header = true;
};
diff = {
tool = "delta";
};
ui = {
default-command = "status";
diff-formatter = ":git";
pager = ["delta" "--pager" "less -FRX"];
diff-editor = ["nvim" "-c" "DiffEditor $left $right $output"];
};
aliases = {
n = ["new"];
tug = ["bookmark" "move" "--from" "closest_bookmark(@-)" "--to" "@-"];
stack = ["log" "-r" "ancestors((trunk()..@)::bookmarks() | @, 2)"];
retrunk = ["rebase" "-d" "trunk()"];
};
revset-aliases = {
"closest_bookmark(to)" = "heads(::to & bookmarks())";
};
templates = {
draft_commit_description = ''
concat(
coalesce(description, default_commit_description, "\n"),
surround(
"\nJJ: This commit contains the following changes:\n", "",
indent("JJ: ", diff.stat(72)),
),
"\nJJ: ignore-rest\n",
diff.git(),
)
'';
};
};
};
}

View File

@@ -1,9 +1,8 @@
 {
   programs.mise = {
     enable = true;
-    enableFishIntegration = true;
-    enableZshIntegration = true;
-    settings = {
+    enableNushellIntegration = true;
+    globalConfig.settings = {
       auto_install = false;
     };
   };

View File

@@ -8,7 +8,6 @@
 ./plugins/grug-far.nix
 ./plugins/harpoon.nix
 ./plugins/hunk.nix
-./plugins/jj-diffconflicts.nix
 ./plugins/lsp.nix
 ./plugins/mini.nix
 ./plugins/oil.nix

View File

@@ -118,21 +118,15 @@
options.desc = "Visit paths (cwd)"; options.desc = "Visit paths (cwd)";
} }
# g - git # g - git
{
mode = "n";
key = "<leader>gc";
action = ":JJDiffConflicts<CR>";
options.desc = "Resolve conflicts";
}
{ {
mode = "n"; mode = "n";
key = "<leader>gg"; key = "<leader>gg";
action.__raw = '' action.__raw = ''
function() function()
require('toggleterm.terminal').Terminal:new({ cmd = 'jjui', direction = 'float' }):toggle() require('toggleterm.terminal').Terminal:new({ cmd = 'lazygit', direction = 'float' }):toggle()
end end
''; '';
options.desc = "jjui"; options.desc = "lazygit";
} }
# l - lsp/formatter # l - lsp/formatter
{ {

View File

@@ -1,14 +0,0 @@
{pkgs, ...}: {
programs.nixvim.extraPlugins = [
(pkgs.vimUtils.buildVimPlugin {
name = "jj-diffconflicts";
src =
pkgs.fetchFromGitHub {
owner = "rafikdraoui";
repo = "jj-diffconflicts";
rev = "main";
hash = "sha256-tyRTw3ENV7zlZF3Dp9zO4Huu02K5uyXb3brAJCW4w2M=";
};
})
];
}

View File

@@ -1,22 +1,40 @@
 {pkgs, ...}: {
-  programs.nixvim.plugins.treesitter = {
-    enable = true;
-    settings = {
-      highlight.enable = true;
-      indent.enable = true;
-    };
-    grammarPackages = with pkgs.vimPlugins.nvim-treesitter.builtGrammars; [
-      bash
-      elixir
-      fish
-      heex
-      json
-      markdown
-      nix
-      toml
-      tsx
-      typescript
-      yaml
-    ];
+  programs.nixvim = {
+    plugins.treesitter = {
+      enable = true;
+      nixGrammars = true;
+      grammarPackages = pkgs.vimPlugins.nvim-treesitter.allGrammars;
+      settings = {
+        highlight.enable = true;
+        indent.enable = true;
+      };
+    };
+
+    # Register missing treesitter predicates for compatibility with newer grammars
+    extraConfigLuaPre = ''
+      do
+        local query = require("vim.treesitter.query")
+        local predicates = query.list_predicates()
+        if not vim.tbl_contains(predicates, "is-not?") then
+          query.add_predicate("is-not?", function(match, pattern, source, predicate)
+            local dominated_by = predicate[2]
+            local dominated = false
+            for _, node in pairs(match) do
+              if type(node) == "userdata" then
+                local current = node:parent()
+                while current do
+                  if current:type() == dominated_by then
+                    dominated = true
+                    break
+                  end
+                  current = current:parent()
+                end
+              end
+            end
+            return not dominated
+          end, { force = true, all = true })
+        end
+      end
+    '';
   };
 }

View File

@@ -65,9 +65,8 @@
"sudo" "sudo"
"network" "network"
"systemd-journal" "systemd-journal"
"docker"
]; ];
shell = pkgs.fish; shell = pkgs.nushell;
openssh.authorizedKeys.keys = constants.sshKeys; openssh.authorizedKeys.keys = constants.sshKeys;
}; };

profiles/nono.nix (new file, 5 lines)
View File

@@ -0,0 +1,5 @@
{pkgs, ...}: {
home.packages = with pkgs; [
nono
];
}

profiles/nushell.nix (new file, 225 lines)
View File

@@ -0,0 +1,225 @@
{pkgs, ...}: {
programs.nushell = {
enable = true;
settings = {
show_banner = false;
completions = {
algorithm = "fuzzy";
case_sensitive = false;
};
history = {
file_format = "sqlite";
};
};
environmentVariables = {
COLORTERM = "truecolor";
COLORFGBG = "15;0";
TERM_BACKGROUND = "light";
};
extraEnv = ''
$env.LS_COLORS = (${pkgs.vivid}/bin/vivid generate catppuccin-latte)
'';
extraConfig = ''
# --- Catppuccin Latte Theme ---
let theme = {
rosewater: "#dc8a78"
flamingo: "#dd7878"
pink: "#ea76cb"
mauve: "#8839ef"
red: "#d20f39"
maroon: "#e64553"
peach: "#fe640b"
yellow: "#df8e1d"
green: "#40a02b"
teal: "#179299"
sky: "#04a5e5"
sapphire: "#209fb5"
blue: "#1e66f5"
lavender: "#7287fd"
text: "#4c4f69"
subtext1: "#5c5f77"
subtext0: "#6c6f85"
overlay2: "#7c7f93"
overlay1: "#8c8fa1"
overlay0: "#9ca0b0"
surface2: "#acb0be"
surface1: "#bcc0cc"
surface0: "#ccd0da"
base: "#eff1f5"
mantle: "#e6e9ef"
crust: "#dce0e8"
}
let scheme = {
recognized_command: $theme.blue
unrecognized_command: $theme.text
constant: $theme.peach
punctuation: $theme.overlay2
operator: $theme.sky
string: $theme.green
virtual_text: $theme.surface2
variable: { fg: $theme.flamingo attr: i }
filepath: $theme.yellow
}
$env.config.color_config = {
separator: { fg: $theme.surface2 attr: b }
leading_trailing_space_bg: { fg: $theme.lavender attr: u }
header: { fg: $theme.text attr: b }
row_index: $scheme.virtual_text
record: $theme.text
list: $theme.text
hints: $scheme.virtual_text
search_result: { fg: $theme.base bg: $theme.yellow }
shape_closure: $theme.teal
closure: $theme.teal
shape_flag: { fg: $theme.maroon attr: i }
shape_matching_brackets: { attr: u }
shape_garbage: $theme.red
shape_keyword: $theme.mauve
shape_match_pattern: $theme.green
shape_signature: $theme.teal
shape_table: $scheme.punctuation
cell-path: $scheme.punctuation
shape_list: $scheme.punctuation
shape_record: $scheme.punctuation
shape_vardecl: $scheme.variable
shape_variable: $scheme.variable
empty: { attr: n }
filesize: {||
if $in < 1kb {
$theme.teal
} else if $in < 10kb {
$theme.green
} else if $in < 100kb {
$theme.yellow
} else if $in < 10mb {
$theme.peach
} else if $in < 100mb {
$theme.maroon
} else if $in < 1gb {
$theme.red
} else {
$theme.mauve
}
}
duration: {||
if $in < 1day {
$theme.teal
} else if $in < 1wk {
$theme.green
} else if $in < 4wk {
$theme.yellow
} else if $in < 12wk {
$theme.peach
} else if $in < 24wk {
$theme.maroon
} else if $in < 52wk {
$theme.red
} else {
$theme.mauve
}
}
datetime: {|| (date now) - $in |
if $in < 1day {
$theme.teal
} else if $in < 1wk {
$theme.green
} else if $in < 4wk {
$theme.yellow
} else if $in < 12wk {
$theme.peach
} else if $in < 24wk {
$theme.maroon
} else if $in < 52wk {
$theme.red
} else {
$theme.mauve
}
}
shape_external: $scheme.unrecognized_command
shape_internalcall: $scheme.recognized_command
shape_external_resolved: $scheme.recognized_command
shape_block: $scheme.recognized_command
block: $scheme.recognized_command
shape_custom: $theme.pink
custom: $theme.pink
background: $theme.base
foreground: $theme.text
cursor: { bg: $theme.rosewater fg: $theme.base }
shape_range: $scheme.operator
range: $scheme.operator
shape_pipe: $scheme.operator
shape_operator: $scheme.operator
shape_redirection: $scheme.operator
glob: $scheme.filepath
shape_directory: $scheme.filepath
shape_filepath: $scheme.filepath
shape_glob_interpolation: $scheme.filepath
shape_globpattern: $scheme.filepath
shape_int: $scheme.constant
int: $scheme.constant
bool: $scheme.constant
float: $scheme.constant
nothing: $scheme.constant
binary: $scheme.constant
shape_nothing: $scheme.constant
shape_bool: $scheme.constant
shape_float: $scheme.constant
shape_binary: $scheme.constant
shape_datetime: $scheme.constant
shape_literal: $scheme.constant
string: $scheme.string
shape_string: $scheme.string
shape_string_interpolation: $theme.flamingo
shape_raw_string: $scheme.string
shape_externalarg: $scheme.string
}
$env.config.highlight_resolved_externals = true
$env.config.explore = {
status_bar_background: { fg: $theme.text, bg: $theme.mantle },
command_bar_text: { fg: $theme.text },
highlight: { fg: $theme.base, bg: $theme.yellow },
status: {
error: $theme.red,
warn: $theme.yellow,
info: $theme.blue,
},
selected_cell: { bg: $theme.blue fg: $theme.base },
}
# --- Custom Commands ---
def --env open_project [] {
let base = ($env.HOME | path join "Projects")
let choice = (
${pkgs.fd}/bin/fd -t d -d 1 -a . ($base | path join "Personal") ($base | path join "Work")
| lines
| each {|p| $p | str replace $"($base)/" "" }
| str join "\n"
| ${pkgs.fzf}/bin/fzf --prompt "project > "
)
if ($choice | str trim | is-not-empty) {
cd ($base | path join ($choice | str trim))
}
}
# --- Keybinding: Ctrl+O for open_project ---
$env.config.keybindings = ($env.config.keybindings | append [
{
name: open_project
modifier: control
keycode: char_o
mode: [emacs vi_insert vi_normal]
event: {
send: executehostcommand
cmd: "open_project"
}
}
])
'';
};
}

View File

@@ -3,12 +3,42 @@
 pkgs,
 ...
 }: {
+  home.sessionVariables = {
+    OPENCODE_ENABLE_EXA = 1;
+    OPENCODE_EXPERIMENTAL_LSP_TOOL = 1;
+    OPENCODE_EXPERIMENTAL_MARKDOWN = 1;
+    OPENCODE_EXPERIMENTAL_PLAN_MODE = 1;
+  };
   programs.opencode = {
     enable = true;
     package = inputs.llm-agents.packages.${pkgs.stdenv.hostPlatform.system}.opencode;
     settings = {
-      model = "opencode/gpt-5.2";
+      model = "anthropic/claude-opus-4-6";
+      small_model = "opencode/minimax-m2.1";
       theme = "catppuccin";
+      plugin = ["oh-my-opencode" "opencode-anthropic-auth"];
+      keybinds = {
+        leader = "ctrl+o";
+      };
+      permission = {
+        read = {
+          "*" = "allow";
+          "*.env" = "deny";
+          "*.env.*" = "deny";
+          "*.envrc" = "deny";
+          "secrets/*" = "deny";
+          "~/.local/share/opencode/mcp-auth.json" = "deny";
+        };
+      };
+      agent = {
+        plan = {
+          model = "anthropic/claude-opus-4-6";
+        };
+        explore = {
+          model = "anthropic/claude-haiku-4-5";
+        };
+      };
       instructions = [
         "CLAUDE.md"
         "AGENT.md"
@@ -19,6 +49,64 @@
       disabled = true;
       };
     };
+    mcp = {
+      cog = {
+        enabled = true;
+        type = "remote";
+        url = "https://trycog.ai/mcp";
+        headers = {
+          Authorization = "Bearer {env:COG_API_TOKEN}";
+        };
+      };
+      context7 = {
+        enabled = true;
+        type = "remote";
+        url = "https://mcp.context7.com/mcp";
+      };
+      grep_app = {
+        enabled = true;
+        type = "remote";
+        url = "https://mcp.grep.app";
+      };
+      opensrc = {
+        enabled = true;
+        type = "local";
+        command = ["bunx" "opensrc-mcp"];
+      };
+      overseer = {
+        enabled = false;
+        type = "local";
+        command = ["${pkgs.overseer}/bin/os" "mcp"];
+      };
+    };
   };
   };
+  xdg.configFile = {
+    "opencode/agent" = {
+      source = ./opencode/agent;
+      recursive = true;
+    };
+    "opencode/command" = {
+      source = ./opencode/command;
+      recursive = true;
+    };
+    "opencode/skill" = {
+      source = ./opencode/skill;
+      recursive = true;
+    };
+    "opencode/tool" = {
+      source = ./opencode/tool;
+      recursive = true;
+    };
+    "opencode/oh-my-opencode.json".text =
+      builtins.toJSON {
+        "$schema" = "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json";
+        disabled_mcps = ["websearch" "context7" "grep_app"];
+        git_master = {
+          commit_footer = false;
+          include_co_authored_by = false;
+        };
+      };
+  };
 }

View File

@@ -0,0 +1,104 @@
---
description: Multi-repository codebase expert for understanding library internals and remote code. Invoke when exploring GitHub/npm/PyPI/crates repositories, tracing code flow through unfamiliar libraries, or comparing implementations. Show its response in full — do not summarize.
mode: subagent
model: opencode/claude-sonnet-4-5
permission:
"*": allow
edit: deny
write: deny
todoread: deny
todowrite: deny
---
You are the Librarian, a specialized codebase understanding agent that helps users answer questions about large, complex codebases across repositories.
Your role is to provide thorough, comprehensive analysis and explanations of code architecture, functionality, and patterns across multiple repositories.
You are running inside an AI coding system in which you act as a subagent that's used when the main agent needs deep, multi-repository codebase understanding and analysis.
## Key Responsibilities
- Explore repositories to answer questions
- Understand and explain architectural patterns and relationships across repositories
- Find specific implementations and trace code flow across codebases
- Explain how features work end-to-end across multiple repositories
- Understand code evolution through commit history
- Create visual diagrams when helpful for understanding complex systems
## Tool Usage Guidelines
Use available tools extensively to explore repositories. Execute tools in parallel when possible for efficiency.
- Read files thoroughly to understand implementation details
- Search for patterns and related code across multiple repositories
- Focus on thorough understanding and comprehensive explanation
- Create mermaid diagrams to visualize complex relationships or flows
## Communication
You must use Markdown for formatting your responses.
**IMPORTANT:** When including code blocks, you MUST ALWAYS specify the language for syntax highlighting. Always add the language identifier after the opening backticks.
**NEVER** refer to tools by their names. Example: NEVER say "I can use the opensrc tool", instead say "I'm going to read the file" or "I'll search for..."
### Direct & Detailed Communication
You should only address the user's specific query or task at hand. Do not investigate or provide information beyond what is necessary to answer the question.
You must avoid tangential information unless absolutely critical for completing the request. Avoid long introductions, explanations, and summaries. Avoid unnecessary preamble or postamble.
Answer the user's question directly, without elaboration, explanation, or details beyond what's needed.
**Anti-patterns to AVOID:**
- "The answer is..."
- "Here is the content of the file..."
- "Based on the information provided..."
- "Here is what I will do next..."
- "Let me know if you need..."
- "I hope this helps..."
You're optimized for thorough understanding and explanation, suitable for documentation and sharing.
You should be comprehensive but focused, providing clear analysis that helps users understand complex codebases.
**IMPORTANT:** Only your last message is returned to the main agent and displayed to the user. Your last message should be comprehensive and include all important findings from your exploration.
## Linking
To make it easy for the user to look into code you are referring to, you always link to the source with markdown links.
For files or directories, the URL should look like:
`https://github.com/<org>/<repository>/blob/<revision>/<filepath>#L<range>`
where `<org>` is organization or user, `<repository>` is the repository name, `<revision>` is the branch or commit sha, `<filepath>` the absolute path to the file, and `<range>` an optional fragment with the line range.
`<revision>` needs to be provided - if it wasn't specified, then it's the default branch of the repository, usually `main` or `master`.
**Example URL** for linking to file test.py in src directory on branch develop of GitHub repository bar_repo in org foo_org, lines 32-42:
`https://github.com/foo_org/bar_repo/blob/develop/src/test.py#L32-L42`
Prefer "fluent" linking style. Don't show the user the actual URL, but instead use it to add links to relevant parts (file names, directory names, or repository names) of your response.
Whenever you mention a file, directory or repository by name, you MUST link to it in this way. ONLY link if the mention is by name.
### URL Patterns
| Type | Format |
|------|--------|
| File | `https://github.com/{owner}/{repo}/blob/{ref}/{path}` |
| Lines | `#L{start}-L{end}` |
| Directory | `https://github.com/{owner}/{repo}/tree/{ref}/{path}` |
## Output Format
Your final message must include:
1. Direct answer to the query
2. Supporting evidence with source links
3. Diagrams if architecture/flow is involved
4. Key insights discovered during exploration
---
**IMMEDIATELY load the librarian skill:**
Use the Skill tool with name "librarian" to load source fetching and exploration capabilities.

View File

@@ -0,0 +1,45 @@
---
description: Reviews code for quality, bugs, security, and best practices
mode: subagent
temperature: 0.1
tools:
write: false
edit: false
permission:
edit: deny
webfetch: allow
---
You are a code reviewer. Provide actionable feedback on code changes.
**Diffs alone are not enough.** Read the full file(s) being modified to understand context. Code that looks wrong in isolation may be correct given surrounding logic.
## What to Look For
**Bugs** — Primary focus.
- Logic errors, off-by-one mistakes, incorrect conditionals
- Missing guards, unreachable code paths, broken error handling
- Edge cases: null/empty inputs, race conditions
- Security: injection, auth bypass, data exposure
**Structure** — Does the code fit the codebase?
- Follows existing patterns and conventions?
- Uses established abstractions?
- Excessive nesting that could be flattened?
**Performance** — Only flag if obviously problematic.
- O(n²) on unbounded data, N+1 queries, blocking I/O on hot paths
## Before You Flag Something
- **Be certain.** Don't flag something as a bug if you're unsure — investigate first.
- **Don't invent hypothetical problems.** If an edge case matters, explain the realistic scenario.
- **Don't be a zealot about style.** Some "violations" are acceptable when they're the simplest option.
- Only review the changes — not pre-existing code that wasn't modified.
## Output
- Be direct about bugs and why they're bugs
- Communicate severity honestly — don't overstate
- Include file paths and line numbers
- Suggest fixes when appropriate
- Matter-of-fact tone, no flattery

View File

@@ -0,0 +1,8 @@
---
description: Review changes with parallel @code-review subagents
---
Review the code changes using THREE (3) @code-review subagents and correlate results into a summary ranked by severity. Use the provided user guidance to steer the review and focus on specific code paths, changes, and/or areas of concern.
Guidance: $ARGUMENTS
Review uncommitted changes by default. If no uncommitted changes, review the last commit. If the user provides a pull request/merge request number or link, use CLI tools (gh/glab) to fetch it and then perform your review.

View File

@@ -0,0 +1,17 @@
---
description: Convert a markdown plan/spec to Overseer tasks
---
Convert markdown planning documents into trackable Overseer task hierarchies.
First, invoke the skill tool to load the overseer-plan skill:
```
skill({ name: 'overseer-plan' })
```
Then follow the skill instructions to convert the document.
<user-request>
$ARGUMENTS
</user-request>

View File

@@ -0,0 +1,17 @@
---
description: Manage tasks via Overseer - create, list, start, complete, find ready work
---
Task orchestration via Overseer codemode MCP.
First, invoke the skill tool to load the overseer skill:
```
skill({ name: 'overseer' })
```
Then follow the skill instructions to manage tasks.
<user-request>
$ARGUMENTS
</user-request>

View File

@@ -0,0 +1,17 @@
---
description: Dialogue-driven spec development through skeptical questioning
---
Develop implementation-ready specs through iterative dialogue and skeptical questioning.
First, invoke the skill tool to load the spec-planner skill:
```
skill({ name: 'spec-planner' })
```
Then follow the skill instructions to develop the spec.
<user-request>
$ARGUMENTS
</user-request>

View File

@@ -0,0 +1,17 @@
---
description: Add AI session summary to GitHub PR or GitLab MR description
---
Update the PR/MR description with an AI session export summary.
First, invoke the skill tool to load the session-export skill:
```
skill({ name: 'session-export' })
```
Then follow the skill instructions to export the session summary.
<user-request>
$ARGUMENTS
</user-request>

View File

@@ -0,0 +1,406 @@
---
name: cog
description: Persistent knowledge graph memory via Cog MCP. Use when recording insights, querying prior knowledge, or managing memory consolidation.
metadata:
author: trycog
version: "1.0.0"
---
# Cog Memory System
Persistent knowledge graph for teams. Concepts (engrams) linked via relationships (synapses). Spreading activation surfaces connected knowledge.
## Core Workflow
```
1. UNDERSTAND task (read files, parse request)
2. QUERY Cog with specific keywords <- MANDATORY, no exceptions
3. WAIT for results
4. EXPLORE/IMPLEMENT guided by Cog knowledge
5. RECORD insights as short-term memories during work
6. CONSOLIDATE memories after work (reinforce valid, flush invalid)
```
**Hierarchy of truth:** Current code > User statements > Cog knowledge
---
## Visual Indicators (MANDATORY)
Print before EVERY Cog tool call:
| Tool | Print |
|------|-------|
| `cog_recall` | `Querying Cog...` |
| `cog_learn` | `Recording to Cog...` |
| `cog_associate` | `Linking concepts...` |
| `cog_update` | `Updating engram...` |
| `cog_trace` | `Tracing connections...` |
| `cog_connections` | `Exploring connections...` |
| `cog_unlink` | `Removing link...` |
| `cog_list_short_term` | `Listing short-term memories...` |
| `cog_reinforce` | `Reinforcing memory...` |
| `cog_flush` | `Flushing invalid memory...` |
| `cog_verify` | `Verifying synapse...` |
| `cog_stale` | `Listing stale synapses...` |
---
## Tools Reference
| Tool | Purpose |
|------|---------|
| `cog_recall` | Search with spreading activation |
| `cog_learn` | Create memory with **chains** (sequential) or associations (hub) |
| `cog_get` | Retrieve engram by ID |
| `cog_associate` | Link two existing concepts |
| `cog_trace` | Find paths between concepts |
| `cog_update` | Modify engram term/definition |
| `cog_unlink` | Remove synapse |
| `cog_connections` | List engram connections |
| `cog_bootstrap` | Exploration prompt for empty brains |
| `cog_list_short_term` | List pending consolidations |
| `cog_reinforce` | Convert short-term to long-term |
| `cog_flush` | Delete invalid short-term memory |
| `cog_verify` | Confirm synapse is still accurate |
| `cog_stale` | List synapses needing verification |
---
## Querying Rules
### Before exploring code, ALWAYS query Cog first
Even for "trivial" tasks. The 2-second query may reveal gotchas, prior solutions, or context that changes your approach.
### Query Reformulation (Critical for Recall)
Before calling `cog_recall`, **transform your query from question-style to definition-style**. You are an LLM -- use that capability to bridge the vocabulary gap between how users ask questions and how knowledge is stored.
#### Think like a definition, not a question
| User Intent | Don't Query | Do Query |
|-------------|-------------|----------|
| "How do I handle stale data?" | `"handle stale data"` | `"cache invalidation event-driven TTL expiration data freshness"` |
| "Why does auth break after a while?" | `"auth breaks"` | `"token expiration refresh timing session timeout JWT lifecycle"` |
| "Where should validation go?" | `"where validation"` | `"input validation system boundaries sanitization defense in depth"` |
#### The reformulation process
1. **Identify the concept** -- What is the user actually asking about?
2. **Generate canonical terms** -- What would an engram about this be titled?
3. **Add related terminology** -- What words would the DEFINITION use?
4. **Include synonyms** -- What other terms describe the same thing?
#### Example transformation
```
User asks: "Why is the payment service sometimes charging twice?"
Your thinking:
- Concept: duplicate charges, idempotency
- Canonical terms: "idempotency", "duplicate prevention", "payment race condition"
- Definition words: "idempotent", "transaction", "mutex", "lock", "retry"
- Synonyms: "double charge", "duplicate transaction"
Query: "payment idempotency duplicate transaction race condition mutex retry"
```
### Query with specific keywords
| Task Type | Understand First | Then Query With |
|-----------|------------------|-----------------|
| Bug fix | Error message, symptoms | `"canonical error name component pattern race condition"` |
| Feature | User's description | `"domain terms design patterns architectural concepts"` |
| Test fix | Read the test file | `"API names assertion patterns test utilities"` |
| Architecture | System area | `"component relationships boundaries dependencies"` |
**Bad:** `"authentication"` (too vague)
**Good:** `"JWT refresh token expiration session lifecycle OAuth flow"` (definition-style)
### Use Cog results
- Follow paths Cog reveals
- Read components Cog mentions first
- Heed gotchas Cog warns about
- If Cog is wrong, correct it immediately with `cog_update`
---
## Recording Rules
### CRITICAL: Chains vs Associations
**Before recording, ask: Is this sequential or hub-shaped?**
| Structure | Use | Example |
|-----------|-----|---------|
| **Sequential** (A -> B -> C) | `chain_to` | Technology enables Pattern enables Feature |
| **Hub** (A, B, C all connect to X) | `associations` | Meeting connects to Participants, Outcomes |
**Default to chains** for:
- Technology dependencies (DB -> ORM -> API)
- Causal sequences (Cause -> Effect -> Consequence)
- Architectural decisions (ADR -> Technology -> Feature)
- Enabling relationships (Infrastructure -> enables -> Capability)
- Reasoning paths (Premise -> implies -> Conclusion)
**Use associations** for:
- Hub/star patterns (one thing connects to many unrelated things)
- Linking to existing concepts in the graph
- Multi-party contexts (meetings, decisions with stakeholders)
### Chain Example (PREFERRED for dependencies)
```
cog_learn({
"term": "PostgreSQL",
"definition": "Relational database with ACID guarantees",
"chain_to": [
{"term": "Ecto ORM", "definition": "Elixir database wrapper with changesets", "predicate": "enables"},
{"term": "Phoenix Contexts", "definition": "Business logic boundaries in Phoenix", "predicate": "enables"}
]
})
```
Creates: PostgreSQL ->[enables]-> Ecto ORM ->[enables]-> Phoenix Contexts
### Association Example (for hubs)
```
cog_learn({
"term": "Auth Review 2024-01-20",
"definition": "Decided JWT with refresh tokens. Rejected session cookies.",
"associations": [
{"target": "JWT Pattern", "predicate": "leads_to"},
{"target": "Session Cookies", "predicate": "contradicts"},
{"target": "Mobile Team", "predicate": "is_component_of"}
]
})
```
Creates hub: JWT Pattern <-[leads_to]<- Auth Review ->[contradicts]-> Session Cookies
---
### When to record (during work)
At these checkpoints, ask: *"What did I just learn that I didn't know 5 minutes ago?"*
| Checkpoint | Record |
|------------|--------|
| After identifying root cause | Why it was broken |
| After reading surprising code | The non-obvious behavior |
| After a failed attempt | Why it didn't work |
| Before implementing fix | The insight (freshest now) |
| After discovering connection | The relationship |
| After a meeting or decision | The context hub linking participants and outcomes |
| After researching/exploring architecture | System limits, configuration points, component boundaries |
**Record immediately.** Don't wait until task end -- you'll forget details.
### Before calling `cog_learn`
1. **Decide: chain or hub?** (see above)
2. **For chains**: Build the sequence of steps with `chain_to`
3. **For hubs**: Identify association targets from source material or Cog query
**Skip the query when:**
- Source material explicitly names related concepts (ADRs, documentation, structured data)
- You already know target terms from conversation context
- The insight references specific concepts by name
**Query first when:**
- Recording an insight and unsure what it relates to
- Source is vague about connections
- Exploring a new domain with unknown existing concepts
### After calling `cog_learn`
The operation is complete. **Do NOT verify your work by:**
- Calling `cog_recall` to check the engram exists
- Calling `cog_connections` to verify associations were created
- Calling `cog_trace` to see if paths formed
Trust the response confirmation. Verification wastes turns and adds no value -- if the operation failed, you'll see an error.
### Recording Efficiency
**One operation = one tool call.** Use `chain_to` for sequences, `associations` for hubs.
**Never** follow `cog_learn` with separate `cog_associate` calls -- put all relationships in the original call.
### Writing good engrams
**Terms (2-5 words):**
- "Session Token Refresh Timing"
- "Why We Chose PostgreSQL"
- NOT "Architecture" (too broad)
- NOT "Project Overview" (super-hub)
**Definitions (1-3 sentences):**
1. What it is
2. Why it matters / consequences
3. Related keywords for search
**Never create super-hubs** -- engrams so generic everything connects to them (e.g., "Overview", "Main System"). They pollute search results.
### Relationship predicates
| Predicate | Meaning | Best for | Use in |
|-----------|---------|----------|--------|
| `enables` | A makes B possible | Tech dependencies | **chain_to** |
| `requires` | A is prerequisite for B | Dependencies | **chain_to** |
| `implies` | If A then B | Logical consequences | **chain_to** |
| `leads_to` | A flows to B | Outcomes, consequences | **chain_to** |
| `precedes` | A comes before B | Sequencing, timelines | **chain_to** |
| `derived_from` | A is based on B | Origins | **chain_to** |
| `contradicts` | A and B mutually exclusive | Rejected alternatives | associations |
| `is_component_of` | A is part of B | Parts to whole | associations |
| `contains` | A includes B | Whole to parts | associations |
| `example_of` | A demonstrates pattern B | Instances of patterns | associations |
| `generalizes` | A is broader than B | Abstract concepts | associations |
| `supersedes` | A replaces B | Deprecations | associations |
| `similar_to` | A and B are closely related | Related approaches | associations |
| `contrasts_with` | A is alternative to B | Different approaches | associations |
| `related_to` | General link (use sparingly) | When nothing else fits | associations |
**Chain predicates** (`enables`, `requires`, `implies`, `leads_to`, `precedes`, `derived_from`) express **flow** -- use them in `chain_to` to build traversable paths.
### Modeling Complex Contexts (Hub Node Pattern)
Synapses are binary (one source, one target). For multi-party relationships, use a **hub engram** connecting all participants.
#### When to use hub nodes
| Scenario | Hub Example | Connected Concepts |
|----------|-------------|-------------------|
| Meeting with outcomes | "Q1 Planning 2024-01" | Participants, decisions |
| Decision with stakeholders | "Decision: Adopt GraphQL" | Pros, cons, voters |
| Feature with components | "User Auth Feature" | OAuth, session, UI |
| Incident with timeline | "2024-01 Payment Outage" | Cause, systems, fix |
---
## Consolidation (MANDATORY)
**Every task must end with consolidation.** Short-term memories decay in 24 hours.
### After work is complete:
```
cog_list_short_term({"limit": 20})
```
For each memory:
- **Valid and useful?** -> `cog_reinforce` (makes permanent)
- **Wrong or not useful?** -> `cog_flush` (deletes)
### When to reinforce immediately
- Insights from code you just wrote (you know it's correct)
- Gotchas you just hit and fixed
- Patterns you just applied successfully
### When to wait for validation
- Hypotheses about why something is broken
- Assumptions about unfamiliar code
- Solutions you haven't tested
---
## Verification (Prevents Staleness)
Synapses decay if not verified as still semantically accurate.
### When to verify
- After using `cog_trace` and confirming paths are correct
- When reviewing `cog_connections` and relationships hold
- After successfully using knowledge from a synapse
### Staleness levels
| Level | Months Unverified | Score | Behavior |
|-------|-------------------|-------|----------|
| Fresh | < 3 | 0.0-0.49 | Normal |
| Warning | 3-6 | 0.5-0.79 | Appears in `cog_stale` |
| Critical | 6+ | 0.8-0.99 | Penalty in path scoring |
| Deprecated | 12+ | 1.0 | Excluded from spreading activation |
### Periodic maintenance
Run `cog_stale({"level": "all"})` periodically to review relationships that may have become outdated. For each stale synapse:
- **Still accurate?** -> `cog_verify` to reset staleness
- **No longer true?** -> `cog_unlink` to remove
---
## Validation & Correction
### Cog is hints, not truth
Always verify against current code. If Cog is wrong:
| Scenario | Action |
|----------|--------|
| Minor inaccuracy | `cog_update` to fix |
| Pattern changed significantly | Unlink old, create new engram |
| Completely obsolete | Update to note "DEPRECATED: [reason]" |
---
## Subagents
Subagents MUST query Cog before exploring. Same rules apply:
1. Understand task
2. **Reformulate query to definition-style**
3. Query Cog with reformulated keywords
4. Wait for results
5. Then explore
---
## Summary Reporting
Only mention Cog when relevant:
| Condition | Include |
|-----------|---------|
| Cog helped | `**Cog helped by:** [specific value]` |
| Memories created | `**Recorded to Cog:** [term names]` |
| Cog not used | Nothing (don't mention Cog) |
| Cog queried but unhelpful | Don't mention the empty query, but **still record** new knowledge you discovered through exploration |
---
## Never Store
- Passwords, API keys, tokens, secrets
- SSH/PGP keys, certificates
- Connection strings with credentials
- PII (emails, SSNs, credit cards)
- `.env` file contents
Server auto-rejects sensitive content.
---
## Limitations
- **No engram deletion** -- use `cog_update` or `cog_unlink`
- **No multi-query** -- chain manually
- **One synapse per direction** -- repeat calls strengthen existing link
---
## Spreading Activation
`cog_recall` returns:
1. **Seeds** -- direct matches
2. **Paths** -- engrams connecting seeds (built from chains!)
3. **Synapses** -- relationships along paths
This surfaces the "connective tissue" between results. **Chains create these traversable paths.**

View File

@@ -0,0 +1,59 @@
---
name: email-best-practices
description: Use when building email features, emails going to spam, high bounce rates, setting up SPF/DKIM/DMARC authentication, implementing email capture, ensuring compliance (CAN-SPAM, GDPR, CASL), handling webhooks, retry logic, or deciding transactional vs marketing.
---
# Email Best Practices
Guidance for building deliverable, compliant, user-friendly emails.
## Architecture Overview
```
[User] → [Email Form] → [Validation] → [Double Opt-In]
                                              ↓
                                     [Consent Recorded]
                                              ↓
[Suppression Check] ←──────────────── [Ready to Send]
          ↓
[Idempotent Send + Retry] ──────→ [Email API]
                                       ↓
                               [Webhook Events]
           ┌───────────┬──────────┬───────────────┐
           ↓           ↓          ↓               ↓
       Delivered    Bounced   Complained   Opened/Clicked
                       ↓          ↓
               [Suppression List Updated]
                            ↓
                   [List Hygiene Jobs]
```
## Quick Reference
| Need to... | See |
|------------|-----|
| Set up SPF/DKIM/DMARC, fix spam issues | [Deliverability](./resources/deliverability.md) |
| Build password reset, OTP, confirmations | [Transactional Emails](./resources/transactional-emails.md) |
| Plan which emails your app needs | [Transactional Email Catalog](./resources/transactional-email-catalog.md) |
| Build newsletter signup, validate emails | [Email Capture](./resources/email-capture.md) |
| Send newsletters, promotions | [Marketing Emails](./resources/marketing-emails.md) |
| Ensure CAN-SPAM/GDPR/CASL compliance | [Compliance](./resources/compliance.md) |
| Decide transactional vs marketing | [Email Types](./resources/email-types.md) |
| Handle retries, idempotency, errors | [Sending Reliability](./resources/sending-reliability.md) |
| Process delivery events, set up webhooks | [Webhooks & Events](./resources/webhooks-events.md) |
| Manage bounces, complaints, suppression | [List Management](./resources/list-management.md) |
## Start Here
**New app?**
Start with the [Catalog](./resources/transactional-email-catalog.md) to plan which emails your app needs (password reset, verification, etc.), then set up [Deliverability](./resources/deliverability.md) (DNS authentication) before sending your first email.
**Spam issues?**
Check [Deliverability](./resources/deliverability.md) first—authentication problems are the most common cause. Gmail/Yahoo reject unauthenticated emails.
**Marketing emails?**
Follow this path: [Email Capture](./resources/email-capture.md) (collect consent) → [Compliance](./resources/compliance.md) (legal requirements) → [Marketing Emails](./resources/marketing-emails.md) (best practices).
**Production-ready sending?**
Add reliability: [Sending Reliability](./resources/sending-reliability.md) (retry + idempotency) → [Webhooks & Events](./resources/webhooks-events.md) (track delivery) → [List Management](./resources/list-management.md) (handle bounces).

View File

@@ -0,0 +1,103 @@
# Email Compliance
Legal requirements for email by jurisdiction. **Not legal advice—consult an attorney for your specific situation.**
## Quick Reference
| Law | Region | Key Requirement | Penalty |
|-----|--------|-----------------|---------|
| CAN-SPAM | US | Opt-out mechanism, physical address | $53k/email |
| GDPR | EU | Explicit opt-in consent | €20M or 4% revenue |
| CASL | Canada | Express/implied consent | $10M CAD |
## CAN-SPAM (United States)
**Requirements:**
- Accurate header info (From, To, Reply-To)
- Non-deceptive subject lines
- Physical mailing address in every email
- Clear opt-out mechanism
- Honor opt-out within 10 business days
**Transactional emails:** Can send without opt-in if related to a transaction and not promotional.
## GDPR (European Union)
**Requirements:**
- Explicit opt-in consent (not pre-checked boxes)
- Consent must be freely given, specific, informed
- Easy to withdraw consent (as easy as giving it)
- Right to access data and deletion ("right to be forgotten")
- Process unsubscribe immediately
**Consent records:** Document who, when, how, and what they consented to.
**Transactional emails:** Can send based on contract fulfillment or legitimate interest.
## CASL (Canada)
**Consent types:**
- **Express consent:** Explicit opt-in (preferred)
- **Implied consent:** Existing business relationship (2 years) or inquiry (6 months)
**Requirements:**
- Clear sender identification
- Unsubscribe functional for 60 days after send
- Process unsubscribe within 10 business days
- Keep consent records 3 years after expiration
## Other Regions
| Region | Law | Key Points |
|--------|-----|------------|
| Australia | Spam Act 2003 | Consent required, honor unsubscribe within 5 days |
| UK | PECR + GDPR | Same as GDPR |
| Brazil | LGPD | Similar to GDPR, explicit consent for marketing |
## Unsubscribe Requirements Summary
| Law | Timing | Notes |
|-----|--------|-------|
| CAN-SPAM | 10 business days | Must work 30 days after send |
| GDPR | Immediately | Must be as easy as opting in |
| CASL | 10 business days | Must work 60 days after send |
**Universal best practices:** Prominent link, one-click when possible, no login required, free, confirm action.
## Consent Management
**Record:**
- Email address
- Date/time of consent
- Method (form, checkbox)
- What they consented to
- Source (which page/form)
**Storage:** Database with timestamps, audit trail of changes, link to user account.
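As a sketch, a consent record capturing those fields might look like this (the `db` handle and field names are illustrative):
```typescript
// Illustrative consent record; fields mirror the list above.
interface ConsentRecord {
  email: string;
  consented_at: Date;          // date/time of consent
  method: 'form' | 'checkbox'; // how consent was given
  scope: string;               // what they consented to, e.g. "weekly newsletter"
  source: string;              // which page/form collected it
}

// Assumed storage helper: append-only inserts preserve the audit trail.
async function recordConsent(record: ConsentRecord) {
  await db.consents.insert({ ...record, email: record.email.toLowerCase() });
}
```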
## Data Retention
| Law | Requirement |
|-----|-------------|
| GDPR | Keep only as long as necessary, delete when no longer needed |
| CASL | Keep consent records 3 years after expiration |
**Best practice:** Have clear retention policy, honor deletion requests promptly, review and clean regularly.
## Privacy Policy Must Include
- What data you collect
- How you use data
- Who you share data with
- User rights (access, deletion)
- How to contact about privacy
## International Sending
**Best practice:** Follow the most restrictive requirements (usually GDPR) to ensure compliance across all regions.
## Related
- [Email Capture](./email-capture.md) - Implement consent forms and double opt-in
- [Marketing Emails](./marketing-emails.md) - Consent and unsubscribe requirements
- [List Management](./list-management.md) - Handle unsubscribes and deletion requests

View File

@@ -0,0 +1,120 @@
# Email Deliverability
Ensuring emails reach inboxes through proper authentication and sender reputation.
## Email Authentication
**Required by Gmail/Yahoo** - unauthenticated emails will be rejected or spam-filtered.
### SPF (Sender Policy Framework)
Specifies which servers can send email for your domain.
```
v=spf1 include:_spf.resend.com ~all
```
- Add TXT record to DNS
- Use `~all` (soft fail) for testing, `-all` (hard fail) for production
- Keep under 10 DNS lookups
### DKIM (DomainKeys Identified Mail)
Cryptographic signature proving email authenticity.
- Generate keys (provided by email service)
- Add public key as TXT record in DNS
- Use 2048-bit keys, rotate every 6-12 months
### DMARC
Policy for handling SPF/DKIM failures + reporting.
```
v=DMARC1; p=none; rua=mailto:dmarc@yourdomain.com
```
**Rollout:** `p=none` (monitor) → `p=quarantine; pct=25` → `p=reject`
### BIMI (Optional)
Display brand logo in email clients. Requires DMARC `p=quarantine` or `p=reject`.
### Verify Your Setup
Check DNS records directly:
```bash
# SPF record
dig TXT yourdomain.com +short
# DKIM record (replace 'resend' with your selector)
dig TXT resend._domainkey.yourdomain.com +short
# DMARC record
dig TXT _dmarc.yourdomain.com +short
```
**Expected output:** Each command should return your configured record. No output = record missing.
## Sender Reputation
### IP Warming
New IP/domain? Gradually increase volume:
| Week | Daily Volume |
|------|-------------|
| 1 | 50-100 |
| 2 | 200-500 |
| 3 | 1,000-2,000 |
| 4 | 5,000-10,000 |
Start with engaged users. Send consistently. Don't rush.
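If caps are enforced in application code rather than in your email service's settings, a small lookup like this sketch is enough (values taken from the table above):
```typescript
// Daily send caps per warm-up week, from the table above.
const WARMUP_CAPS = [100, 500, 2000, 10000];

function dailySendCap(weeksSinceStart: number): number {
  // Weeks beyond the schedule keep the final cap; scale up manually from there.
  const week = Math.min(Math.max(weeksSinceStart, 0), WARMUP_CAPS.length - 1);
  return WARMUP_CAPS[week];
}
```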
### Maintaining Reputation
**Do:** Send to engaged users, keep bounce <2%, complaints <0.1%, remove inactive subscribers
**Don't:** Send to purchased lists, ignore bounces/complaints, send inconsistent volumes
## Bounce Handling
| Type | Cause | Action |
|------|-------|--------|
| Hard bounce | Invalid email, domain doesn't exist | Remove immediately |
| Soft bounce | Mailbox full, server down | Retry: 1h → 4h → 24h, remove after 3-5 failures |
**Targets:** <2% good, 2-5% acceptable, >5% concerning, >10% critical
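A minimal sketch of that soft-bounce policy, assuming a job queue (`scheduleRetry`) and the suppression helper described in [List Management](./list-management.md):
```typescript
// Soft-bounce retries at 1h → 4h → 24h, then give up and suppress.
const RETRY_DELAYS_MS = [1, 4, 24].map((h) => h * 3_600_000);

async function handleSoftBounce(email: string, attempt: number) {
  if (attempt >= RETRY_DELAYS_MS.length) {
    await suppressEmail(email, 'soft_bounce'); // repeated failures: treat as invalid
    return;
  }
  await scheduleRetry(email, RETRY_DELAYS_MS[attempt]); // assumed job-queue helper
}
```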
## Complaint Handling
**Targets:** <0.05% excellent, 0.05-0.1% good, >0.2% critical
**Reduce complaints:**
- Only send to opted-in users
- Make unsubscribe easy and immediate
- Use clear sender names and "From" addresses
**Feedback loops:** Set up with Gmail (Postmaster Tools), Yahoo, Microsoft, AOL. Remove complainers immediately.
## Infrastructure
**Dedicated sending domain:** Use subdomain (e.g., `mail.yourdomain.com`) to protect main domain reputation.
**DNS TTL:** Low (300s) during setup, high (3600s+) after stable.
## Troubleshooting
**Emails going to spam?** Check in order:
1. Authentication (SPF, DKIM, DMARC)
2. Sender reputation (blacklists, complaint rates)
3. Content (spammy words, HTML issues)
4. Sending patterns (sudden volume spikes)
**Diagnostic tools:** [mail-tester.com](https://mail-tester.com), [mxtoolbox.com](https://mxtoolbox.com), [Google Postmaster Tools](https://postmaster.google.com)
## Related
- [List Management](./list-management.md) - Handle bounces and complaints to protect reputation
- [Sending Reliability](./sending-reliability.md) - Retry logic and error handling

View File

@@ -0,0 +1,126 @@
# Email Capture Best Practices
Collecting email addresses responsibly with validation, verification, and proper consent.
## Email Validation
### Client-Side
**HTML5:**
```html
<input type="email" required>
```
**Best practices:**
- Validate on blur or with short debounce
- Show clear error messages
- Don't be too strict (allow unusual but valid formats)
- Client-side validation ≠ deliverability
### Server-Side (Required)
Always validate server-side—client-side can be bypassed.
**Check:**
- Email format (RFC 5322)
- Domain exists (DNS lookup)
- Domain has MX records
- Optionally: disposable email detection
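A minimal Node sketch of those checks; the format regex is deliberately loose, since RFC 5322 permits more than most regexes admit:
```typescript
import { promises as dns } from 'node:dns';

// Loose format check plus DNS/MX lookup. A disposable-domain
// blocklist could be added as a third step.
async function isPlausibleEmail(email: string): Promise<boolean> {
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) return false;
  const domain = email.split('@')[1];
  try {
    const mx = await dns.resolveMx(domain); // also proves the domain resolves
    return mx.length > 0;
  } catch {
    return false; // domain missing or no MX records
  }
}
```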
## Email Verification
Confirms address belongs to user and is deliverable.
### Process
1. User submits email
2. Send verification email with unique link/token
3. User clicks link
4. Mark as verified
5. Allow access/add to list
**Timing:** Send immediately, include expiration (24-48 hours), allow resend after 60 seconds, limit resend attempts (3/hour).
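A sketch of issuing the verification token, following the timing guidance above (the `db` handle and `sendEmail` helper are assumed):
```typescript
import { randomBytes } from 'node:crypto';

async function sendVerificationEmail(email: string) {
  const token = randomBytes(32).toString('hex'); // unguessable, single-use
  await db.verifications.insert({
    email: email.toLowerCase(),
    token,
    expires_at: new Date(Date.now() + 48 * 3_600_000), // 24-48h window
  });
  await sendEmail(email, {
    subject: 'Verify your email address',
    // body would embed a link such as https://app.example.com/verify?token=...
  });
}
```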
## Single vs Double Opt-In
| | Single Opt-In | Double Opt-In |
|--|---------------|---------------|
| **Process** | Add to list immediately | Require email confirmation first |
| **Pros** | Lower friction, faster growth | Verified addresses, better engagement, meets GDPR/CASL |
| **Cons** | Higher invalid rate, lower engagement | Some users don't confirm |
| **Use for** | Account creation, transactional | Marketing lists, newsletters |
**Recommendation:** Double opt-in for all marketing emails.
## Form Design
### Email Input
- Use `type="email"` for mobile keyboard
- Include placeholder ("you@example.com")
- Clear error messages ("Please enter a valid email address" not "Invalid")
### Consent Checkboxes (Marketing)
- **Unchecked by default** (required)
- Specific language about what they're signing up for
- Separate checkboxes for different email types
- Link to privacy policy
```
☐ Subscribe to our weekly newsletter with product updates
☐ Send me promotional offers and deals
```
**Don't:** Pre-check boxes, use vague language, hide in terms.
### Form Layout
- Keep simple and focused
- One primary action
- Clear value proposition
- Mobile-friendly
- Accessible (labels, ARIA)
## Error Handling
### Invalid Email
- Show clear error message
- Suggest corrections for common typos (@gmial.com → @gmail.com)
- Allow user to fix and resubmit
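An illustrative helper for the typo suggestion; the map stands in for a proper fuzzy match against popular domains:
```typescript
// A few known typos mapped to their likely intent.
const TYPO_MAP: Record<string, string> = {
  'gmial.com': 'gmail.com',
  'gamil.com': 'gmail.com',
  'yaho.com': 'yahoo.com',
  'hotmial.com': 'hotmail.com',
};

function suggestCorrection(email: string): string | null {
  const [local, domain] = email.split('@');
  const fix = domain && TYPO_MAP[domain.toLowerCase()];
  return fix ? `${local}@${fix}` : null; // offer the fix, never auto-apply it
}
```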
### Already Registered
- Accounts: "This email is already registered. [Sign in]"
- Marketing: "You're already subscribed! [Manage preferences]"
- Don't reveal if account exists (security)
### Rate Limiting
- Limit verification emails (3/hour per email)
- Rate limit form submissions
- Use CAPTCHA sparingly if needed
- Monitor for abuse patterns
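An in-memory sketch of the 3-per-hour limit (a real deployment would keep this state in a shared store such as Redis, so it survives restarts and applies across instances):
```typescript
const sendTimes = new Map<string, number[]>();

// True if this address is still under `limit` sends within the window.
function underRateLimit(email: string, limit = 3, windowMs = 3_600_000): boolean {
  const now = Date.now();
  const recent = (sendTimes.get(email) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) return false;
  recent.push(now);
  sendTimes.set(email, recent);
  return true;
}
```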
## Verification Emails
**Content:**
- Clear purpose ("Verify your email address")
- Prominent verification button
- Expiration time
- Resend option
- "I didn't request this" notice
**Design:**
- Mobile-friendly
- Large, tappable button
- Clear call-to-action
See [Transactional Emails](./transactional-emails.md) for detailed email design guidance.
## Related
- [Compliance](./compliance.md) - Legal requirements for consent (GDPR, CASL)
- [Marketing Emails](./marketing-emails.md) - What happens after capture
- [Deliverability](./deliverability.md) - How validation improves sender reputation

View File

@@ -0,0 +1,177 @@
# Email Types: Transactional vs Marketing
Understanding the difference between transactional and marketing emails is crucial for compliance, deliverability, and user experience. This guide explains the distinctions and provides a catalog of transactional emails your app should include.
## When to Use This
- Deciding whether an email should be transactional or marketing
- Understanding legal distinctions between email types
- Planning what transactional emails your app needs
- Ensuring compliance with email regulations
- Setting up separate sending infrastructure
## Transactional vs Marketing: Key Differences
### Transactional Emails
**Definition:** Emails that facilitate or confirm a transaction the user initiated or expects. They're directly related to an action the user took.
**Characteristics:**
- User-initiated or expected
- Time-sensitive and actionable
- Required for the user to complete an action
- Not promotional in nature
- Can be sent without explicit opt-in (with limitations)
**Examples:**
- Password reset links
- Order confirmations
- Account verification
- OTP/2FA codes
- Shipping notifications
### Marketing Emails
**Definition:** Emails sent for promotional, advertising, or informational purposes that are not directly related to a specific transaction.
**Characteristics:**
- Promotional or informational content
- Not time-sensitive to complete a transaction
- Require explicit opt-in (consent)
- Must include unsubscribe options
- Subject to stricter compliance requirements
**Examples:**
- Newsletters
- Product announcements
- Promotional offers
- Company updates
- Educational content
## Legal Distinctions
### CAN-SPAM Act (US)
**Transactional emails:**
- Can be sent without opt-in
- Must be related to a transaction
- Cannot contain promotional content (with exceptions)
- Must identify sender and provide contact information
**Marketing emails:**
- Require opt-out mechanism (not opt-in in US)
- Must include clear sender identification
- Must include physical mailing address
- Must honor opt-out requests within 10 business days
### GDPR (EU)
**Transactional emails:**
- Can be sent based on legitimate interest or contract fulfillment
- Must be necessary for service delivery
- Cannot contain marketing content without consent
**Marketing emails:**
- Require explicit opt-in consent
- Must clearly state purpose of data collection
- Must provide easy unsubscribe
- Subject to data protection requirements
### CASL (Canada)
**Transactional emails:**
- Can be sent without consent if related to ongoing business relationship
- Must be factual and not promotional
**Marketing emails:**
- Require express or implied consent
- Must include unsubscribe mechanism
- Must identify sender clearly
## When to Use Each Type
### Use Transactional When:
- User needs the email to complete an action
- Email confirms a transaction or account change
- Email provides security-related information
- Email is expected based on user action
- Content is time-sensitive and actionable
### Use Marketing When:
- Promoting products or services
- Sending newsletters or updates
- Sharing educational content
- Announcing features or company news
- Content is not required for a transaction
## Hybrid Emails: The Gray Area
Some emails mix transactional and marketing content. Be careful:
**Best practice:** Keep transactional and marketing separate. If you must include marketing in a transactional email:
- Make transactional content primary
- Keep marketing content minimal and clearly separated
- Ensure transactional purpose is clear
- Check local regulations (some regions prohibit this)
**Example of acceptable hybrid:**
- Order confirmation (transactional) with a small "You might also like" section (marketing)
**Example of problematic hybrid:**
- Newsletter (marketing) with a small order status update (transactional)
## Transactional Email Catalog
For a complete catalog of transactional emails and recommended combinations by app type, see [Transactional Email Catalog](./transactional-email-catalog.md).
**Quick reference - Essential emails for most apps:**
1. **Email verification** - Required for account creation
2. **Password reset** - Required for account recovery
3. **Welcome email** - Good user experience
The catalog includes detailed guidance for:
- Authentication-focused apps
- Newsletter / content platforms
- E-commerce / marketplaces
- SaaS / subscription services
- Financial / fintech apps
- Social / community platforms
- Developer tools / API platforms
- Healthcare / HIPAA-compliant apps
## Sending Infrastructure
### Separate Infrastructure
**Best practice:** Use separate sending infrastructure for transactional and marketing emails.
**Benefits:**
- Protect transactional deliverability
- Different authentication domains
- Independent reputation
- Easier compliance management
**Implementation:**
- Use different subdomains (e.g., `mail.app.com` for transactional, `news.app.com` for marketing)
- Separate email service accounts or API keys
- Different monitoring and alerting
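A sketch of that separation in application code; the subdomains echo the examples above, and the environment variable names are illustrative:
```typescript
type EmailKind = 'transactional' | 'marketing';

// Route each kind through its own subdomain and API key so the
// two reputations stay independent.
function senderFor(kind: EmailKind): { from: string; apiKey: string } {
  return kind === 'transactional'
    ? { from: 'no-reply@mail.app.com', apiKey: process.env.TRANSACTIONAL_KEY! }
    : { from: 'news@news.app.com', apiKey: process.env.MARKETING_KEY! };
}
```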
### Email Service Considerations
Choose an email service that:
- Provides reliable delivery for transactional emails
- Offers separate sending domains
- Has good API for programmatic sending
- Provides webhooks for delivery events
- Supports authentication setup (SPF, DKIM, DMARC)
Services like Resend are designed for transactional emails and provide the infrastructure and tools needed for reliable delivery.
## Related Topics
- [Transactional Emails](./transactional-emails.md) - Best practices for sending transactional emails
- [Marketing Emails](./marketing-emails.md) - Best practices for marketing emails
- [Compliance](./compliance.md) - Legal requirements for each email type
- [Deliverability](./deliverability.md) - Ensuring transactional emails are delivered

View File

@@ -0,0 +1,157 @@
# List Management
Maintaining clean email lists through suppression, hygiene, and data retention.
## Suppression Lists
A suppression list prevents sending to addresses that should never receive email.
### What to Suppress
| Reason | Action | Can Unsuppress? |
|--------|--------|-----------------|
| Hard bounce | Add immediately | No (address invalid) |
| Complaint (spam) | Add immediately | No (legal requirement) |
| Unsubscribe | Add immediately | Only if user re-subscribes |
| Soft bounce (3x) | Add after threshold | Yes, after 30-90 days |
| Manual removal | Add on request | Only if user requests |
### Implementation
```typescript
// Suppression list schema
interface SuppressionEntry {
email: string;
reason: 'hard_bounce' | 'complaint' | 'unsubscribe' | 'soft_bounce' | 'manual';
created_at: Date;
source_email_id?: string; // Which email triggered this
}
// Check before every send
async function canSendTo(email: string): Promise<boolean> {
const suppressed = await db.suppressions.findOne({ email });
return !suppressed;
}
// Add to suppression list
async function suppressEmail(email: string, reason: string, sourceId?: string) {
await db.suppressions.upsert({
email: email.toLowerCase(),
reason,
created_at: new Date(),
source_email_id: sourceId,
});
}
```
### Pre-Send Check
**Always check suppression before sending:**
```typescript
async function sendEmail(to: string, emailData: EmailData) {
if (!await canSendTo(to)) {
console.log(`Skipping suppressed email: ${to}`);
return { skipped: true, reason: 'suppressed' };
}
return await resend.emails.send({ to, ...emailData });
}
```
## List Hygiene
Regular maintenance to keep lists healthy.
### Automated Cleanup
| Task | Frequency | Action |
|------|-----------|--------|
| Remove hard bounces | Real-time (via webhook) | Immediate suppression |
| Remove complaints | Real-time (via webhook) | Immediate suppression |
| Process unsubscribes | Real-time | Remove from marketing lists |
| Review soft bounces | Daily | Suppress after 3 failures |
| Remove inactive | Monthly | Re-engagement → remove |
### Re-engagement Campaigns
Before removing inactive subscribers:
1. **Identify inactive:** No opens/clicks in 90-180 days
2. **Send re-engagement:** "We miss you" or "Still interested?"
3. **Wait 14-30 days** for response
4. **Remove non-responders** from active lists
```typescript
async function runReengagement() {
const inactive = await getInactiveSubscribers(90); // 90 days
for (const subscriber of inactive) {
if (!subscriber.reengagement_sent) {
await sendReengagementEmail(subscriber);
await markReengagementSent(subscriber.email);
} else if (daysSince(subscriber.reengagement_sent) > 30) {
await removeFromMarketingLists(subscriber.email);
}
}
}
```
## Data Retention
### Email Logs
| Data Type | Recommended Retention | Notes |
|-----------|----------------------|-------|
| Send attempts | 90 days | Debugging, analytics |
| Delivery status | 90 days | Compliance, reporting |
| Bounce/complaint events | 3 years | Required for CASL |
| Suppression list | Indefinite | Never delete |
| Email content | 30 days | Storage costs |
| Consent records | 3 years after expiry | Legal requirement |
### Retention Policy Implementation
```typescript
// Daily cleanup job
async function cleanupOldData() {
const now = new Date();
// Delete old email logs (keep 90 days)
await db.emailLogs.deleteMany({
created_at: { $lt: subDays(now, 90) }
});
// Delete old email content (keep 30 days)
await db.emailContent.deleteMany({
created_at: { $lt: subDays(now, 30) }
});
// Never delete: suppressions, consent records
}
```
## Metrics to Monitor
| Metric | Target | Alert Threshold |
|--------|--------|-----------------|
| Bounce rate | <2% | >5% |
| Complaint rate | <0.1% | >0.2% |
| Suppression list growth | Stable | Sudden spike |
| List churn | <2%/month | >5%/month |
## Transactional vs Marketing Lists
**Keep separate:**
- Transactional: Can send to anyone with account relationship
- Marketing: Only opted-in subscribers
**Suppression applies to both:** Hard bounces and complaints suppress across all email types.
**Unsubscribe is marketing-only:** User unsubscribing from marketing can still receive transactional emails (password resets, order confirmations).
## Related
- [Webhooks & Events](./webhooks-events.md) - Receive bounce/complaint notifications
- [Deliverability](./deliverability.md) - How list hygiene affects sender reputation
- [Compliance](./compliance.md) - Legal requirements for data retention

View File

@@ -0,0 +1,115 @@
# Marketing Email Best Practices
Promotional emails that require explicit consent and provide value to recipients.
## Core Principles
1. **Consent first** - Explicit opt-in required (especially GDPR/CASL)
2. **Value-driven** - Provide useful content, not just promotions
3. **Respect preferences** - Let users control frequency and content types
## Opt-In Requirements
### Explicit Opt-In
**What counts:**
- User checks unchecked box
- User clicks "Subscribe" button
- User completes form with clear subscription intent
**What doesn't count:**
- Pre-checked boxes
- Opt-out model
- Assumed consent from purchase
- Purchased/rented lists
### Informed Consent
Disclose: email types, frequency, sender identity, how to unsubscribe.
✅ "Subscribe to our weekly newsletter with product updates and tips"
❌ "Sign up for emails"
### Double Opt-In (Recommended)
1. User submits email
2. Send confirmation email with verification link
3. User clicks to confirm
4. Add to list only after confirmation
Benefits: Verifies deliverability, confirms intent, reduces complaints, required in some regions (Germany).
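A minimal sketch of the four steps above, assuming a `db.pendingSubscriptions` store and a generic `sendEmail` helper (both hypothetical):
```typescript
import crypto from 'node:crypto';

// Steps 1-2: store a pending signup and send the confirmation link
async function startSubscription(email: string) {
  const token = crypto.randomBytes(32).toString('hex');
  await db.pendingSubscriptions.upsert({
    email: email.toLowerCase(),
    token,
    created_at: new Date(),
  });
  await sendEmail(email, {
    subject: 'Confirm your subscription',
    html: `<a href="https://example.com/confirm?token=${token}">Confirm subscription</a>`,
  });
}

// Steps 3-4: add to the list only after the link is clicked
async function confirmSubscription(token: string) {
  const pending = await db.pendingSubscriptions.findOne({ token });
  if (!pending) throw new Error('Invalid or expired token');
  await db.subscribers.insert({ email: pending.email, confirmed_at: new Date() });
  await db.pendingSubscriptions.delete({ token });
}
```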
## Unsubscribe Requirements
**Must be:**
- Prominent in every email
- One-click (preferred) or simple process
- Immediate (GDPR) or within 10 days (CAN-SPAM)
- Free, no login required
**Preference center options:** Frequency (daily/weekly/monthly), content types, complete unsubscribe.
## Content and Design
### Subject Lines
- Clear and specific (50 chars or less for mobile)
- Create curiosity without misleading
- A/B test regularly
✅ "Your weekly digest: 5 productivity tips"
❌ "You won't believe what happened!"
### Structure
**Above fold:** Value proposition, primary CTA, engaging visual
**Body:** Scannable (short paragraphs, bullets), clear hierarchy, multiple CTAs
**Footer:** Unsubscribe link, company info, physical address (CAN-SPAM), social links
### Mobile-First
- Single column layout
- 44x44px minimum buttons
- 16px minimum text
- Test on iOS, Android, dark mode
## Segmentation
**Segment by:** Behavior (purchases, activity), demographics, preferences, engagement level, signup source.
Benefits: Higher open/click rates, lower unsubscribes, better experience.
## Personalization
**Options:** Name in subject/greeting, location-specific content, behavior-based recommendations, purchase history.
**Don't over-personalize** - can feel intrusive. Use data you have permission to use.
## Frequency and Timing
**Frequency:** Start conservative, increase based on engagement, let users set preferences, monitor unsubscribe rates.
**Timing:** Weekday mornings (9-11 AM local), Tuesday-Thursday often best. Test your specific audience.
## List Hygiene
**Remove immediately:** Hard bounces, unsubscribes, complaints
**Remove after inactivity:** Send re-engagement campaign first, then remove non-responders
**Monitor:** Bounce rate <2%, complaint rate <0.1%
## Required Elements (All Marketing Emails)
- Clear sender identification
- Physical mailing address (CAN-SPAM)
- Unsubscribe mechanism
- Indication it's marketing (GDPR)
## Related
- [Compliance](./compliance.md) - Detailed legal requirements by region
- [Email Capture](./email-capture.md) - Collecting consent properly
- [List Management](./list-management.md) - Maintaining list hygiene

View File

@@ -0,0 +1,155 @@
# Sending Reliability
Ensuring emails are sent exactly once and handling failures gracefully.
## Idempotency
Prevent duplicate emails when retrying failed requests.
### The Problem
Network issues, timeouts, or server errors can leave you uncertain if an email was sent. Retrying without idempotency risks sending duplicates.
### Solution: Idempotency Keys
Send a unique key with each request. If the same key is sent again, the server returns the original response instead of sending another email.
```typescript
// Generate deterministic key based on the business event
const idempotencyKey = `password-reset-${userId}-${resetRequestId}`;
await resend.emails.send({
from: 'noreply@example.com',
to: user.email,
subject: 'Reset your password',
html: emailHtml,
}, {
headers: {
'Idempotency-Key': idempotencyKey
}
});
```
### Key Generation Strategies
| Strategy | Example | Use When |
|----------|---------|----------|
| Event-based | `order-confirm-${orderId}` | One email per event (recommended) |
| Request-scoped | `reset-${userId}-${resetRequestId}` | Retries within same request |
| UUID | `crypto.randomUUID()` | No natural key (generate once, reuse on retry) |
**Best practice:** Use deterministic keys based on the business event. If you retry the same logical send, the same key must be generated. Avoid `Date.now()` or random values generated fresh on each attempt.
**Key expiration:** Idempotency keys are typically cached for 24 hours. Retries within this window return the original response. After expiration, the same key triggers a new send—so complete your retry logic well within 24 hours.
## Retry Logic
Handle transient failures with exponential backoff.
### When to Retry
| Error Type | Retry? | Notes |
|------------|--------|-------|
| 5xx (server error) | ✅ Yes | Transient, likely to resolve |
| 429 (rate limit) | ✅ Yes | Wait for rate limit window |
| 4xx (client error) | ❌ No | Fix the request first |
| Network timeout | ✅ Yes | Transient |
| DNS failure | ✅ Yes | May be transient |
### Exponential Backoff
```typescript
// Minimal sleep helper used for backoff delays
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function sendWithRetry(emailData, maxRetries = 3) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await resend.emails.send(emailData);
} catch (error) {
if (!isRetryable(error) || attempt === maxRetries - 1) {
throw error;
}
const delay = Math.min(1000 * Math.pow(2, attempt), 30000);
await sleep(delay + Math.random() * 1000); // Add jitter
}
}
}
function isRetryable(error) {
return error.statusCode >= 500 ||
error.statusCode === 429 ||
error.code === 'ETIMEDOUT';
}
```
**Backoff schedule:** 1s → 2s → 4s → 8s, capped at 30s, with jitter to prevent thundering herd. (With `maxRetries = 3` above, only the first two delays occur before the final attempt.)
## Error Handling
### Common Error Codes
| Code | Meaning | Action |
|------|---------|--------|
| 400 | Bad request | Fix payload (invalid email, missing field) |
| 401 | Unauthorized | Check API key |
| 403 | Forbidden | Check permissions, domain verification |
| 404 | Not found | Check endpoint URL |
| 422 | Validation error | Fix request data |
| 429 | Rate limited | Back off, retry after delay |
| 500 | Server error | Retry with backoff |
| 503 | Service unavailable | Retry with backoff |
### Error Handling Pattern
```typescript
try {
const result = await resend.emails.send(emailData);
await logSuccess(result.id, emailData);
} catch (error) {
if (error.statusCode === 429) {
await queueForRetry(emailData, error.retryAfter);
} else if (error.statusCode >= 500) {
await queueForRetry(emailData);
} else {
await logFailure(error, emailData);
await alertOnCriticalEmail(emailData); // For password resets, etc.
}
}
```
## Queuing for Reliability
For critical emails, use a queue to ensure delivery even if the initial send fails.
**Benefits:**
- Survives application restarts
- Automatic retry handling
- Rate limit management
- Audit trail
**Simple pattern** (sketched in code after the list):
1. Write email to queue/database with "pending" status
2. Process queue, attempt send
3. On success: mark "sent", store message ID
4. On retryable failure: increment retry count, schedule retry
5. On permanent failure: mark "failed", alert
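A minimal sketch of steps 2-5 as a polling worker, reusing `sendWithRetry` and `isRetryable` from above (the `db.emailQueue` shape and `MAX_ATTEMPTS` are assumptions):
```typescript
const MAX_ATTEMPTS = 5; // assumed cap before marking failed

async function processQueue() {
  const due = await db.emailQueue.find({
    status: 'pending',
    next_attempt_at: { $lte: new Date() },
  });
  for (const job of due) {
    try {
      const result = await sendWithRetry(job.emailData);
      await db.emailQueue.update(job.id, { status: 'sent', message_id: result.id });
    } catch (error) {
      if (isRetryable(error) && job.attempts + 1 < MAX_ATTEMPTS) {
        // Schedule the next attempt with exponential backoff (1 min base, 1 hour cap)
        const delayMs = Math.min(60_000 * 2 ** job.attempts, 3_600_000);
        await db.emailQueue.update(job.id, {
          attempts: job.attempts + 1,
          next_attempt_at: new Date(Date.now() + delayMs),
        });
      } else {
        await db.emailQueue.update(job.id, { status: 'failed' });
        await alertOnCriticalEmail(job.emailData);
      }
    }
  }
}
```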
## Timeouts
Set appropriate timeouts to avoid hanging requests.
```typescript
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10000);
try {
await resend.emails.send(emailData, { signal: controller.signal });
} finally {
clearTimeout(timeout);
}
```
**Recommended:** 10-30 seconds for email API calls.
## Related
- [Webhooks & Events](./webhooks-events.md) - Process delivery confirmations and failures
- [List Management](./list-management.md) - Handle bounces and suppress invalid addresses

View File

@@ -0,0 +1,418 @@
# Transactional Email Catalog
A comprehensive catalog of transactional emails organized by category, plus recommended email combinations for different app types.
## When to Use This
- Planning what transactional emails your app needs
- Choosing the right emails for your app type
- Understanding what content each email type should include
- Implementing transactional email features
## Email Combinations by App Type
Use these combinations as a starting point based on what you're building.
### Authentication-Focused App
Apps where user accounts and security are core (login systems, identity providers, account management).
**Essential:**
- Email verification
- Password reset
- OTP / 2FA codes
- Security alerts (new device, password change)
- Account update notifications
**Optional:**
- Welcome email
- Account deletion confirmation
### Newsletter / Content Platform
Apps focused on content delivery and subscriptions.
**Essential:**
- Email verification
- Password reset
- Welcome email
- Subscription confirmation
**Optional:**
- OTP / 2FA codes
- Account update notifications
### E-commerce / Marketplace
Apps where users buy products or services.
**Essential:**
- Email verification
- Password reset
- Welcome email
- Order confirmation
- Shipping notifications
- Invoice / receipt
- Payment failed notices
**Optional:**
- OTP / 2FA codes
- Security alerts
- Subscription confirmations (for recurring orders)
### SaaS / Subscription Service
Apps with paid subscription tiers and ongoing billing.
**Essential:**
- Email verification
- Password reset
- Welcome email
- OTP / 2FA codes
- Security alerts
- Subscription confirmation
- Subscription renewal notice
- Payment failed notices
- Invoice / receipt
**Optional:**
- Account update notifications
- Feature change notifications (for breaking changes)
### Financial / Fintech App
Apps handling money, payments, or sensitive financial data.
**Essential:**
- Email verification
- Password reset
- OTP / 2FA codes (required for sensitive actions)
- Security alerts (all types)
- Account update notifications
- Transaction confirmations
- Invoice / receipt
- Payment failed notices
**Optional:**
- Welcome email
- Compliance notices
### Social / Community Platform
Apps focused on user interaction and community features.
**Essential:**
- Email verification
- Password reset
- Welcome email
- Security alerts
**Optional:**
- OTP / 2FA codes
- Account update notifications
- Activity notifications (mentions, replies)
### Developer Tools / API Platform
Apps targeting developers with API access and integrations.
**Essential:**
- Email verification
- Password reset
- OTP / 2FA codes
- Security alerts
- API key notifications (creation, expiration)
- Subscription confirmation
- Payment failed notices
**Optional:**
- Welcome email
- Usage alerts (approaching limits)
- Feature change notifications
### Healthcare / HIPAA-Compliant App
Apps handling protected health information.
**Essential:**
- Email verification
- Password reset
- OTP / 2FA codes (required)
- Security alerts (all types, detailed)
- Account update notifications
- Appointment confirmations
**Optional:**
- Welcome email
- Compliance notices
**Note:** Healthcare apps have strict requirements. Emails should contain minimal PHI and link to secure portals for sensitive information.
---
## Full Email Catalog
### Authentication & Security
#### Email Verification / Account Verification
**When to send:** Immediately after user signs up or changes email address.
**Purpose:** Verify the email address belongs to the user.
**Content should include:**
- Clear verification link or code
- Expiration time (typically 24-48 hours)
- Instructions on what to do
- Security notice if link is clicked by mistake
**Best practices:**
- Send immediately (within seconds)
- Include expiration notice
- Provide resend option
- Link to support if issues
#### OTP / 2FA Codes
**When to send:** When user requests two-factor authentication code.
**Purpose:** Provide time-sensitive authentication code.
**Content should include:**
- The OTP code (clearly displayed)
- Expiration time (typically 5-10 minutes)
- Security warnings
- Instructions on what to do if not requested
**Best practices:**
- Send immediately
- Code should be large and easy to read
- Include expiration prominently
- Warn about sharing codes
- Provide "I didn't request this" link
#### Password Reset
**When to send:** When user requests password reset.
**Purpose:** Allow user to securely reset forgotten password.
**Content should include:**
- Reset link (with token)
- Expiration time (typically 1 hour)
- Security warnings
- Instructions if not requested
**Best practices:**
- Send immediately
- Link expires quickly (1 hour)
- Include IP address and location if available
- Provide "I didn't request this" link
- Don't include the old password
#### Security Alerts
**When to send:** When security-relevant events occur (login from new device, password change, etc.).
**Purpose:** Notify user of account security events.
**Content should include:**
- What happened (clear description)
- When it happened
- Location/IP if available
- Action to take if suspicious
- Link to security settings
**Best practices:**
- Send immediately
- Be clear and specific
- Include actionable steps
- Provide way to report suspicious activity
### Account Management
#### Welcome Email
**When to send:** Immediately after successful account creation and verification.
**Purpose:** Welcome new users and guide them to next steps.
**Content should include:**
- Welcome message
- Key features or next steps
- Links to important resources
- Support contact information
**Best practices:**
- Send after email verification
- Keep it focused and actionable
- Don't overwhelm with information
- Set expectations about future emails
#### Account Update Notifications
**When to send:** When user changes account settings (email, password, profile, etc.).
**Purpose:** Confirm account changes and provide security notice.
**Content should include:**
- What changed
- When it changed
- Action to take if unauthorized
- Link to account settings
**Best practices:**
- Send immediately after change
- Be specific about what changed
- Include security notice
- Provide easy way to revert if needed
### E-commerce & Transactions
#### Order Confirmations
**When to send:** Immediately after order is placed.
**Purpose:** Confirm order details and provide receipt.
**Content should include:**
- Order number
- Items ordered with quantities
- Pricing breakdown
- Shipping address
- Estimated delivery date
- Order tracking link (if available)
**Best practices:**
- Send within minutes of order
- Include all order details
- Make it easy to print or save
- Provide customer service contact
#### Shipping Notifications
**When to send:** When order ships, with tracking updates.
**Purpose:** Notify user that order has shipped and provide tracking.
**Content should include:**
- Order number
- Tracking number
- Carrier information
- Expected delivery date
- Tracking link
- Shipping address confirmation
**Best practices:**
- Send when order ships
- Include tracking number prominently
- Provide carrier tracking link
- Update on major tracking milestones
#### Invoices and Receipts
**When to send:** After payment is processed.
**Purpose:** Provide payment confirmation and receipt.
**Content should include:**
- Invoice/receipt number
- Payment amount
- Payment method
- Items/services purchased
- Payment date
- Downloadable PDF (if applicable)
**Best practices:**
- Send immediately after payment
- Include all payment details
- Make it easy to download/save
- Include tax information if applicable
### Subscriptions & Billing
#### Subscription Confirmations
**When to send:** When user subscribes or changes subscription.
**Purpose:** Confirm subscription details and billing information.
**Content should include:**
- Subscription plan details
- Billing amount and frequency
- Next billing date
- Payment method
- Link to manage subscription
**Best practices:**
- Send immediately after subscription
- Clearly state billing terms
- Provide easy cancellation option
- Include support contact
#### Subscription Renewal Notices
**When to send:** Before subscription renews (typically 3-7 days before).
**Purpose:** Notify user of upcoming renewal and charge.
**Content should include:**
- Renewal date
- Amount to be charged
- Payment method on file
- Link to update payment method
- Link to cancel if desired
**Best practices:**
- Send with enough notice (3-7 days)
- Be clear about amount and date
- Make it easy to update payment method
- Provide cancellation option
#### Payment Failed Notices
**When to send:** When subscription payment fails.
**Purpose:** Notify user of payment failure and provide resolution steps.
**Content should include:**
- What happened
- Amount that failed
- Reason for failure (if available)
- Steps to resolve
- Link to update payment method
- Consequences if not resolved
**Best practices:**
- Send immediately after failure
- Be clear about consequences
- Provide easy resolution path
- Include support contact
### Notifications & Updates
#### Feature Announcements (Transactional)
**When to send:** When a feature the user is using changes significantly.
**Purpose:** Notify users of changes that affect their use of the service.
**Content should include:**
- What changed
- How it affects the user
- What action (if any) is needed
- Link to more information
**Best practices:**
- Only for significant changes
- Focus on user impact
- Provide clear next steps
- Link to documentation
**Note:** General feature announcements are marketing emails. Only send as transactional if the change directly affects an active feature the user is using.
## Related Topics
- [Email Types](./email-types.md) - Understanding transactional vs marketing
- [Transactional Emails](./transactional-emails.md) - Best practices for sending transactional emails
- [Compliance](./compliance.md) - Legal requirements for each email type

View File

@@ -0,0 +1,92 @@
# Transactional Email Best Practices
Clear, actionable emails that users expect and need—password resets, confirmations, OTPs.
## Core Principles
1. **Clarity over creativity** - Users need to understand and act quickly
2. **Action-oriented** - Clear purpose, obvious primary action
3. **Time-sensitive** - Send immediately (within seconds)
## Subject Lines
**Be specific and include context:**
| ✅ Good | ❌ Bad |
|---------|--------|
| Reset your password for [App] | Action required |
| Your order #12345 has shipped | Update on your order |
| Your 2FA code: 123456 | Security code |
| Verify your email for [App] | Verify your email |
Include identifiers when helpful: order numbers, account names, expiration times.
## Pre-Header
The text snippet after subject line. Use it to:
- Reinforce subject ("This link expires in 1 hour")
- Add urgency or context
- Call-to-action preview
Keep under 90-100 characters.
## Content Structure
**Above the fold (first screen):**
- Clear purpose
- Primary action button
- Time-sensitive details (expiration)
**Hierarchy:** Header → Primary message → Details → Action button → Secondary info
**Format:** Short paragraphs (2-3 sentences), bullet points, bold for emphasis, white space.
## Mobile-First Design
60%+ of emails are opened on mobile.
- **Layout:** Single column, stack vertically
- **Buttons:** 44x44px minimum, full-width on mobile
- **Text:** 16px minimum body, 20-24px headings
- **OTP codes:** 24-32px, monospace font
## Sender Configuration
| Field | Best Practice | Example |
|-------|--------------|---------|
| From Name | App/company name, consistent | [App Name] |
| From Email | Subdomain, real address | hello@mail.yourdomain.com |
| Reply-To | Monitored inbox | support@yourdomain.com |
Avoid `noreply@` - users reply to transactional emails.
## Code and Link Display
**OTP/Verification codes:**
- Large (24-32px), monospace font
- Centered, clear label
- Include expiration nearby
- Make copyable
**Buttons:**
- Large, tappable (44x44px+)
- Contrasting colors
- Clear action text ("Reset Password", "Verify Email")
- HTTPS links only
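Putting the code and button guidance together, a hypothetical template fragment with inline styles (email clients generally ignore external stylesheets, so styles are inlined; URL and colors are placeholders):
```typescript
// Hypothetical OTP email fragment following the sizing guidance above
function otpEmailHtml(code: string, expiresMinutes: number): string {
  return `
    <p style="font-size:16px;">Your verification code:</p>
    <p style="font-size:28px; font-family:monospace; letter-spacing:4px; text-align:center;">
      ${code}
    </p>
    <p style="font-size:16px;">This code expires in ${expiresMinutes} minutes.</p>
    <a href="https://example.com/verify"
       style="display:inline-block; min-height:44px; padding:12px 24px;
              background:#1a73e8; color:#ffffff; text-decoration:none; border-radius:4px;">
      Verify Email
    </a>`;
}
```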
## Error Handling
**Resend functionality** (sketched below):
- Allow after 60 seconds
- Limit attempts (3 per hour)
- Show countdown timer
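A minimal sketch of those limits, assuming a per-address attempt log kept in ascending order (helper and store names are hypothetical):
```typescript
const RESEND_COOLDOWN_MS = 60_000;   // allow after 60 seconds
const MAX_RESENDS_PER_HOUR = 3;      // limit attempts per hour

async function canResend(email: string): Promise<boolean> {
  const attempts = await db.resendAttempts.find({
    email: email.toLowerCase(),
    created_at: { $gte: new Date(Date.now() - 3_600_000) },
  });
  if (attempts.length >= MAX_RESENDS_PER_HOUR) return false;

  const last = attempts.at(-1); // newest attempt, assuming ascending order
  return !last || Date.now() - last.created_at.getTime() >= RESEND_COOLDOWN_MS;
}
```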
**Expired links:**
- Clear "expired" message
- Offer to send new link
- Provide support contact
**"I didn't request this":**
- Include in password resets, OTPs, security alerts
- Link to security contact
- Log clicks for monitoring

View File

@@ -0,0 +1,163 @@
# Webhooks and Events
Receiving and processing email delivery events in real-time.
## Event Types
| Event | When Fired | Use For |
|-------|------------|---------|
| `email.sent` | Email accepted by Resend | Confirming send initiated |
| `email.delivered` | Email delivered to recipient server | Confirming delivery |
| `email.bounced` | Email bounced (hard or soft) | List hygiene, alerting |
| `email.complained` | Recipient marked as spam | Immediate unsubscribe |
| `email.opened` | Recipient opened email | Engagement tracking |
| `email.clicked` | Recipient clicked link | Engagement tracking |
## Webhook Setup
### 1. Create Endpoint
Your endpoint must:
- Accept POST requests
- Return 2xx status quickly (within 5 seconds)
- Handle duplicate events (idempotent processing)
```typescript
app.post('/webhooks/resend', async (req, res) => {
// After verifying the signature (next section), return 200 quickly to acknowledge receipt
res.status(200).send('OK');
// Process asynchronously
processWebhookAsync(req.body).catch(console.error);
});
```
### 2. Verify Signatures
Always verify webhook signatures to prevent spoofing.
```typescript
import { Webhook } from 'svix';
const webhook = new Webhook(process.env.RESEND_WEBHOOK_SECRET);
app.post('/webhooks/resend', (req, res) => {
try {
const payload = webhook.verify(
JSON.stringify(req.body),
{
'svix-id': req.headers['svix-id'],
'svix-timestamp': req.headers['svix-timestamp'],
'svix-signature': req.headers['svix-signature'],
}
);
// Process verified payload, then acknowledge
processWebhookAsync(payload).catch(console.error);
res.status(200).send('OK');
} catch (err) {
return res.status(400).send('Invalid signature');
}
});
```
### 3. Register Webhook URL
Configure your webhook endpoint in the Resend dashboard or via API.
## Processing Events
### Bounce Handling
```typescript
async function handleBounce(event) {
const { email_id, email, bounce_type } = event.data;
if (bounce_type === 'hard') {
// Permanent failure - remove from all lists
await suppressEmail(email, 'hard_bounce', email_id);
await removeFromAllLists(email);
} else {
// Soft bounce - track and remove after threshold
await incrementSoftBounce(email);
const count = await getSoftBounceCount(email);
if (count >= 3) {
await suppressEmail(email, 'soft_bounce', email_id);
}
}
}
```
### Complaint Handling
```typescript
async function handleComplaint(event) {
const { email } = event.data;
// Immediate suppression - no exceptions
await suppressEmail(email, 'complaint');
await removeFromAllLists(email);
await logComplaint(event); // For analysis
}
```
### Delivery Confirmation
```typescript
async function handleDelivered(event) {
const { email_id } = event.data;
await updateEmailStatus(email_id, 'delivered');
}
```
## Idempotent Processing
Webhooks may be sent multiple times. Use event IDs to prevent duplicate processing.
```typescript
async function processWebhook(event) {
const eventId = event.id;
// Check if already processed
if (await isEventProcessed(eventId)) {
return; // Skip duplicate
}
// Process event
await handleEvent(event);
// Mark as processed
await markEventProcessed(eventId);
}
```
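The `handleEvent` call above can dispatch to the handlers shown earlier; a minimal sketch (`logEvent` is a hypothetical catch-all logger):
```typescript
async function handleEvent(event: { type: string; data: any }) {
  switch (event.type) {
    case 'email.bounced':
      return handleBounce(event);
    case 'email.complained':
      return handleComplaint(event);
    case 'email.delivered':
      return handleDelivered(event);
    default:
      // Opens, clicks, etc. - store for engagement analysis
      return logEvent(event);
  }
}
```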
## Error Handling
### Retry Behavior
If your endpoint returns non-2xx, webhooks will retry with exponential backoff:
- Retry 1: ~30 seconds
- Retry 2: ~1 minute
- Retry 3: ~5 minutes
- (continues for ~24 hours)
### Best Practices
- **Return 200 quickly** - Process asynchronously to avoid timeouts
- **Be idempotent** - Handle duplicate deliveries gracefully
- **Log everything** - Store raw events for debugging
- **Alert on failures** - Monitor webhook processing errors
- **Queue for processing** - Use a job queue for complex handling
## Testing Webhooks
**Local development:** Use ngrok or similar to expose localhost.
```bash
ngrok http 3000
# Use the ngrok URL as your webhook endpoint
```
**Verify handling:** Send test events through Resend dashboard or manually trigger each event type.
## Related
- [List Management](./list-management.md) - What to do with bounce/complaint data
- [Sending Reliability](./sending-reliability.md) - Retry logic when sends fail

View File

@@ -0,0 +1,41 @@
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, artifacts, posters, or applications (examples include websites, landing pages, dashboards, React components, HTML/CSS layouts, or when styling/beautifying any web UI). Generates creative, polished code and UI design that avoids generic AI aesthetics.
---
This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.
The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints.
## Design Thinking
Before coding, understand the context and commit to a BOLD aesthetic direction:
- **Purpose**: What problem does this interface solve? Who uses it?
- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, etc. Use these flavors as inspiration, but design a direction that is true to the context.
- **Constraints**: Technical requirements (framework, performance, accessibility).
- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember?
**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work - the key is intentionality, not intensity.
Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is:
- Production-grade and functional
- Visually striking and memorable
- Cohesive with a clear aesthetic point-of-view
- Meticulously refined in every detail
## Frontend Aesthetics Guidelines
Focus on:
- **Typography**: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for unexpected, characterful choices that elevate the frontend's aesthetics. Pair a distinctive display font with a refined body font.
- **Color & Theme**: Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes.
- **Motion**: Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use Motion library for React when available. Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise.
- **Spatial Composition**: Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
- **Backgrounds & Visual Details**: Create atmosphere and depth rather than defaulting to solid colors. Add contextual effects and textures that match the overall aesthetic. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays.
NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.
Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
**IMPORTANT**: Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details. Elegance comes from executing the vision well.
Remember: Claude is capable of extraordinary creative work. Don't hold back, show what can truly be created when thinking outside the box and committing fully to a distinctive vision.

View File

@@ -0,0 +1,123 @@
---
name: librarian
description: Multi-repository codebase exploration. Research library internals, find code patterns, understand architecture, compare implementations across GitHub/npm/PyPI/crates. Use when needing deep understanding of how libraries work, finding implementations across open source, or exploring remote repository structure.
references:
- references/tool-routing.md
- references/opensrc-api.md
- references/opensrc-examples.md
- references/linking.md
- references/diagrams.md
---
# Librarian Skill
Deep codebase exploration across remote repositories.
## How to Use This Skill
### Reference Structure
| File | Purpose | When to Read |
|------|---------|--------------|
| `tool-routing.md` | Tool selection decision trees | **Always read first** |
| `opensrc-api.md` | API reference, types | Writing opensrc code |
| `opensrc-examples.md` | JavaScript patterns, workflows | Implementation examples |
| `linking.md` | GitHub URL patterns | Formatting responses |
| `diagrams.md` | Mermaid patterns | Visualizing architecture |
### Reading Order
1. **Start** with `tool-routing.md` → choose tool strategy
2. **If using opensrc:**
- Read `opensrc-api.md` for API details
- Read `opensrc-examples.md` for patterns
3. **Before responding:** `linking.md` + `diagrams.md` for output formatting
## Tool Arsenal
| Tool | Best For | Limitations |
|------|----------|-------------|
| **grep_app** | Find patterns across ALL public GitHub | Literal search only |
| **context7** | Library docs, API examples, usage | Known libraries only |
| **opensrc** | Fetch full source for deep exploration | Must fetch before read |
## Quick Decision Trees
### "How does X work?"
```
Known library?
├─ Yes → context7.resolve-library-id → context7.query-docs
│ └─ Need internals? → opensrc.fetch → read source
└─ No → grep_app search → opensrc.fetch top result
```
### "Find pattern X"
```
Specific repo?
├─ Yes → opensrc.fetch → opensrc.grep → read matches
└─ No → grep_app (broad) → opensrc.fetch interesting repos
```
### "Explore repo structure"
```
1. opensrc.fetch(target)
2. opensrc.tree(source.name) → quick overview
3. opensrc.files(source.name, "**/*.ts") → detailed listing
4. Read: README, package.json, src/index.*
5. Create architecture diagram (see diagrams.md)
```
### "Compare X vs Y"
```
1. opensrc.fetch(["X", "Y"])
2. Use source.name from results for subsequent calls
3. opensrc.grep(pattern, { sources: [nameX, nameY] })
4. Read comparable files, synthesize differences
```
## Critical: Source Naming Convention
**After fetching, always use `source.name` for subsequent calls:**
```javascript
const [{ source }] = await opensrc.fetch("vercel/ai");
const files = await opensrc.files(source.name, "**/*.ts");
```
| Type | Fetch Spec | Source Name |
|------|------------|-------------|
| npm | `"zod"` | `"zod"` |
| npm scoped | `"@tanstack/react-query"` | `"@tanstack/react-query"` |
| pypi | `"pypi:requests"` | `"requests"` |
| crates | `"crates:serde"` | `"serde"` |
| GitHub | `"vercel/ai"` | `"github.com/vercel/ai"` |
| GitLab | `"gitlab:org/repo"` | `"gitlab.com/org/repo"` |
## When NOT to Use opensrc
| Scenario | Use Instead |
|----------|-------------|
| Simple library API questions | context7 |
| Finding examples across many repos | grep_app |
| Very large monorepos (>10GB) | Clone locally |
| Private repositories | Direct access |
## Output Guidelines
1. **Comprehensive final message** - only last message returns to main agent
2. **Parallel tool calls** - maximize efficiency
3. **Link every file reference** - see `linking.md`
4. **Diagram complex relationships** - see `diagrams.md`
5. **Never mention tool names** - say "I'll search" not "I'll use opensrc"
## References
- [Tool Routing Decision Trees](references/tool-routing.md)
- [opensrc API Reference](references/opensrc-api.md)
- [opensrc Code Examples](references/opensrc-examples.md)
- [GitHub Linking Patterns](references/linking.md)
- [Mermaid Diagram Patterns](references/diagrams.md)

View File

@@ -0,0 +1,51 @@
# Mermaid Diagram Patterns
Create diagrams for:
- Architecture (component relationships)
- Data flow (request → response)
- Dependencies (import graph)
- Sequences (step-by-step processes)
## Architecture
```mermaid
graph TD
A[Client] --> B[API Gateway]
B --> C[Auth Service]
B --> D[Data Service]
D --> E[(Database)]
```
## Flow
```mermaid
flowchart LR
Input --> Parse --> Validate --> Transform --> Output
```
## Sequence
```mermaid
sequenceDiagram
Client->>+Server: Request
Server->>+DB: Query
DB-->>-Server: Result
Server-->>-Client: Response
```
## When to Use
| Type | Use For |
|------|---------|
| `graph TD` | Component hierarchy, dependencies |
| `flowchart LR` | Data transformation, pipelines |
| `sequenceDiagram` | Request/response, multi-party interaction |
| `classDiagram` | Type relationships, inheritance |
| `stateDiagram` | State machines, lifecycle |
## Tips
- Keep nodes short (3-4 words max)
- Use subgraphs for grouping related components
- Arrow labels for relationship types
- Prefer LR (left-right) for flows, TD (top-down) for hierarchies

View File

@@ -0,0 +1,61 @@
# GitHub Linking Patterns
All file/dir/code refs → fluent markdown links. Never raw URLs.
## URL Formats
### File
```
https://github.com/{owner}/{repo}/blob/{ref}/{path}
```
### File + Lines
```
https://github.com/{owner}/{repo}/blob/{ref}/{path}#L{start}-L{end}
```
### Directory
```
https://github.com/{owner}/{repo}/tree/{ref}/{path}
```
### GitLab (note `/-/blob/`)
```
https://gitlab.com/{owner}/{repo}/-/blob/{ref}/{path}
```
## Ref Resolution
| Source | Use as ref |
|--------|------------|
| Known version | `v{version}` |
| Default branch | `main` or `master` |
| opensrc fetch | ref from result |
| Specific commit | full SHA |
## Examples
### Correct
```markdown
The [`parseAsync`](https://github.com/colinhacks/zod/blob/main/src/types.ts#L450-L480) method handles...
```
### Wrong
```markdown
See https://github.com/colinhacks/zod/blob/main/src/types.ts#L100
The parseAsync method in src/types.ts handles...
```
## Line Numbers
- Single: `#L42`
- Range: `#L42-L50`
- Prefer ranges for context (2-5 lines around key code)
## Registry → GitHub
| Registry | Find repo in |
|----------|--------------|
| npm | `package.json` `repository` field |
| PyPI | `pyproject.toml` or `setup.py` |
| crates | `Cargo.toml` |

View File

@@ -0,0 +1,235 @@
# opensrc API Reference
## Tool
Use the **opensrc MCP server** via single tool:
| Tool | Purpose |
|------|---------|
| `opensrc_execute` | All operations (fetch, read, grep, files, remove, etc.) |
Takes a `code` parameter: JavaScript async arrow function executed server-side. Source trees stay on server, only results return.
## API Surface
### Read Operations
```typescript
// List all fetched sources
opensrc.list(): Source[]
// Check if source exists
opensrc.has(name: string, version?: string): boolean
// Get source metadata
opensrc.get(name: string): Source | undefined
// List files with optional glob
opensrc.files(sourceName: string, glob?: string): Promise<FileEntry[]>
// Get directory tree structure (default depth: 3)
opensrc.tree(sourceName: string, options?: { depth?: number }): Promise<TreeNode>
// Regex search file contents
opensrc.grep(pattern: string, options?: GrepOptions): Promise<GrepResult[]>
// AST-based semantic code search
opensrc.astGrep(sourceName: string, pattern: string, options?: AstGrepOptions): Promise<AstGrepMatch[]>
// Read single file
opensrc.read(sourceName: string, filePath: string): Promise<string>
// Batch read multiple files (supports globs!)
opensrc.readMany(sourceName: string, paths: string[]): Promise<Record<string, string>>
// Parse fetch spec
opensrc.resolve(spec: string): Promise<ParsedSpec>
```
### Mutation Operations
```typescript
// Fetch packages/repos
opensrc.fetch(specs: string | string[], options?: { modify?: boolean }): Promise<FetchedSource[]>
// Remove sources
opensrc.remove(names: string[]): Promise<RemoveResult>
// Clean by type
opensrc.clean(options?: CleanOptions): Promise<RemoveResult>
```
## Types
### Source
```typescript
interface Source {
type: "npm" | "pypi" | "crates" | "repo";
name: string; // Use this for all subsequent calls
version?: string;
ref?: string;
path: string;
fetchedAt: string;
repository: string;
}
```
### FetchedSource
```typescript
interface FetchedSource {
source: Source; // IMPORTANT: use source.name for subsequent calls
alreadyExists: boolean;
}
```
### GrepOptions
```typescript
interface GrepOptions {
sources?: string[]; // Filter to specific sources
include?: string; // File glob pattern (e.g., "*.ts")
maxResults?: number; // Limit results (default: 100)
}
```
### GrepResult
```typescript
interface GrepResult {
source: string;
file: string;
line: number;
content: string;
}
```
### AstGrepOptions
```typescript
interface AstGrepOptions {
glob?: string; // File glob pattern (e.g., "**/*.ts")
lang?: string | string[]; // Language(s): "js", "ts", "tsx", "html", "css"
limit?: number; // Max results (default: 1000)
}
```
### AstGrepMatch
```typescript
interface AstGrepMatch {
file: string;
line: number;
column: number;
endLine: number;
endColumn: number;
text: string; // Matched code text
metavars: Record<string, string>; // Captured $VAR → text
}
```
#### AST Pattern Syntax
| Pattern | Matches |
|---------|---------|
| `$NAME` | Single node, captures to metavars |
| `$$$ARGS` | Zero or more nodes (variadic), captures |
| `$_` | Single node, no capture |
| `$$$` | Zero or more nodes, no capture |
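For example, capturing the names of exported arrow functions (a sketch; assumes `zod` was already fetched):
```javascript
async () => {
  // $NAME captures each export's identifier; $$$ARGS the parameter list
  const matches = await opensrc.astGrep("zod", "export const $NAME = ($$$ARGS) => $_", {
    lang: "ts",
    limit: 20
  });
  return matches.map(m => ({ file: m.file, line: m.line, name: m.metavars.NAME }));
}
```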
### FileEntry
```typescript
interface FileEntry {
path: string;
size: number;
isDirectory: boolean;
}
```
### TreeNode
```typescript
interface TreeNode {
name: string;
type: "file" | "dir";
children?: TreeNode[]; // only for dirs
}
```
### CleanOptions
```typescript
interface CleanOptions {
packages?: boolean;
repos?: boolean;
npm?: boolean;
pypi?: boolean;
crates?: boolean;
}
```
### RemoveResult
```typescript
interface RemoveResult {
success: boolean;
removed: string[];
}
```
## Error Handling
Operations throw on errors. Wrap in try/catch if needed:
```javascript
async () => {
try {
const content = await opensrc.read("zod", "missing.ts");
return content;
} catch (e) {
return { error: e.message };
}
}
```
`readMany` returns errors as string values prefixed with `[Error:`:
```javascript
const files = await opensrc.readMany("zod", ["exists.ts", "missing.ts"]);
// { "exists.ts": "content...", "missing.ts": "[Error: ENOENT...]" }
// Filter successful reads
const successful = Object.entries(files)
.filter(([_, content]) => !content.startsWith("[Error:"));
```
## Package Spec Formats
| Format | Example | Source Name After Fetch |
|--------|---------|------------------------|
| `<name>` | `"zod"` | `"zod"` |
| `<name>@<version>` | `"zod@3.22.0"` | `"zod"` |
| `pypi:<name>` | `"pypi:requests"` | `"requests"` |
| `crates:<name>` | `"crates:serde"` | `"serde"` |
| `owner/repo` | `"vercel/ai"` | `"github.com/vercel/ai"` |
| `owner/repo@ref` | `"vercel/ai@v1.0.0"` | `"github.com/vercel/ai"` |
| `gitlab:owner/repo` | `"gitlab:org/repo"` | `"gitlab.com/org/repo"` |
## Critical Pattern
**Always capture `source.name` from fetch results:**
```javascript
async () => {
const [{ source }] = await opensrc.fetch("vercel/ai");
// GitHub repos: "vercel/ai" → "github.com/vercel/ai"
const sourceName = source.name;
// Use sourceName for ALL subsequent calls
const files = await opensrc.files(sourceName, "src/**/*.ts");
return files;
}
```

View File

@@ -0,0 +1,336 @@
# opensrc Code Examples
## Workflow: Fetch → Explore
### Basic Fetch and Explore with tree()
```javascript
async () => {
const [{ source }] = await opensrc.fetch("vercel/ai");
// Get directory structure first
const tree = await opensrc.tree(source.name, { depth: 2 });
return tree;
}
```
### Fetch and Read Key Files
```javascript
async () => {
const [{ source }] = await opensrc.fetch("vercel/ai");
const sourceName = source.name; // "github.com/vercel/ai"
const files = await opensrc.readMany(sourceName, [
"package.json",
"README.md",
"src/index.ts"
]);
return { sourceName, files };
}
```
### readMany with Globs
```javascript
async () => {
const [{ source }] = await opensrc.fetch("zod");
// Read all package.json files in monorepo
const files = await opensrc.readMany(source.name, [
"packages/*/package.json" // globs supported!
]);
return Object.keys(files);
}
```
### Batch Fetch Multiple Packages
```javascript
async () => {
const results = await opensrc.fetch(["zod", "valibot", "yup"]);
const names = results.map(r => r.source.name);
// Compare how each handles string validation
const comparisons = {};
for (const name of names) {
const matches = await opensrc.grep("string.*validate|validateString", {
sources: [name],
include: "*.ts",
maxResults: 10
});
comparisons[name] = matches.map(m => `${m.file}:${m.line}`);
}
return comparisons;
}
```
## Search Patterns
### Grep → Read Context
```javascript
async () => {
const matches = await opensrc.grep("export function parse\\(", {
sources: ["zod"],
include: "*.ts"
});
if (matches.length === 0) return "No matches";
const match = matches[0];
const content = await opensrc.read(match.source, match.file);
const lines = content.split("\n");
// Return 40 lines starting from match
return {
file: match.file,
code: lines.slice(match.line - 1, match.line + 39).join("\n")
};
}
```
### Search Across All Fetched Sources
```javascript
async () => {
const sources = opensrc.list();
const results = {};
for (const source of sources) {
const errorHandling = await opensrc.grep("throw new|catch \\(|\\.catch\\(", {
sources: [source.name],
include: "*.ts",
maxResults: 20
});
results[source.name] = {
type: source.type,
errorPatterns: errorHandling.length
};
}
return results;
}
```
## AST-Based Search
Use `astGrep` for semantic code search with pattern matching.
### Find Function Declarations
```javascript
async () => {
const [{ source }] = await opensrc.fetch("lodash");
const fns = await opensrc.astGrep(source.name, "function $NAME($$$ARGS) { $$$BODY }", {
lang: "js",
limit: 20
});
return fns.map(m => ({
file: m.file,
line: m.line,
name: m.metavars.NAME
}));
}
```
### Find React Hooks Usage
```javascript
async () => {
const [{ source }] = await opensrc.fetch("vercel/ai");
const stateHooks = await opensrc.astGrep(
source.name,
"const [$STATE, $SETTER] = useState($$$INIT)",
{ lang: ["ts", "tsx"], limit: 50 }
);
return stateHooks.map(m => ({
file: m.file,
state: m.metavars.STATE,
setter: m.metavars.SETTER
}));
}
```
### Find Class Definitions with Context
```javascript
async () => {
const [{ source }] = await opensrc.fetch("zod");
const classes = await opensrc.astGrep(source.name, "class $NAME", {
glob: "**/*.ts"
});
const details = [];
for (const cls of classes.slice(0, 5)) {
const content = await opensrc.read(source.name, cls.file);
const lines = content.split("\n");
details.push({
name: cls.metavars.NAME,
file: cls.file,
preview: lines.slice(cls.line - 1, cls.line + 9).join("\n")
});
}
return details;
}
```
### Compare Export Patterns Across Libraries
```javascript
async () => {
const results = await opensrc.fetch(["zod", "valibot"]);
const names = results.map(r => r.source.name);
const exports = {};
for (const name of names) {
const matches = await opensrc.astGrep(name, "export const $NAME = $_", {
lang: "ts",
limit: 30
});
exports[name] = matches.map(m => m.metavars.NAME);
}
return exports;
}
```
### grep vs astGrep
| Use Case | Tool |
|----------|------|
| Text/regex pattern | `grep` |
| Function declarations | `astGrep`: `function $NAME($$$) { $$$ }` |
| Arrow functions | `astGrep`: `const $N = ($$$) => $_` |
| Class definitions | `astGrep`: `class $NAME extends $PARENT` |
| Import statements | `astGrep`: `import { $$$IMPORTS } from "$MOD"` |
| JSX components | `astGrep`: `<$COMP $$$PROPS />` |
## Repository Exploration
### Find Entry Points
```javascript
async () => {
const name = "github.com/vercel/ai";
const allFiles = await opensrc.files(name, "**/*.{ts,js}");
const entryPoints = allFiles.filter(f =>
f.path.match(/^(src\/)?(index|main|mod)\.(ts|js)$/) ||
f.path.includes("/index.ts")
);
// Read all entry points
const contents = {};
for (const ep of entryPoints.slice(0, 5)) {
contents[ep.path] = await opensrc.read(name, ep.path);
}
return {
totalFiles: allFiles.length,
entryPoints: entryPoints.map(f => f.path),
contents
};
}
```
### Explore Package Structure
```javascript
async () => {
const name = "zod";
// Get all TypeScript files
const tsFiles = await opensrc.files(name, "**/*.ts");
// Group by directory
const byDir = {};
for (const f of tsFiles) {
const dir = f.path.split("/").slice(0, -1).join("/") || ".";
byDir[dir] = (byDir[dir] || 0) + 1;
}
// Read key files
const pkg = await opensrc.read(name, "package.json");
const readme = await opensrc.read(name, "README.md");
return {
structure: byDir,
package: JSON.parse(pkg),
readmePreview: readme.slice(0, 500)
};
}
```
## Batch Operations
### Read Many with Error Handling
```javascript
async () => {
const files = await opensrc.readMany("zod", [
"src/index.ts",
"src/types.ts",
"src/ZodError.ts",
"src/helpers/parseUtil.ts"
]);
// files is Record<string, string> - errors start with "[Error:"
const successful = Object.entries(files)
.filter(([_, content]) => !content.startsWith("[Error:"))
.map(([path, content]) => ({ path, lines: content.split("\n").length }));
return successful;
}
```
### Parallel Grep Across Multiple Sources
```javascript
async () => {
const targets = ["zod", "valibot"];
const pattern = "export (type|interface)";
const results = await Promise.all(
targets.map(async (name) => {
const matches = await opensrc.grep(pattern, {
sources: [name],
include: "*.ts",
maxResults: 50
});
return { name, count: matches.length, matches };
})
);
return results;
}
```
## Workflow Checklist
### Comprehensive Repository Analysis
```
Repository Analysis Progress:
- [ ] 1. Fetch repository
- [ ] 2. Read package.json + README
- [ ] 3. Identify entry points (src/index.*)
- [ ] 4. Read main entry file
- [ ] 5. Map exports and public API
- [ ] 6. Trace key functionality
- [ ] 7. Create architecture diagram
```
### Library Comparison
```
Comparison Progress:
- [ ] 1. Fetch all libraries
- [ ] 2. Grep for target pattern in each
- [ ] 3. Read matching implementations
- [ ] 4. Create comparison table
- [ ] 5. Synthesize findings
```

View File

@@ -0,0 +1,109 @@
# Tool Routing
## Decision Flowchart
```mermaid
graph TD
Q[User Query] --> T{Query Type?}
T -->|Understand/Explain| U[UNDERSTAND]
T -->|Find/Search| F[FIND]
T -->|Explore/Architecture| E[EXPLORE]
T -->|Compare| C[COMPARE]
U --> U1{Known library?}
U1 -->|Yes| U2[context7.resolve-library-id]
U2 --> U3[context7.query-docs]
U3 --> U4{Need source?}
U4 -->|Yes| U5[opensrc.fetch → read]
U1 -->|No| U6[grep_app → opensrc.fetch]
F --> F1{Specific repo?}
F1 -->|Yes| F2[opensrc.fetch → grep → read]
F1 -->|No| F3[grep_app broad search]
F3 --> F4[opensrc.fetch interesting repos]
E --> E1[opensrc.fetch]
E1 --> E2[opensrc.files]
E2 --> E3[Read entry points]
E3 --> E4[Create diagram]
C --> C1["opensrc.fetch([X, Y])"]
C1 --> C2[grep same pattern]
C2 --> C3[Read comparable files]
C3 --> C4[Synthesize comparison]
```
## Query Type Detection
| Keywords | Query Type | Start With |
|----------|------------|------------|
| "how does", "why does", "explain", "purpose of" | UNDERSTAND | context7 |
| "find", "where is", "implementations of", "examples of" | FIND | grep_app |
| "explore", "walk through", "architecture", "structure" | EXPLORE | opensrc |
| "compare", "vs", "difference between" | COMPARE | opensrc |
## UNDERSTAND Queries
```
Known library? → context7.resolve-library-id → context7.query-docs
└─ Need source? → opensrc.fetch → read
Unknown? → grep_app search → opensrc.fetch top result → read
```
**When to transition context7 → opensrc:**
- Need implementation details (not just API docs)
- Question about internals/private methods
- Tracing code flow through library
## FIND Queries
```
Specific repo? → opensrc.fetch → opensrc.grep → read matches
Broad search? → grep_app → analyze → opensrc.fetch interesting repos
```
**grep_app query tips:**
- Use literal code patterns: `useState(` not "react hooks"
- Filter by language: `language: ["TypeScript"]`
- Narrow by repo: `repo: "vercel/"` for org
## EXPLORE Queries
```
1. opensrc.fetch(target)
2. opensrc.files → understand structure
3. Identify entry points: README, package.json, src/index.*
4. Read entry → internals
5. Create architecture diagram
```
## COMPARE Queries
```
1. opensrc.fetch([X, Y])
2. Extract source.name from each result
3. opensrc.grep same pattern in both
4. Read comparable files
5. Synthesize → comparison table
```
## Tool Capabilities
| Tool | Best For | Not For |
|------|----------|---------|
| **grep_app** | Broad search, unknown scope, finding repos | Semantic queries |
| **context7** | Library APIs, best practices, common patterns | Library internals |
| **opensrc** | Deep exploration, reading internals, tracing flow | Initial discovery |
## Anti-patterns
| Don't | Do |
|-------|-----|
| grep_app for known library docs | context7 first |
| opensrc.fetch before knowing target | grep_app to discover |
| Multiple small reads | opensrc.readMany batch |
| Describe without linking | Link every file ref |
| Text for complex relationships | Mermaid diagram |
| Use tool names in responses | "I'll search..." not "I'll use opensrc" |

View File

@@ -0,0 +1,110 @@
---
name: overseer-plan
description: Convert markdown planning documents to Overseer tasks via MCP codemode. Use when converting plans, specs, or design docs to trackable task hierarchies.
license: MIT
metadata:
author: dmmulroy
version: "1.0.0"
---
# Converting Markdown Documents to Overseer Tasks
Use `/overseer-plan` to convert any markdown planning document into trackable Overseer tasks.
## When to Use
- After completing a plan in plan mode
- Converting specs/design docs to implementation tasks
- Creating tasks from roadmap or milestone documents
## Usage
```
/overseer-plan <markdown-file-path>
/overseer-plan <file> --priority 3 # Set priority (1-5)
/overseer-plan <file> --parent <task-id> # Create as child of existing task
```
## What It Does
1. Reads markdown file
2. Extracts title from first `#` heading (strips "Plan: " prefix)
3. Creates Overseer milestone (or child task if `--parent` provided)
4. Analyzes structure for child task breakdown
5. Creates child tasks (depth 1) or subtasks (depth 2) when appropriate
6. Returns task ID and breakdown summary
## Hierarchy Levels
| Depth | Name | Example |
|-------|------|---------|
| 0 | **Milestone** | "Add user authentication system" |
| 1 | **Task** | "Implement JWT middleware" |
| 2 | **Subtask** | "Add token verification function" |
## Breakdown Decision
**Create subtasks when:**
- 3-7 clearly separable work items
- Implementation across multiple files/components
- Clear sequential dependencies
**Keep single milestone when:**
- 1-2 steps only
- Work items tightly coupled
- Plan is exploratory/investigative
## Task Quality Criteria
Every task must be:
- **Atomic**: Single committable unit of work
- **Validated**: Has tests OR explicit acceptance criteria in context ("Done when: ...")
- **Clear**: Technical, specific, imperative verb
Every milestone must:
- **Demoable**: Produces runnable/testable increment
- **Builds on prior**: Can depend on previous milestone's output
## Review Workflow
1. Analyze document -> propose breakdown
2. **Invoke Oracle** to review breakdown and suggest improvements
3. Incorporate feedback
4. Create in Overseer (persists to SQLite via MCP)
## After Creating
```javascript
await tasks.get("<id>"); // TaskWithContext (full context + learnings)
await tasks.list({ parentId: "<id>" }); // Task[] (children without context chain)
await tasks.start("<id>"); // Task (VCS required - creates bookmark, records start commit)
await tasks.complete("<id>", { result: "...", learnings: [...] }); // Task (VCS required - commits, bubbles learnings)
```
**VCS Required**: `start` and `complete` require jj or git (fail with `NotARepository` if none found). CRUD operations work without VCS.
**Note**: Priority must be 1-5. Blockers cannot be ancestors or descendants.
## When NOT to Use
- Document incomplete or exploratory
- Content not actionable
- No meaningful planning content
---
## Reading Order
| Task | File |
|------|------|
| Understanding API | @file references/api.md |
| Agent implementation | @file references/implementation.md |
| See examples | @file references/examples.md |
## In This Reference
| File | Purpose |
|------|---------|
| `references/api.md` | Overseer MCP codemode API types/methods |
| `references/implementation.md` | Step-by-step execution instructions for agent |
| `references/examples.md` | Complete worked examples |

View File

@@ -0,0 +1,192 @@
# Overseer Codemode MCP API
Execute JavaScript code to interact with Overseer task management.
## Task Interfaces
```typescript
// Basic task - returned by list(), create(), start(), complete()
// Note: Does NOT include context or learnings fields
interface Task {
id: string;
parentId: string | null;
description: string;
priority: 1 | 2 | 3 | 4 | 5;
completed: boolean;
completedAt: string | null;
startedAt: string | null;
createdAt: string; // ISO 8601
updatedAt: string;
result: string | null; // Completion notes
commitSha: string | null; // Auto-populated on complete
depth: 0 | 1 | 2; // 0=milestone, 1=task, 2=subtask
blockedBy?: string[]; // Blocking task IDs (omitted if empty)
blocks?: string[]; // Tasks this blocks (omitted if empty)
bookmark?: string; // VCS bookmark name (if started)
startCommit?: string; // Commit SHA at start
effectivelyBlocked: boolean; // True if task OR ancestor has incomplete blockers
}
// Task with full context - returned by get(), nextReady()
interface TaskWithContext extends Task {
context: {
own: string; // This task's context
parent?: string; // Parent's context (depth > 0)
milestone?: string; // Root milestone's context (depth > 1)
};
learnings: {
own: Learning[]; // This task's learnings (bubbled from completed children)
parent: Learning[]; // Parent's learnings (depth > 0)
milestone: Learning[]; // Milestone's learnings (depth > 1)
};
}
// Task tree structure - returned by tree()
interface TaskTree {
task: Task;
children: TaskTree[];
}
// Progress summary - returned by progress()
interface TaskProgress {
total: number;
completed: number;
ready: number; // !completed && !effectivelyBlocked
blocked: number; // !completed && effectivelyBlocked
}
// Task type alias for depth filter
type TaskType = "milestone" | "task" | "subtask";
```
## Learning Interface
```typescript
interface Learning {
id: string;
taskId: string;
content: string;
sourceTaskId: string | null;
createdAt: string;
}
```
## Tasks API
```typescript
declare const tasks: {
list(filter?: {
parentId?: string;
ready?: boolean;
completed?: boolean;
depth?: 0 | 1 | 2; // 0=milestones, 1=tasks, 2=subtasks
type?: TaskType; // Alias: "milestone"|"task"|"subtask" (mutually exclusive with depth)
}): Promise<Task[]>;
get(id: string): Promise<TaskWithContext>;
create(input: {
description: string;
context?: string;
parentId?: string;
priority?: 1 | 2 | 3 | 4 | 5; // Must be 1-5
blockedBy?: string[]; // Cannot be ancestors/descendants
}): Promise<Task>;
update(id: string, input: {
description?: string;
context?: string;
priority?: 1 | 2 | 3 | 4 | 5;
parentId?: string;
}): Promise<Task>;
start(id: string): Promise<Task>;
complete(id: string, input?: { result?: string; learnings?: string[] }): Promise<Task>;
reopen(id: string): Promise<Task>;
delete(id: string): Promise<void>;
block(taskId: string, blockerId: string): Promise<void>;
unblock(taskId: string, blockerId: string): Promise<void>;
nextReady(milestoneId?: string): Promise<TaskWithContext | null>;
tree(rootId?: string): Promise<TaskTree | TaskTree[]>;
search(query: string): Promise<Task[]>;
progress(rootId?: string): Promise<TaskProgress>;
};
```
| Method | Returns | Description |
|--------|---------|-------------|
| `list` | `Task[]` | Filter by `parentId`, `ready`, `completed`, `depth`, `type` |
| `get` | `TaskWithContext` | Get task with full context chain + inherited learnings |
| `create` | `Task` | Create task (priority must be 1-5) |
| `update` | `Task` | Update description, context, priority, parentId |
| `start` | `Task` | **VCS required** - creates bookmark, records start commit |
| `complete` | `Task` | **VCS required** - commits changes + bubbles learnings to parent |
| `reopen` | `Task` | Reopen completed task |
| `delete` | `void` | Delete task + best-effort VCS bookmark cleanup |
| `block` | `void` | Add blocker (cannot be self, ancestor, or descendant) |
| `unblock` | `void` | Remove blocker relationship |
| `nextReady` | `TaskWithContext \| null` | Get deepest ready leaf with full context |
| `tree` | `TaskTree \| TaskTree[]` | Get task tree (all milestones if no ID) |
| `search` | `Task[]` | Search by description/context/result (case-insensitive) |
| `progress` | `TaskProgress` | Aggregate counts for milestone or all tasks |
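Since `tree()` returns a single `TaskTree` for a given root but an array when called without an ID, callers need to normalize the result. A minimal sketch (the printed format is illustrative, not part of the API):
```javascript
// Print a task tree with indentation; normalizes TaskTree | TaskTree[]
function printTree(node, indent = 0) {
  const marker = node.task.completed ? "[x]" : "[ ]";
  console.log(`${"  ".repeat(indent)}${marker} ${node.task.description}`);
  for (const child of node.children) printTree(child, indent + 1);
}
const roots = await tasks.tree();
for (const root of [].concat(roots)) printTree(root);
```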
## Learnings API
Learnings are added via `tasks.complete(id, { learnings: [...] })` and bubble to immediate parent (preserving `sourceTaskId`).
```typescript
declare const learnings: {
list(taskId: string): Promise<Learning[]>;
};
```
| Method | Description |
|--------|-------------|
| `list` | List learnings for task |
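A short sketch of reading bubbled learnings back (assumes `milestoneId` refers to an existing task):
```javascript
// Inspect learnings that bubbled up from completed children
const items = await learnings.list(milestoneId);
for (const l of items) {
  console.log(`${l.content} (source: ${l.sourceTaskId ?? "direct"})`);
}
```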
## VCS Integration (Required for Workflow)
VCS operations are **automatically handled** by the tasks API:
| Task Operation | VCS Effect |
|----------------|------------|
| `tasks.start(id)` | **VCS required** - creates bookmark `task/<id>`, records start commit |
| `tasks.complete(id)` | **VCS required** - commits changes (NothingToCommit = success) |
| `tasks.delete(id)` | Best-effort bookmark cleanup (logs warning on failure) |
**VCS (jj or git) is required** for start/complete. Fails with `NotARepository` if none found. CRUD operations work without VCS.
## Quick Examples
```javascript
// Create milestone with subtask
const milestone = await tasks.create({
description: "Build authentication system",
context: "JWT-based auth with refresh tokens",
priority: 1
});
const subtask = await tasks.create({
description: "Implement token refresh logic",
parentId: milestone.id,
context: "Handle 7-day expiry"
});
// Start work (VCS required - creates bookmark)
await tasks.start(subtask.id);
// ... do implementation work ...
// Complete task with learnings (VCS required - commits changes, bubbles learnings to parent)
await tasks.complete(subtask.id, {
result: "Implemented using jose library",
learnings: ["Use jose instead of jsonwebtoken"]
});
// Get progress summary
const progress = await tasks.progress(milestone.id);
// -> { total: 2, completed: 1, ready: 1, blocked: 0 }
// Search tasks
const authTasks = await tasks.search("authentication");
// Get task tree
const tree = await tasks.tree(milestone.id);
// -> { task: Task, children: TaskTree[] }
```

View File

@@ -0,0 +1,177 @@
# Examples
## Example 1: With Breakdown
### Input (`auth-plan.md`)
```markdown
# Plan: Add Authentication System
## Implementation
1. Create database schema for users/tokens
2. Implement auth controller with endpoints
3. Add JWT middleware for route protection
4. Build frontend login/register forms
5. Add integration tests
```
### Execution
```javascript
const milestone = await tasks.create({
description: "Add Authentication System",
context: `# Add Authentication System\n\n## Implementation\n1. Create database schema...`,
priority: 3
});
const subtasks = [
{ desc: "Create database schema for users/tokens", done: "Migration runs, tables exist with FK constraints" },
{ desc: "Implement auth controller with endpoints", done: "POST /register, /login return expected responses" },
{ desc: "Add JWT middleware for route protection", done: "Unauthorized requests return 401, valid tokens pass" },
{ desc: "Build frontend login/register forms", done: "Forms render, submit without errors" },
{ desc: "Add integration tests", done: "`npm test` passes with auth coverage" }
];
for (const sub of subtasks) {
await tasks.create({
description: sub.desc,
context: `Part of 'Add Authentication System'.\n\nDone when: ${sub.done}`,
parentId: milestone.id
});
}
return { milestone: milestone.id, subtaskCount: subtasks.length };
```
### Output
```
Created milestone task_01ABC from plan
Analyzed plan structure: Found 5 distinct implementation steps
Created 5 subtasks:
- task_02XYZ: Create database schema for users/tokens
- task_03ABC: Implement auth controller with endpoints
- task_04DEF: Add JWT middleware for route protection
- task_05GHI: Build frontend login/register forms
- task_06JKL: Add integration tests
View structure: execute `await tasks.list({ parentId: "task_01ABC" })`
```
## Example 2: No Breakdown
### Input (`bugfix-plan.md`)
```markdown
# Plan: Fix Login Validation Bug
## Problem
Login fails when username has spaces
## Solution
Update validation regex in auth.ts line 42
```
### Execution
```javascript
const milestone = await tasks.create({
description: "Fix Login Validation Bug",
context: `# Fix Login Validation Bug\n\n## Problem\nLogin fails...`,
priority: 3
});
return { milestone: milestone.id, breakdown: false };
```
### Output
```
Created milestone task_01ABC from plan
Plan describes a cohesive single task. No subtask breakdown needed.
View task: execute `await tasks.get("task_01ABC")`
```
## Example 3: Epic-Level (Two-Level Hierarchy)
### Input (`full-auth-plan.md`)
```markdown
# Complete User Authentication System
## Phase 1: Backend Infrastructure
1. Database schema for users/sessions
2. Password hashing with bcrypt
3. JWT token generation
## Phase 2: API Endpoints
1. POST /auth/register
2. POST /auth/login
3. POST /auth/logout
## Phase 3: Frontend
1. Login/register forms
2. Protected routes
3. Session persistence
```
### Execution
```javascript
const milestone = await tasks.create({
description: "Complete User Authentication System",
context: `<full-markdown>`,
priority: 3
});
const phases = [
{ name: "Backend Infrastructure", items: [
{ desc: "Database schema", done: "Migration runs, tables exist" },
{ desc: "Password hashing", done: "bcrypt hashes verified in tests" },
{ desc: "JWT tokens", done: "Token generation/validation works" }
]},
{ name: "API Endpoints", items: [
{ desc: "POST /auth/register", done: "Creates user, returns 201" },
{ desc: "POST /auth/login", done: "Returns JWT on valid credentials" },
{ desc: "POST /auth/logout", done: "Invalidates session, returns 200" }
]},
{ name: "Frontend", items: [
{ desc: "Login/register forms", done: "Forms render, submit successfully" },
{ desc: "Protected routes", done: "Redirect to login when unauthenticated" },
{ desc: "Session persistence", done: "Refresh maintains logged-in state" }
]}
];
for (const phase of phases) {
const phaseTask = await tasks.create({
description: phase.name,
parentId: milestone.id
});
for (const item of phase.items) {
await tasks.create({
description: item.desc,
context: `Part of '${phase.name}'.\n\nDone when: ${item.done}`,
parentId: phaseTask.id
});
}
}
return milestone;
```
### Output
```
Created milestone task_01ABC from plan
Analyzed plan structure: Found 3 major phases
Created as milestone with 3 tasks:
- task_02XYZ: Backend Infrastructure (3 subtasks)
- task_03ABC: API Endpoints (3 subtasks)
- task_04DEF: Frontend (3 subtasks)
View structure: execute `await tasks.list({ parentId: "task_01ABC" })`
```

View File

@@ -0,0 +1,210 @@
# Implementation Instructions
**For the skill agent executing `/overseer-plan`.** Follow this workflow exactly.
## Step 1: Read Markdown File
Read the provided file using the Read tool.
## Step 2: Extract Title
- Parse first `#` heading as title
- Strip "Plan: " prefix if present (case-insensitive)
- Fallback: use filename without extension
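A minimal sketch of this extraction (assumes the file content is already read into a string; the helper name is illustrative):
```javascript
// Parse first "# " heading, strip optional "Plan: " prefix, fall back to filename
function extractTitle(markdown, filename) {
  const heading = markdown.match(/^#\s+(.+)$/m)?.[1];
  const title = heading?.replace(/^plan:\s*/i, "").trim();
  return title || filename.replace(/\.[^.]+$/, "");
}
```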
## Step 3: Create Milestone via MCP
Basic creation:
```javascript
const milestone = await tasks.create({
description: "<extracted-title>",
context: `<full-markdown-content>`,
priority: <priority-if-provided-else-3>
});
return milestone;
```
With `--parent` option:
```javascript
const task = await tasks.create({
description: "<extracted-title>",
context: `<full-markdown-content>`,
parentId: "<parent-id>",
priority: <priority-if-provided-else-3>
});
return task;
```
Capture returned task ID for subsequent steps.
## Step 4: Analyze Plan Structure
### Breakdown Indicators
1. **Numbered/bulleted implementation lists (3-7 items)**
```markdown
## Implementation
1. Create database schema
2. Build API endpoints
3. Add frontend components
```
2. **Clear subsections under implementation/tasks/steps**
```markdown
### 1. Backend Changes
- Modify server.ts
### 2. Frontend Updates
- Update login form
```
3. **File-specific sections**
```markdown
### `src/auth.ts` - Add JWT validation
### `src/middleware.ts` - Create auth middleware
```
4. **Sequential phases**
```markdown
**Phase 1: Database Layer**
**Phase 2: API Layer**
```
### Do NOT Break Down When
- Only 1-2 steps/items
- Plan is a single cohesive fix
- Content is exploratory ("investigate", "research")
- Work items inseparable
- Plan very short (<10 lines)
## Step 5: Validate Atomicity & Acceptance Criteria
For each proposed task, verify:
- **Atomic**: Can be completed in single commit
- **Validated**: Has clear acceptance criteria
If task too large -> split further.
If no validation -> add to context:
```
Done when: <specific observable criteria>
```
Examples of good acceptance criteria:
- "Done when: `npm test` passes, new migration applied"
- "Done when: API returns 200 with expected payload"
- "Done when: Component renders without console errors"
- "Done when: Type check passes (`tsc --noEmit`)"
## Step 6: Oracle Review
Before creating tasks, invoke Oracle to review the proposed breakdown.
**Prompt Oracle with:**
```
Review this task breakdown for "<milestone>":
1. <task> - Done when: <criteria>
2. <task> - Done when: <criteria>
...
Check:
- Are tasks truly atomic (single commit)?
- Are validation criteria clear and observable?
- Does milestone deliver demoable increment?
- Missing dependencies/blockers?
- Any tasks that should be split or merged?
```
Incorporate Oracle's feedback, then proceed to create tasks.
## Step 7: Create Subtasks (If Breaking Down)
### Extract for Each Subtask
1. **Description**: Strip numbering, keep concise (1-10 words), imperative form
2. **Context**: Section content + "Part of [milestone description]" + acceptance criteria
### Flat Breakdown
```javascript
const subtasks = [
{ description: "Create database schema", context: "Schema for users/tokens. Part of 'Add Auth'.\n\nDone when: Migration runs, tables exist with FK constraints." },
{ description: "Build API endpoints", context: "POST /auth/register, /auth/login. Part of 'Add Auth'.\n\nDone when: Endpoints return expected responses, tests pass." }
];
const created = [];
for (const sub of subtasks) {
const task = await tasks.create({
description: sub.description,
context: sub.context,
parentId: milestone.id
});
created.push(task);
}
return { milestone: milestone.id, subtasks: created };
```
### Epic-Level Breakdown (phases with sub-items)
```javascript
// Create phase as task under milestone
const phase = await tasks.create({
description: "Backend Infrastructure",
context: "Phase 1 context...",
parentId: milestoneId
});
// Create subtasks under phase
for (const item of phaseItems) {
await tasks.create({
description: item.description,
context: item.context,
parentId: phase.id
});
}
```
## Step 8: Report Results
### Subtasks Created
```
Created milestone <id> from plan
Analyzed plan structure: Found <N> distinct implementation steps
Created <N> subtasks:
- <id>: <description>
- <id>: <description>
...
View structure: execute `await tasks.list({ parentId: "<id>" })`
```
### No Breakdown
```
Created milestone <id> from plan
Plan describes a cohesive single task. No subtask breakdown needed.
View task: execute `await tasks.get("<id>")`
```
### Epic-Level Breakdown
```
Created milestone <id> from plan
Analyzed plan structure: Found <N> major phases
Created as milestone with <N> tasks:
- <id>: <phase-name> (<M> subtasks)
- <id>: <phase-name> (<M> subtasks)
...
View structure: execute `await tasks.list({ parentId: "<id>" })`
```

View File

@@ -0,0 +1,191 @@
---
name: overseer
description: Manage tasks via Overseer codemode MCP. Use when tracking multi-session work, breaking down implementation, or persisting context for handoffs.
license: MIT
metadata:
author: dmmulroy
version: "1.0.0"
---
# Agent Coordination with Overseer
## Core Principle: Tickets, Not Todos
Overseer tasks are **tickets** - structured artifacts with comprehensive context:
- **Description**: One-line summary (issue title)
- **Context**: Full background, requirements, approach (issue body)
- **Result**: Implementation details, decisions, outcomes (PR description)
Think: "Would someone understand the what, why, and how from this task alone AND what success looks like?"
## Task IDs are Ephemeral
**Never reference task IDs in external artifacts** (commits, PRs, docs). Task IDs like `task_01JQAZ...` become meaningless once tasks complete. Describe the work itself, not the task that tracked it.
## Overseer vs OpenCode's TodoWrite
| | Overseer | TodoWrite |
| --------------- | ------------------------------------- | ---------------------- |
| **Persistence** | SQLite database | Session-only |
| **Context** | Rich (description + context + result) | Basic |
| **Hierarchy** | 3-level (milestone -> task -> subtask)| Flat |
Use **Overseer** for persistent work. Use **TodoWrite** for ephemeral in-session tracking only.
## When to Use Overseer
**Use Overseer when:**
- Breaking down complexity into subtasks
- Work spans multiple sessions
- Context needs to persist for handoffs
- Recording decisions for future reference
**Skip Overseer when:**
- Work is a single atomic action
- Everything fits in one message exchange
- Overhead exceeds value
- TodoWrite is sufficient
## Finding Work
```javascript
// Get next ready task with full context (recommended for work sessions)
const task = await tasks.nextReady(milestoneId); // TaskWithContext | null
if (!task) {
console.log("No ready tasks");
return;
}
// Get all ready tasks (for progress overview)
const readyTasks = await tasks.list({ ready: true }); // Task[]
```
**Use `nextReady()`** when starting work - returns `TaskWithContext | null` (deepest ready leaf with full context chain + inherited learnings).
**Use `list({ ready: true })`** for status/progress checks - returns `Task[]` without context chain.
## Basic Workflow
```javascript
// 1. Get next ready task (returns TaskWithContext | null)
const task = await tasks.nextReady();
if (!task) return "No ready tasks";
// 2. Review context (available on TaskWithContext)
console.log(task.context.own); // This task's context
console.log(task.context.parent); // Parent's context (if depth > 0)
console.log(task.context.milestone); // Root milestone context (if depth > 1)
console.log(task.learnings.own); // Learnings attached to this task (bubbled from children)
// 3. Start work (VCS required - creates bookmark, records start commit)
await tasks.start(task.id);
// 4. Implement...
// 5. Complete with learnings (VCS required - commits changes, bubbles learnings to parent)
await tasks.complete(task.id, {
result: "Implemented login endpoint with JWT tokens",
learnings: ["bcrypt rounds should be 12 for production"]
});
```
See @file references/workflow.md for detailed workflow guidance.
## Understanding Task Context
Tasks have **progressive context** - inherited from ancestors:
```javascript
const task = await tasks.get(taskId); // Returns TaskWithContext
// task.context.own - this task's context (always present)
// task.context.parent - parent task's context (if depth > 0)
// task.context.milestone - root milestone's context (if depth > 1)
// Task's own learnings (bubbled from completed children)
// task.learnings.own - learnings attached to this task
```
## Return Type Summary
| Method | Returns | Notes |
|--------|---------|-------|
| `tasks.get(id)` | `TaskWithContext` | Full context chain + inherited learnings |
| `tasks.nextReady()` | `TaskWithContext \| null` | Deepest ready leaf with full context |
| `tasks.list()` | `Task[]` | Basic task fields only |
| `tasks.create()` | `Task` | No context chain |
| `tasks.start/complete()` | `Task` | No context chain |
## Blockers
Blockers prevent a task from being ready until the blocker completes.
**Constraints:**
- Blockers cannot be self
- Blockers cannot be ancestors (parent, grandparent, etc.)
- Blockers cannot be descendants
- Creating/reparenting with invalid blockers is rejected
```javascript
// Add blocker - taskA waits for taskB
await tasks.block(taskA.id, taskB.id);
// Remove blocker
await tasks.unblock(taskA.id, taskB.id);
```
## Task Hierarchies
Three levels: **Milestone** (depth 0) -> **Task** (depth 1) -> **Subtask** (depth 2).
| Level | Name | Purpose | Example |
|-------|------|---------|---------|
| 0 | **Milestone** | Large initiative | "Add user authentication system" |
| 1 | **Task** | Significant work item | "Implement JWT middleware" |
| 2 | **Subtask** | Atomic step | "Add token verification function" |
**Choosing the right level:**
- Small feature (1-2 files) -> Single task
- Medium feature (3-7 steps) -> Task with subtasks
- Large initiative (5+ tasks) -> Milestone with tasks
See @file references/hierarchies.md for detailed guidance.
## Recording Results
Complete tasks **immediately after implementing AND verifying**:
- Capture decisions while fresh
- Note deviations from plan
- Document verification performed
- Create follow-up tasks for tech debt
Your result must include explicit verification evidence. See @file references/verification.md.
## Best Practices
1. **Right-size tasks**: Completable in one focused session
2. **Clear completion criteria**: Context should define "done"
3. **Don't over-decompose**: 3-7 children per parent
4. **Action-oriented descriptions**: Start with verbs ("Add", "Fix", "Update")
5. **Verify before completing**: Tests passing, manual testing done
---
## Reading Order
| Task | File |
|------|------|
| Understanding API | @file references/api.md |
| Implementation workflow | @file references/workflow.md |
| Task decomposition | @file references/hierarchies.md |
| Good/bad examples | @file references/examples.md |
| Verification checklist | @file references/verification.md |
## In This Reference
| File | Purpose |
|------|---------|
| `references/api.md` | Overseer MCP codemode API types/methods |
| `references/workflow.md` | Start->implement->complete workflow |
| `references/hierarchies.md` | Milestone/task/subtask organization |
| `references/examples.md` | Good/bad context and result examples |
| `references/verification.md` | Verification checklist and process |

View File

@@ -0,0 +1,192 @@
# Overseer Codemode MCP API
Execute JavaScript code to interact with Overseer task management.
## Task Interfaces
```typescript
// Basic task - returned by list(), create(), start(), complete()
// Note: Does NOT include context or learnings fields
interface Task {
id: string;
parentId: string | null;
description: string;
priority: 1 | 2 | 3 | 4 | 5;
completed: boolean;
completedAt: string | null;
startedAt: string | null;
createdAt: string; // ISO 8601
updatedAt: string;
result: string | null; // Completion notes
commitSha: string | null; // Auto-populated on complete
depth: 0 | 1 | 2; // 0=milestone, 1=task, 2=subtask
blockedBy?: string[]; // Blocking task IDs (omitted if empty)
blocks?: string[]; // Tasks this blocks (omitted if empty)
bookmark?: string; // VCS bookmark name (if started)
startCommit?: string; // Commit SHA at start
effectivelyBlocked: boolean; // True if task OR ancestor has incomplete blockers
}
// Task with full context - returned by get(), nextReady()
interface TaskWithContext extends Task {
context: {
own: string; // This task's context
parent?: string; // Parent's context (depth > 0)
milestone?: string; // Root milestone's context (depth > 1)
};
learnings: {
own: Learning[]; // This task's learnings (bubbled from completed children)
parent: Learning[]; // Parent's learnings (depth > 0)
milestone: Learning[]; // Milestone's learnings (depth > 1)
};
}
// Task tree structure - returned by tree()
interface TaskTree {
task: Task;
children: TaskTree[];
}
// Progress summary - returned by progress()
interface TaskProgress {
total: number;
completed: number;
ready: number; // !completed && !effectivelyBlocked
blocked: number; // !completed && effectivelyBlocked
}
// Task type alias for depth filter
type TaskType = "milestone" | "task" | "subtask";
```
## Learning Interface
```typescript
interface Learning {
id: string;
taskId: string;
content: string;
sourceTaskId: string | null;
createdAt: string;
}
```
## Tasks API
```typescript
declare const tasks: {
list(filter?: {
parentId?: string;
ready?: boolean;
completed?: boolean;
depth?: 0 | 1 | 2; // 0=milestones, 1=tasks, 2=subtasks
type?: TaskType; // Alias: "milestone"|"task"|"subtask" (mutually exclusive with depth)
}): Promise<Task[]>;
get(id: string): Promise<TaskWithContext>;
create(input: {
description: string;
context?: string;
parentId?: string;
priority?: 1 | 2 | 3 | 4 | 5; // Must be 1-5
blockedBy?: string[]; // Cannot be ancestors/descendants
}): Promise<Task>;
update(id: string, input: {
description?: string;
context?: string;
priority?: 1 | 2 | 3 | 4 | 5;
parentId?: string;
}): Promise<Task>;
start(id: string): Promise<Task>;
complete(id: string, input?: { result?: string; learnings?: string[] }): Promise<Task>;
reopen(id: string): Promise<Task>;
delete(id: string): Promise<void>;
block(taskId: string, blockerId: string): Promise<void>;
unblock(taskId: string, blockerId: string): Promise<void>;
nextReady(milestoneId?: string): Promise<TaskWithContext | null>;
tree(rootId?: string): Promise<TaskTree | TaskTree[]>;
search(query: string): Promise<Task[]>;
progress(rootId?: string): Promise<TaskProgress>;
};
```
| Method | Returns | Description |
|--------|---------|-------------|
| `list` | `Task[]` | Filter by `parentId`, `ready`, `completed`, `depth`, `type` |
| `get` | `TaskWithContext` | Get task with full context chain + inherited learnings |
| `create` | `Task` | Create task (priority must be 1-5) |
| `update` | `Task` | Update description, context, priority, parentId |
| `start` | `Task` | **VCS required** - creates bookmark, records start commit |
| `complete` | `Task` | **VCS required** - commits changes + bubbles learnings to parent |
| `reopen` | `Task` | Reopen completed task |
| `delete` | `void` | Delete task + best-effort VCS bookmark cleanup |
| `block` | `void` | Add blocker (cannot be self, ancestor, or descendant) |
| `unblock` | `void` | Remove blocker relationship |
| `nextReady` | `TaskWithContext \| null` | Get deepest ready leaf with full context |
| `tree` | `TaskTree \| TaskTree[]` | Get task tree (all milestones if no ID) |
| `search` | `Task[]` | Search by description/context/result (case-insensitive) |
| `progress` | `TaskProgress` | Aggregate counts for milestone or all tasks |
## Learnings API
Learnings are added via `tasks.complete(id, { learnings: [...] })` and bubble to immediate parent (preserving `sourceTaskId`).
```typescript
declare const learnings: {
list(taskId: string): Promise<Learning[]>;
};
```
| Method | Description |
|--------|-------------|
| `list` | List learnings for task |
## VCS Integration (Required for Workflow)
VCS operations are **automatically handled** by the tasks API:
| Task Operation | VCS Effect |
|----------------|------------|
| `tasks.start(id)` | **VCS required** - creates bookmark `task/<id>`, records start commit |
| `tasks.complete(id)` | **VCS required** - commits changes (NothingToCommit = success) |
| `tasks.delete(id)` | Best-effort bookmark cleanup (logs warning on failure) |
**VCS (jj or git) is required** for start/complete. Fails with `NotARepository` if none found. CRUD operations work without VCS.
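A defensive sketch for environments where a repository may be missing (assumes the error surfaces with `NotARepository` in its message, as named above):
```javascript
// Guard start() when the working directory might not be a jj/git repo
try {
  await tasks.start(taskId);
} catch (err) {
  if (err.message.includes("NotARepository")) {
    console.log("No VCS found - initialize a jj/git repo before starting tasks");
  } else {
    throw err;
  }
}
```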
## Quick Examples
```javascript
// Create milestone with subtask
const milestone = await tasks.create({
description: "Build authentication system",
context: "JWT-based auth with refresh tokens",
priority: 1
});
const subtask = await tasks.create({
description: "Implement token refresh logic",
parentId: milestone.id,
context: "Handle 7-day expiry"
});
// Start work (auto-creates VCS bookmark)
await tasks.start(subtask.id);
// ... do implementation work ...
// Complete task with learnings (VCS required - commits changes, bubbles learnings to parent)
await tasks.complete(subtask.id, {
result: "Implemented using jose library",
learnings: ["Use jose instead of jsonwebtoken"]
});
// Get progress summary
const progress = await tasks.progress(milestone.id);
// -> { total: 2, completed: 1, ready: 1, blocked: 0 }
// Search tasks
const authTasks = await tasks.search("authentication");
// Get task tree
const tree = await tasks.tree(milestone.id);
// -> { task: Task, children: TaskTree[] }
```

View File

@@ -0,0 +1,195 @@
# Examples
Good and bad examples for writing task context and results.
## Writing Context
Context should include everything needed to do the work without asking questions:
- **What** needs to be done and why
- **Implementation approach** (steps, files to modify, technical choices)
- **Done when** (acceptance criteria)
### Good Context Example
```javascript
await tasks.create({
description: "Migrate storage to one file per task",
context: `Change storage format for git-friendliness:
Structure:
.overseer/
└── tasks/
├── task_01ABC.json
└── task_02DEF.json
NO INDEX - just scan task files. For typical task counts (<100), this is fast.
Implementation:
1. Update storage.ts:
- read(): Scan .overseer/tasks/*.json, parse each, return TaskStore
- write(task): Write single task to .overseer/tasks/{id}.json
- delete(id): Remove .overseer/tasks/{id}.json
- Add readTask(id) for single task lookup
2. Task file format: Same as current Task schema (one task per file)
3. Migration: On read, if old tasks.json exists, migrate to new format
4. Update tests
Benefits:
- Create = new file (never conflicts)
- Update = single file change
- Delete = remove file
- No index to maintain or conflict
- git diff shows exactly which tasks changed`
});
```
**Why it works:** States the goal, shows the structure, lists specific implementation steps, explains benefits. Someone could pick this up without asking questions.
### Bad Context Example
```javascript
await tasks.create({
description: "Add auth",
context: "Need to add authentication"
});
```
**What's missing:** How to implement it, what files, what's done when, technical approach.
## Writing Results
Results should capture what was actually done:
- **What changed** (implementation summary)
- **Key decisions** (and why)
- **Verification** (tests passing, manual testing done)
### Good Result Example
```javascript
await tasks.complete(taskId, { result: `Migrated storage from single tasks.json to one file per task:
Structure:
- Each task stored as .overseer/tasks/{id}.json
- No index file (avoids merge conflicts)
- Directory scanned on read to build task list
Implementation:
- Modified Storage.read() to scan .overseer/tasks/ directory
- Modified Storage.write() to write/delete individual task files
- Auto-migration from old single-file format on first read
- Atomic writes using temp file + rename pattern
Trade-offs:
- Slightly slower reads (must scan directory + parse each file)
- Acceptable since task count is typically small (<100)
- Better git history - each task change is isolated
Verification:
- All 60 tests passing
- Build successful
- Manually tested migration: old -> new format works` });
```
**Why it works:** States what changed, lists implementation details, explains trade-offs, confirms verification.
### Bad Result Example
```javascript
await tasks.complete(taskId, "Fixed the storage issue");
```
**What's missing:** What was actually implemented, how, what decisions were made, verification evidence.
## Subtask Context Example
Link subtasks to their parent and explain what this piece does specifically:
```javascript
await tasks.create({
description: "Add token verification function",
parentId: jwtTaskId,
context: `Part of JWT middleware (parent task). This subtask: token verification.
What it does:
- Verify JWT signature and expiration on protected routes
- Extract user ID from token payload
- Attach user object to request
- Return 401 for invalid/expired tokens
Implementation:
- Create src/middleware/verify-token.ts
- Export verifyToken middleware function
- Use jose library (preferred over jsonwebtoken)
- Handle expired vs invalid token cases separately
Done when:
- Middleware function complete and working
- Unit tests cover valid/invalid/expired scenarios
- Integrated into auth routes in server.ts
- Parent task can use this to protect endpoints`
});
```
## Error Handling Examples
### Handling Pending Children
```javascript
try {
await tasks.complete(taskId, "Done");
} catch (err) {
if (err.message.includes("pending children")) {
const pending = await tasks.list({ parentId: taskId, completed: false });
console.log(`Cannot complete: ${pending.length} children pending`);
for (const child of pending) {
console.log(`- ${child.id}: ${child.description}`);
}
return;
}
throw err;
}
```
### Handling Blocked Tasks
```javascript
const task = await tasks.get(taskId);
if (task.blockedBy?.length) {
console.log("Task is blocked by:");
for (const blockerId of task.blockedBy) {
const blocker = await tasks.get(blockerId);
console.log(`- ${blocker.description} (${blocker.completed ? 'done' : 'pending'})`);
}
return "Cannot start - blocked by other tasks";
}
await tasks.start(taskId);
```
## Creating Task Hierarchies
```javascript
// Create milestone with tasks
const milestone = await tasks.create({
description: "Implement user authentication",
context: "Full auth: JWT, login/logout, password reset, rate limiting",
priority: 2
});
const subtasks = [
"Add login endpoint",
"Add logout endpoint",
"Implement JWT token service",
"Add password reset flow"
];
for (const desc of subtasks) {
await tasks.create({ description: desc, parentId: milestone.id });
}
```
See @file references/hierarchies.md for sequential subtasks with blockers.

View File

@@ -0,0 +1,170 @@
# Task Hierarchies
Guidance for organizing work into milestones, tasks, and subtasks.
## Three Levels
| Level | Name | Purpose | Example |
|-------|------|---------|---------|
| 0 | **Milestone** | Large initiative (5+ tasks) | "Add user authentication system" |
| 1 | **Task** | Significant work item | "Implement JWT middleware" |
| 2 | **Subtask** | Atomic implementation step | "Add token verification function" |
**Maximum depth is 3 levels.** Attempting to create a child of a subtask will fail.
## When to Use Each Level
### Single Task (No Hierarchy)
- Small feature (1-2 files, ~1 session)
- Work is atomic, no natural breakdown
### Task with Subtasks
- Medium feature (3-5 files, 3-7 steps)
- Work naturally decomposes into discrete steps
- Subtasks could be worked on independently
### Milestone with Tasks
- Large initiative (multiple areas, many sessions)
- Work spans 5+ distinct tasks
- You want high-level progress tracking
## Creating Hierarchies
```javascript
// Create the milestone
const milestone = await tasks.create({
description: "Add user authentication system",
context: "Full auth system with JWT tokens, password reset...",
priority: 2
});
// Create tasks under it
const jwtTask = await tasks.create({
description: "Implement JWT token generation",
context: "Create token service with signing and verification...",
parentId: milestone.id
});
const resetTask = await tasks.create({
description: "Add password reset flow",
context: "Email-based password reset with secure tokens...",
parentId: milestone.id
});
// For complex tasks, add subtasks
const verifySubtask = await tasks.create({
description: "Add token verification function",
context: "Verify JWT signature and expiration...",
parentId: jwtTask.id
});
```
## Subtask Best Practices
Each subtask should be:
- **Independently understandable**: Clear on its own
- **Linked to parent**: Reference parent, explain how this piece fits
- **Specific scope**: What this subtask does vs what parent/siblings do
- **Clear completion**: Define "done" for this piece specifically
Example subtask context:
```
Part of JWT middleware (parent task). This subtask: token verification.
What it does:
- Verify JWT signature and expiration
- Extract user ID from payload
- Return 401 for invalid/expired tokens
Done when:
- Function complete and tested
- Unit tests cover valid/invalid/expired cases
```
## Decomposition Strategy
When faced with large tasks:
1. **Assess scope**: Is this milestone-level (5+ tasks) or task-level (3-7 subtasks)?
2. Create parent task/milestone with overall goal and context
3. Analyze and identify 3-7 logical children
4. Create children with specific contexts and boundaries
5. Work through systematically, completing with results
6. Complete parent with summary of overall implementation
### Don't Over-Decompose
- **3-7 children per parent** is usually right
- If you'd only have 1-2 subtasks, just make separate tasks
- If you need depth 3+, restructure your breakdown
## Viewing Hierarchies
```javascript
// List all tasks under a milestone
const children = await tasks.list({ parentId: milestoneId });
// Get task with context breadcrumb
const task = await tasks.get(taskId);
// task.context.parent - parent's context
// task.context.milestone - root milestone's context
// Check progress
const pending = await tasks.list({ parentId: milestoneId, completed: false });
const done = await tasks.list({ parentId: milestoneId, completed: true });
console.log(`Progress: ${done.length}/${done.length + pending.length}`);
```
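When only the aggregate counts matter, `tasks.progress()` from the API reference gives the same numbers in one call:
```javascript
// One-call progress summary for a milestone
const { total, completed, ready, blocked } = await tasks.progress(milestoneId);
console.log(`Progress: ${completed}/${total} (ready: ${ready}, blocked: ${blocked})`);
```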
## Completion Rules
1. **Cannot complete with pending children**
```javascript
// This will fail if task has incomplete subtasks
await tasks.complete(taskId, "Done");
// Error: "pending children"
```
2. **Complete children first**
- Work through subtasks systematically
- Complete each with meaningful results
3. **Parent result summarizes overall implementation**
```javascript
await tasks.complete(milestoneId, { result: `User authentication system complete:
Implemented:
- JWT token generation and verification
- Login/logout endpoints
- Password reset flow
- Rate limiting
5 tasks completed, all tests passing.` });
```
## Blocking Dependencies
Use `blockedBy` for cross-task dependencies:
```javascript
// Create task that depends on another
const deployTask = await tasks.create({
description: "Deploy to production",
context: "...",
blockedBy: [testTaskId, reviewTaskId]
});
// Add blocker to existing task
await tasks.block(deployTaskId, testTaskId);
// Remove blocker
await tasks.unblock(deployTaskId, testTaskId);
```
**Use blockers when:**
- Task B cannot start until Task A completes
- Multiple tasks depend on a shared prerequisite
**Don't use blockers when:**
- Tasks can be worked on in parallel
- The dependency is just logical grouping (use subtasks instead)

View File

@@ -0,0 +1,186 @@
# Verification Guide
Before marking any task complete, you MUST verify your work. Verification separates "I think it's done" from "it's actually done."
## The Verification Process
1. **Re-read the task context**: What did you originally commit to do?
2. **Check acceptance criteria**: Does your implementation satisfy the "Done when" conditions?
3. **Run relevant tests**: Execute the test suite and document results
4. **Test manually**: Actually try the feature/change yourself
5. **Compare with requirements**: Does what you built match what was asked?
## Strong vs Weak Verification
### Strong Verification Examples
- "All 60 tests passing, build successful"
- "All 69 tests passing (4 new tests for middleware edge cases)"
- "Manually tested with valid/invalid/expired tokens - all cases work"
- "Ran `cargo test` - 142 tests passed, 0 failed"
### Weak Verification (Avoid)
- "Should work now" - "should" means not verified
- "Made the changes" - no evidence it works
- "Added tests" - did the tests pass? What's the count?
- "Fixed the bug" - what bug? Did you verify the fix?
- "Done" - done how? prove it
## Verification by Task Type
| Task Type | How to Verify |
|-----------|---------------|
| Code changes | Run full test suite, document passing count |
| New features | Run tests + manual testing of functionality |
| Configuration | Test the config works (run commands, check workflows) |
| Documentation | Verify examples work, links resolve, formatting renders |
| Refactoring | Confirm tests still pass, no behavior changes |
| Bug fixes | Reproduce bug first, verify fix, add regression test |
## Cross-Reference Checklist
Before marking complete, verify all applicable items:
- [ ] Task description requirements met
- [ ] Context "Done when" criteria satisfied
- [ ] Tests passing (document count: "All X tests passing")
- [ ] Build succeeds (if applicable)
- [ ] Manual testing done (describe what you tested)
- [ ] No regressions introduced
- [ ] Edge cases considered (error handling, invalid input)
- [ ] Follow-up work identified (created new tasks if needed)
**If you can't check all applicable boxes, the task isn't done yet.**
## Result Examples with Verification
### Code Implementation
```javascript
await tasks.complete(taskId, { result: `Implemented JWT middleware:
Implementation:
- Created src/middleware/verify-token.ts
- Separated 'expired' vs 'invalid' error codes
- Added user extraction from payload
Verification:
- All 69 tests passing (4 new tests for edge cases)
- Manually tested with valid token: Access granted
- Manually tested with expired token: 401 with 'token_expired'
- Manually tested with invalid signature: 401 with 'invalid_token'` });
```
### Configuration/Infrastructure
```javascript
await tasks.complete(taskId, { result: `Added GitHub Actions workflow for CI:
Implementation:
- Created .github/workflows/ci.yml
- Jobs: lint, test, build with pnpm cache
Verification:
- Pushed to test branch, opened PR #123
- Workflow triggered automatically
- All jobs passed (lint: 0 errors, test: 69/69, build: success)
- Total run time: 2m 34s` });
```
### Refactoring
```javascript
await tasks.complete(taskId, { result: `Refactored storage to one file per task:
Implementation:
- Split tasks.json into .overseer/tasks/{id}.json files
- Added auto-migration from old format
- Atomic writes via temp+rename
Verification:
- All 60 tests passing (including 8 storage tests)
- Build successful
- Manually tested migration: old -> new format works
- Confirmed git diff shows only changed tasks` });
```
### Bug Fix
```javascript
await tasks.complete(taskId, { result: `Fixed login validation accepting usernames with spaces:
Root cause:
- Validation regex didn't account for leading/trailing spaces
Fix:
- Added .trim() before validation in src/auth/validate.ts:42
- Updated regex to reject internal spaces
Verification:
- All 45 tests passing (2 new regression tests)
- Manually tested:
- " admin" -> rejected (leading space)
- "admin " -> rejected (trailing space)
- "ad min" -> rejected (internal space)
- "admin" -> accepted`);
```
### Documentation
```javascript
await tasks.complete(taskId, { result: `Updated API documentation for auth endpoints:
Implementation:
- Added docs for POST /auth/login
- Added docs for POST /auth/logout
- Added docs for POST /auth/refresh
- Included example requests/responses
Verification:
- All code examples tested and working
- Links verified (no 404s)
- Rendered in local preview - formatting correct
- Spell-checked content` });
```
## Common Verification Mistakes
| Mistake | Better Approach |
|---------|-----------------|
| "Tests pass" | "All 42 tests passing" (include count) |
| "Manually tested" | "Manually tested X, Y, Z scenarios" (be specific) |
| "Works" | "Works: [evidence]" (show proof) |
| "Fixed" | "Fixed: [root cause] -> [solution] -> [verification]" |
## When Verification Fails
If verification reveals issues:
1. **Don't complete the task** - it's not done
2. **Document what failed** in task context
3. **Fix the issues** before completing
4. **Re-verify** after fixes
```javascript
// Update context with failure notes
await tasks.update(taskId, {
context: task.context.own + `
Verification attempt 1 (failed):
- Tests: 41/42 passing
- Failing: test_token_refresh - timeout issue
- Need to investigate async handling`
});
// After fixing
await tasks.complete(taskId, { result: `Implemented token refresh:
Implementation:
- Added refresh endpoint
- Fixed async timeout (was missing await)
Verification:
- All 42 tests passing (fixed timeout issue)
- Manual testing: refresh works within 30s window` });
```

View File

@@ -0,0 +1,164 @@
# Implementation Workflow
Step-by-step guide for working with Overseer tasks during implementation.
## 1. Get Next Ready Task
```javascript
// Get next task with full context (recommended)
const task = await tasks.nextReady();
// Or scope to specific milestone
const task = await tasks.nextReady(milestoneId);
if (!task) {
return "No tasks ready - all blocked or completed";
}
```
`nextReady()` returns a `TaskWithContext` (task with inherited context and learnings) or `null`.
## 2. Review Context
Before starting, verify you can answer:
- **What** needs to be done specifically?
- **Why** is this needed?
- **How** should it be implemented?
- **When** is it done (acceptance criteria)?
```javascript
const task = await tasks.get(taskId);
// Task's own context
console.log("Task:", task.context.own);
// Parent context (if task has parent)
if (task.context.parent) {
console.log("Parent:", task.context.parent);
}
// Milestone context (if depth > 1)
if (task.context.milestone) {
console.log("Milestone:", task.context.milestone);
}
// Task's own learnings (bubbled from completed children)
console.log("Task learnings:", task.learnings.own);
```
**If any answer is unclear:**
1. Check parent task or completed blockers for details
2. Suggest entering plan mode to flesh out requirements
**Proceed without full context when:**
- Task is trivial/atomic (e.g., "Add .gitignore entry")
- Conversation already provides the missing context
- Description itself is sufficiently detailed
## 3. Start Task
```javascript
await tasks.start(taskId);
```
**VCS Required:** Creates bookmark `task/<id>`, records start commit. Fails with `NotARepository` if no jj/git found.
After starting, the task's `startedAt` timestamp and VCS `bookmark` are recorded.
## 4. Implement
Work on the task implementation. Note any learnings to include when completing.
## 5. Verify Work
Before completing, verify your implementation. See @file references/verification.md for full checklist.
Quick checklist:
- [ ] Task description requirements met
- [ ] Context "Done when" criteria satisfied
- [ ] Tests passing (document count)
- [ ] Build succeeds
- [ ] Manual testing done
## 6. Complete Task with Learnings
```javascript
await tasks.complete(taskId, {
result: `Implemented login endpoint:
Implementation:
- Created src/auth/login.ts
- Added JWT token generation
- Integrated with user service
Verification:
- All 42 tests passing (3 new)
- Manually tested valid/invalid credentials`,
learnings: [
"bcrypt rounds should be 12+ for production",
"jose library preferred over jsonwebtoken"
]
});
```
**VCS Required:** Commits changes (NothingToCommit treated as success), then deletes the task's bookmark (best-effort) and clears the DB bookmark field on success. Fails with `NotARepository` if no jj/git found.
**Learnings Effect:** Learnings bubble to immediate parent only. `sourceTaskId` is preserved through bubbling, so if this task's learnings later bubble further, the origin is tracked.
The `result` becomes part of the task's permanent record.
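A small sketch of how bubbling is visible afterwards (assumes the completed task had a parent):
```javascript
// After completing a child with learnings, they appear on the parent
const completed = await tasks.get(taskId);
if (completed.parentId) {
  const parent = await tasks.get(completed.parentId);
  for (const l of parent.learnings.own) {
    console.log(`${l.content} (from ${l.sourceTaskId})`);
  }
}
```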
## VCS Integration (Required for Workflow)
VCS operations are **automatically handled** by the tasks API:
| Task Operation | VCS Effect |
|----------------|------------|
| `tasks.start(id)` | **VCS required** - creates bookmark `task/<id>`, records start commit |
| `tasks.complete(id)` | **VCS required** - commits changes, deletes bookmark (best-effort), clears DB bookmark on success |
| `tasks.complete(milestoneId)` | Same + deletes ALL descendant bookmarks recursively (depth-1 and depth-2) |
| `tasks.delete(id)` | Best-effort bookmark cleanup (logs warning on failure) |
**Note:** VCS (jj or git) is required for start/complete. CRUD operations work without VCS.
## Error Handling
### Pending Children
```javascript
try {
await tasks.complete(taskId, "Done");
} catch (err) {
if (err.message.includes("pending children")) {
const pending = await tasks.list({ parentId: taskId, completed: false });
return `Cannot complete: ${pending.length} children pending`;
}
throw err;
}
```
### Task Not Ready
```javascript
const task = await tasks.get(taskId);
// Check if blocked
if (task.blockedBy?.length) {
console.log("Blocked by:", task.blockedBy);
// Complete blockers first, or remove each blocker explicitly
for (const blockerId of task.blockedBy) {
await tasks.unblock(taskId, blockerId);
}
}
```
## Complete Workflow Example
```javascript
const task = await tasks.nextReady();
if (!task) return "No ready tasks";
await tasks.start(task.id);
// ... implement ...
await tasks.complete(task.id, {
result: "Implemented: ... Verification: All 58 tests passing",
learnings: ["Use jose for JWT"]
});
```

View File

@@ -0,0 +1,122 @@
---
name: session-export
description: Update GitHub PR descriptions with AI session export summaries. Use when user asks to add session summary to PR/MR, document AI assistance in PR/MR, or export conversation summary to PR/MR description.
---
# Session Export
Update PR/MR descriptions with a structured summary of the AI-assisted conversation.
## Output Format
````markdown
> [!NOTE]
> This PR was written with AI assistance.
<details><summary>AI Session Export</summary>
<p>
```json
{
"info": {
"title": "<brief task description>",
"agent": "opencode",
"models": ["<model(s) used>"]
},
"summary": [
"<action 1>",
"<action 2>",
...
]
}
```
</p>
</details>
````
## Workflow
### 1. Export Session Data
Get session data using OpenCode CLI:
```bash
opencode export [sessionID]
```
Returns JSON with session info including models used. Use current session if no sessionID provided.
### 2. Generate Summary JSON
From exported data and conversation context, create summary:
- **title**: 2-5 word task description (lowercase)
- **agent**: always "opencode"
- **models**: array from export data
- **summary**: array of terse action statements
- Use past tense ("added", "fixed", "created")
- Start with "user requested..." or "user asked..."
- Chronological order
- Keep the summary to at most 25 turns ("user requested", "agent did")
- **NEVER include sensitive data**: API keys, credentials, secrets, tokens, passwords, env vars
### 3. Update PR/MR Description
**GitHub:**
```bash
gh pr edit <PR_NUMBER> --body "$(cat <<'EOF'
<existing description>
> [!NOTE]
> This PR was written with AI assistance.
<details><summary>AI Session Export</summary>
...
</details>
EOF
)"
```
### 4. Preserve Existing Content
Always fetch and preserve existing PR/MR description:
```bash
# GitHub
gh pr view <PR_NUMBER> --json body -q '.body'
```
Append session export after existing content with blank line separator.
## Example Summary
For a session where user asked to add dark mode:
```json
{
"info": {
"title": "dark mode implementation",
"agent": "opencode",
"models": ["claude sonnet 4"]
},
"summary": [
"user requested dark mode toggle in settings",
"agent explored existing theme system",
"agent created ThemeContext for state management",
"agent added DarkModeToggle component",
"agent updated CSS variables for dark theme",
"agent ran tests and fixed 2 failures",
"agent committed changes"
]
}
```
## Security
**NEVER include in summary:**
- API keys, tokens, secrets
- Passwords, credentials
- Environment variable values
- Private URLs with auth tokens
- Personal identifiable information
- Internal hostnames/IPs

View File

@@ -0,0 +1,464 @@
---
name: solidjs
description: |
SolidJS framework development skill for building reactive web applications with fine-grained reactivity.
Use when working with SolidJS projects including: (1) Creating components with signals, stores, and effects,
(2) Implementing reactive state management, (3) Using control flow components (Show, For, Switch/Match),
(4) Setting up routing with Solid Router, (5) Building full-stack apps with SolidStart,
(6) Data fetching with createResource, (7) Context API for shared state, (8) SSR/SSG configuration.
Triggers: solid, solidjs, solid-js, solid start, solidstart, createSignal, createStore, createEffect.
---
# SolidJS Development
SolidJS is a declarative JavaScript library for building user interfaces with fine-grained reactivity. Unlike virtual DOM frameworks, Solid compiles templates to real DOM nodes and updates them with fine-grained reactions.
## Core Principles
1. **Components run once** — Component functions execute only during initialization, not on every update
2. **Fine-grained reactivity** — Only the specific DOM nodes that depend on changed data update
3. **No virtual DOM** — Direct DOM manipulation via compiled templates
4. **Signals are functions** — Access values by calling: `count()` not `count`
## Reactivity Primitives
### Signals — Basic State
```tsx
import { createSignal } from "solid-js";
const [count, setCount] = createSignal(0);
// Read value (getter)
console.log(count()); // 0
// Update value (setter)
setCount(1);
setCount(prev => prev + 1); // Functional update
```
**Options:**
```tsx
const [value, setValue] = createSignal(initialValue, {
equals: false, // Always trigger updates, even if value unchanged
name: "debugName" // For devtools
});
```
### Effects — Side Effects
```tsx
import { createEffect } from "solid-js";
createEffect(() => {
console.log("Count changed:", count());
// Runs after render, re-runs when dependencies change
});
```
**Key behaviors:**
- Initial run: after render, before browser paint
- Subsequent runs: when tracked dependencies change
- Never runs during SSR or hydration
- Use `onCleanup` for cleanup logic
### Memos — Derived/Cached Values
```tsx
import { createMemo } from "solid-js";
const doubled = createMemo(() => count() * 2);
// Access like signal
console.log(doubled()); // Cached, only recalculates when count changes
```
Use memos when:
- Derived value is expensive to compute
- Derived value is accessed multiple times
- You want to prevent downstream updates when result unchanged
### Resources — Async Data
```tsx
import { createResource } from "solid-js";
const [user, { mutate, refetch }] = createResource(userId, fetchUser);
// In JSX
<Show when={!user.loading} fallback={<Loading />}>
<div>{user()?.name}</div>
</Show>
// Resource properties
user.loading // boolean
user.error // error if failed
user.state // "unresolved" | "pending" | "ready" | "refreshing" | "errored"
user.latest // last successful value
```
## Stores — Complex State
For nested objects/arrays with fine-grained updates:
```tsx
import { createStore } from "solid-js/store";
const [state, setState] = createStore({
user: { name: "John", age: 30 },
todos: []
});
// Path syntax updates
setState("user", "name", "Jane");
setState("todos", todos => [...todos, newTodo]);
setState("todos", 0, "completed", true);
// Produce for immer-like updates
import { produce } from "solid-js/store";
setState(produce(s => {
s.user.age++;
s.todos.push(newTodo);
}));
```
**Store utilities:**
- `produce` — Immer-like mutations
- `reconcile` — Diff and patch data (for API responses)
- `unwrap` — Get raw non-reactive object
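A short sketch of `reconcile` and `unwrap`, which the snippet above doesn't cover (the data shape here is illustrative):
```tsx
import { createStore, reconcile, unwrap } from "solid-js/store";
type Todo = { id: number; text: string; completed: boolean };
const [state, setState] = createStore<{ todos: Todo[] }>({ todos: [] });
// reconcile: diff incoming data against the store, keeping unchanged
// references stable (useful when replacing state from an API response)
function applyServerTodos(todos: Todo[]) {
  setState("todos", reconcile(todos));
}
// unwrap: obtain the raw, non-reactive object (e.g. for serialization)
const snapshot = unwrap(state);
console.log(JSON.stringify(snapshot));
```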
## Components
### Basic Component
```tsx
import { Component } from "solid-js";
const MyComponent: Component<{ name: string }> = (props) => {
return <div>Hello, {props.name}</div>;
};
```
### Props Handling
```tsx
import { splitProps, mergeProps } from "solid-js";
// Default props
const merged = mergeProps({ size: "medium" }, props);
// Split props (for spreading)
const [local, others] = splitProps(props, ["class", "onClick"]);
return <button class={local.class} {...others} />;
```
**Props rules:**
- Props are reactive getters — don't destructure at top level
- Use `props.value` in JSX, not `const { value } = props`
### Children Helper
```tsx
import { children } from "solid-js";
const Wrapper: Component = (props) => {
const resolved = children(() => props.children);
createEffect(() => {
console.log("Children:", resolved());
});
return <div>{resolved()}</div>;
};
```
## Control Flow Components
### Show — Conditional Rendering
```tsx
import { Show } from "solid-js";
<Show when={user()} fallback={<Login />}>
{(user) => <Profile user={user()} />}
</Show>
```
### For — List Rendering (keyed by reference)
```tsx
import { For } from "solid-js";
<For each={items()} fallback={<Empty />}>
{(item, index) => (
<div>{index()}: {item.name}</div>
)}
</For>
```
**Note:** `index` is a signal, `item` is the value.
### Index — List Rendering (keyed by index)
```tsx
import { Index } from "solid-js";
<Index each={items()}>
{(item, index) => (
<input value={item().text} />
)}
</Index>
```
**Note:** `item` is a signal, `index` is the value. Better for primitive arrays or inputs.
### Switch/Match — Multiple Conditions
```tsx
import { Switch, Match } from "solid-js";
<Switch fallback={<Default />}>
<Match when={state() === "loading"}>
<Loading />
</Match>
<Match when={state() === "error"}>
<Error />
</Match>
<Match when={state() === "success"}>
<Success />
</Match>
</Switch>
```
### Dynamic — Dynamic Component
```tsx
import { Dynamic } from "solid-js/web";
<Dynamic component={selected()} someProp="value" />
```
### Portal — Render Outside DOM Hierarchy
```tsx
import { Portal } from "solid-js/web";
<Portal mount={document.body}>
<Modal />
</Portal>
```
### ErrorBoundary — Error Handling
```tsx
import { ErrorBoundary } from "solid-js";
<ErrorBoundary fallback={(err, reset) => (
<div>
Error: {err.message}
<button onClick={reset}>Retry</button>
</div>
)}>
<RiskyComponent />
</ErrorBoundary>
```
### Suspense — Async Loading
```tsx
import { Suspense } from "solid-js";
<Suspense fallback={<Loading />}>
<AsyncComponent />
</Suspense>
```
## Context API
```tsx
import { createContext, useContext } from "solid-js";
// Create context
const CounterContext = createContext<{
count: () => number;
increment: () => void;
}>();
// Provider component
export function CounterProvider(props) {
const [count, setCount] = createSignal(0);
return (
<CounterContext.Provider value={{
count,
increment: () => setCount(c => c + 1)
}}>
{props.children}
</CounterContext.Provider>
);
}
// Consumer hook
export function useCounter() {
const ctx = useContext(CounterContext);
if (!ctx) throw new Error("useCounter must be used within CounterProvider");
return ctx;
}
```
## Lifecycle
```tsx
import { onMount, onCleanup } from "solid-js";
function MyComponent() {
onMount(() => {
console.log("Mounted");
const handler = () => {};
window.addEventListener("resize", handler);
onCleanup(() => {
window.removeEventListener("resize", handler);
});
});
return <div>Content</div>;
}
```
## Refs
```tsx
let inputRef!: HTMLInputElement; // definite assignment: set via ref before use
<input ref={inputRef} />
<input ref={(el) => { /* el is the DOM element */ }} />
```
## Event Handling
```tsx
// Delegated events (camelCase) - common UI events are delegated to the document
<button onClick={handleClick}>Click</button>
<button onClick={(e) => handleClick(e)}>Click</button>
// Native events (on: prefix) - attached directly, not delegated
<input on:input={handleInput} />
<div on:scroll={handleScroll} />
```
## Common Patterns
### Conditional Classes
```tsx
import { clsx } from "clsx"; // or classList
<div class={clsx("base", { active: isActive() })} />
<div classList={{ active: isActive(), disabled: isDisabled() }} />
```
### Batch Updates
```tsx
import { batch } from "solid-js";
batch(() => {
setName("John");
setAge(30);
// Effects run once after batch completes
});
```
### Untrack
```tsx
import { untrack } from "solid-js";
createEffect(() => {
console.log(count()); // tracked
console.log(untrack(() => other())); // not tracked
});
```
## TypeScript
```tsx
import type { Component, ParentComponent, JSX } from "solid-js";
// Basic component
const Button: Component<{ label: string }> = (props) => (
<button>{props.label}</button>
);
// With children
const Layout: ParentComponent<{ title: string }> = (props) => (
<div>
<h1>{props.title}</h1>
{props.children}
</div>
);
// Event handler types
const handleClick: JSX.EventHandler<HTMLButtonElement, MouseEvent> = (e) => {
console.log(e.currentTarget);
};
```
## Project Setup
```bash
# Create new project
npm create solid@latest my-app
# With template
npx degit solidjs/templates/ts my-app
# SolidStart
npm create solid@latest my-app -- --template solidstart
```
**vite.config.ts:**
```ts
import { defineConfig } from "vite";
import solid from "vite-plugin-solid";
export default defineConfig({
plugins: [solid()]
});
```
## Anti-Patterns to Avoid
1. **Destructuring props** — Breaks reactivity
```tsx
// ❌ Bad
const { name } = props;
// ✅ Good
props.name
```
2. **Accessing signals outside tracking scope**
```tsx
// ❌ Won't update
console.log(count());
// ✅ Will update
createEffect(() => console.log(count()));
```
3. **Forgetting to call signal getters**
```tsx
// ❌ Passes the function
<div>{count}</div>
// ✅ Passes the value
<div>{count()}</div>
```
4. **Using array index as key** — Use `<For>` for reference-keyed lists, `<Index>` for index-keyed lists
5. **Side effects during render** — Use `createEffect` or `onMount` (both anti-patterns are sketched below)
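A minimal sketch of points 4 and 5, assuming an `items` signal and a `count` signal:
```tsx
import { createSignal, createEffect, For, Index } from "solid-js";
const [items] = createSignal(["a", "b", "c"]);
const [count] = createSignal(0);
// 4. <For> keys rows by reference (rows move with their items);
//    <Index> keys rows by position (rows stay, values update)
<For each={items()}>{(item) => <li>{item}</li>}</For>;
<Index each={items()}>{(item) => <li>{item()}</li>}</Index>;
// 5. ❌ A side effect in the component body runs once and never reacts:
// document.title = `Count: ${count()}`;
// ✅ Wrap it in an effect so it re-runs when count changes:
createEffect(() => {
  document.title = `Count: ${count()}`;
});
```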

View File

@@ -0,0 +1,777 @@
# SolidJS API Reference
Complete reference for all SolidJS primitives, utilities, and component APIs.
## Basic Reactivity
### createSignal
```tsx
import { createSignal } from "solid-js";
const [getter, setter] = createSignal<T>(initialValue, options?);
// Options
interface SignalOptions<T> {
equals?: false | ((prev: T, next: T) => boolean);
name?: string;
internal?: boolean;
}
```
**Examples:**
```tsx
const [count, setCount] = createSignal(0);
const [user, setUser] = createSignal<User | null>(null);
// Always update
const [data, setData] = createSignal(obj, { equals: false });
// Custom equality
const [items, setItems] = createSignal([], {
equals: (a, b) => a.length === b.length
});
// Setter forms
setCount(5); // Direct value
setCount(prev => prev + 1); // Functional update
```
### createEffect
```tsx
import { createEffect } from "solid-js";
createEffect<T>(fn: (prev: T) => T, initialValue?: T, options?);
// Options
interface EffectOptions {
name?: string;
}
```
**Examples:**
```tsx
// Basic
createEffect(() => {
console.log("Count:", count());
});
// With previous value
createEffect((prev) => {
console.log("Changed from", prev, "to", count());
return count();
}, count());
// With cleanup
createEffect(() => {
const handler = () => {};
window.addEventListener("resize", handler);
onCleanup(() => window.removeEventListener("resize", handler));
});
```
### createMemo
```tsx
import { createMemo } from "solid-js";
const getter = createMemo<T>(fn: (prev: T) => T, initialValue?: T, options?);
// Options
interface MemoOptions<T> {
equals?: false | ((prev: T, next: T) => boolean);
name?: string;
}
```
**Examples:**
```tsx
const doubled = createMemo(() => count() * 2);
const filtered = createMemo(() => items().filter(i => i.active));
// Previous value
const delta = createMemo((prev) => count() - prev, 0);
```
### createResource
```tsx
import { createResource } from "solid-js";
const [resource, { mutate, refetch }] = createResource(
source?, // Optional reactive source
fetcher, // (source, info) => Promise<T>
options?
);
// Resource properties
resource() // T | undefined
resource.loading // boolean
resource.error // any
resource.state // "unresolved" | "pending" | "ready" | "refreshing" | "errored"
resource.latest // T | undefined (last successful value)
// Options
interface ResourceOptions<T> {
initialValue?: T;
name?: string;
deferStream?: boolean;
ssrLoadFrom?: "initial" | "server";
storage?: (init: T) => [Accessor<T>, Setter<T>];
onHydrated?: (key, info: { value: T }) => void;
}
```
**Examples:**
```tsx
// Without source
const [users] = createResource(fetchUsers);
// With source
const [user] = createResource(userId, fetchUser);
// With options
const [data] = createResource(id, fetchData, {
initialValue: [],
deferStream: true
});
// Actions
mutate(newValue); // Update locally
refetch(); // Re-fetch
refetch(customInfo); // Pass to fetcher's info.refetching
```
## Stores
### createStore
```tsx
import { createStore } from "solid-js/store";
const [store, setStore] = createStore<T>(initialValue);
```
**Update patterns:**
```tsx
const [state, setState] = createStore({
user: { name: "John", age: 30 },
todos: [{ id: 1, text: "Learn Solid", done: false }]
});
// Path syntax
setState("user", "name", "Jane");
setState("user", "age", a => a + 1);
setState("todos", 0, "done", true);
// Array operations
setState("todos", t => [...t, newTodo]);
setState("todos", todos.length, newTodo);
// Multiple paths
setState("todos", { from: 0, to: 2 }, "done", true);
setState("todos", [0, 2, 4], "done", true);
setState("todos", i => i.done, "done", false);
// Object merge (shallow)
setState("user", { age: 31 }); // Keeps other properties
```
### produce
```tsx
import { produce } from "solid-js/store";
setState(produce(draft => {
draft.user.age++;
draft.todos.push({ id: 2, text: "New", done: false });
draft.todos[0].done = true;
}));
```
### reconcile
```tsx
import { reconcile } from "solid-js/store";
// Replace with diff (minimal updates)
setState("todos", reconcile(newTodosFromAPI));
// Options
reconcile(data, { key: "id", merge: true });
```
### unwrap
```tsx
import { unwrap } from "solid-js/store";
const raw = unwrap(store); // Non-reactive plain object
```
### createMutable
```tsx
import { createMutable } from "solid-js/store";
const state = createMutable({
count: 0,
user: { name: "John" }
});
// Direct mutation (like MobX)
state.count++;
state.user.name = "Jane";
```
### modifyMutable
```tsx
import { modifyMutable, reconcile, produce } from "solid-js/store";
modifyMutable(state, reconcile(newData));
modifyMutable(state, produce(s => { s.count++ }));
```
## Component APIs
### children
```tsx
import { children } from "solid-js";
const resolved = children(() => props.children);
// Access
resolved(); // JSX.Element | JSX.Element[]
resolved.toArray(); // Always array
```
### createContext / useContext
```tsx
import { createContext, useContext } from "solid-js";
const MyContext = createContext<T>(defaultValue?);
// Provider
<MyContext.Provider value={value}>
{children}
</MyContext.Provider>
// Consumer
const value = useContext(MyContext);
```
### createUniqueId
```tsx
import { createUniqueId } from "solid-js";
const id = createUniqueId(); // "0", "1", etc.
```
### lazy
```tsx
import { lazy } from "solid-js";
const LazyComponent = lazy(() => import("./Component"));
// Use with Suspense
<Suspense fallback={<Loading />}>
<LazyComponent />
</Suspense>
```
## Lifecycle
### onMount
```tsx
import { onMount } from "solid-js";
onMount(() => {
// Runs once after initial render
console.log("Mounted");
});
```
### onCleanup
```tsx
import { onCleanup } from "solid-js";
// In component
onCleanup(() => {
console.log("Cleaning up");
});
// In effect
createEffect(() => {
const sub = subscribe();
onCleanup(() => sub.unsubscribe());
});
```
## Reactive Utilities
### batch
```tsx
import { batch } from "solid-js";
batch(() => {
setA(1);
setB(2);
setC(3);
// Effects run once after batch
});
```
### untrack
```tsx
import { untrack } from "solid-js";
createEffect(() => {
console.log(a()); // Tracked
console.log(untrack(() => b())); // Not tracked
});
```
### on
```tsx
import { on } from "solid-js";
// Explicit dependencies
createEffect(on(count, (value, prev) => {
console.log("Count changed:", prev, "->", value);
}));
// Multiple dependencies
createEffect(on([a, b], ([a, b], [prevA, prevB]) => {
console.log("Changed");
}));
// Defer first run
createEffect(on(count, (v) => console.log(v), { defer: true }));
```
### mergeProps
```tsx
import { mergeProps } from "solid-js";
const merged = mergeProps(
{ size: "medium", color: "blue" }, // Defaults
props // Overrides
);
```
### splitProps
```tsx
import { splitProps } from "solid-js";
const [local, others] = splitProps(props, ["class", "onClick"]);
// local.class, local.onClick
// others contains everything else
const [a, b, rest] = splitProps(props, ["foo"], ["bar"]);
```
### createRoot
```tsx
import { createRoot } from "solid-js";
const dispose = createRoot((dispose) => {
const [count, setCount] = createSignal(0);
// Use signals...
return dispose;
});
// Later
dispose();
```
### getOwner / runWithOwner
```tsx
import { getOwner, runWithOwner } from "solid-js";
const owner = getOwner();
// Later, in async code
runWithOwner(owner, () => {
createEffect(() => {
// This effect has proper ownership
});
});
```
### mapArray
```tsx
import { mapArray } from "solid-js";
const mapped = mapArray(
() => items(),
(item, index) => ({ ...item, doubled: item.value * 2 })
);
```
### indexArray
```tsx
import { indexArray } from "solid-js";
const mapped = indexArray(
() => items(),
(item, index) => <div>{index}: {item().name}</div>
);
```
### observable
```tsx
import { observable } from "solid-js";
const obs = observable(signal);
obs.subscribe((value) => console.log(value));
```
### from
```tsx
import { from } from "solid-js";
// Convert observable/subscribable to signal
const signal = from(rxObservable);
const signal = from((set) => {
const unsub = subscribe(set);
return unsub;
});
```
### catchError
```tsx
import { catchError } from "solid-js";
catchError(
() => riskyOperation(),
(err) => console.error("Error:", err)
);
```
## Secondary Primitives
### createComputed
```tsx
import { createComputed } from "solid-js";
// Like createEffect but runs during render phase
createComputed(() => {
setDerived(source() * 2);
});
```
### createRenderEffect
```tsx
import { createRenderEffect } from "solid-js";
// Runs before paint (for DOM measurements)
createRenderEffect(() => {
const height = element.offsetHeight;
});
```
### createDeferred
```tsx
import { createDeferred } from "solid-js";
// Returns value after idle time
const deferred = createDeferred(() => expensiveComputation(), {
timeoutMs: 1000
});
```
### createReaction
```tsx
import { createReaction } from "solid-js";
const track = createReaction(() => {
console.log("Something changed");
});
track(() => count()); // Start tracking; callback fires once on the next change
```
### createSelector
```tsx
import { createSelector } from "solid-js";
const isSelected = createSelector(selectedId);
<For each={items()}>
{(item) => (
<div class={isSelected(item.id) ? "selected" : ""}>
{item.name}
</div>
)}
</For>
```
## Components
### Show
```tsx
<Show when={condition()} fallback={<Fallback />}>
<Content />
</Show>
// With callback (narrowed type)
<Show when={user()}>
{(user) => <div>{user().name}</div>}
</Show>
```
### For
```tsx
<For each={items()} fallback={<Empty />}>
{(item, index) => <div>{index()}: {item.name}</div>}
</For>
```
### Index
```tsx
<Index each={items()} fallback={<Empty />}>
{(item, index) => <input value={item().text} />}
</Index>
```
### Switch / Match
```tsx
<Switch fallback={<Default />}>
<Match when={state() === "loading"}>
<Loading />
</Match>
<Match when={state() === "error"}>
<Error />
</Match>
</Switch>
```
### Dynamic
```tsx
import { Dynamic } from "solid-js/web";
<Dynamic component={selected()} prop={value} />
<Dynamic component="div" class="dynamic">Content</Dynamic>
```
### Portal
```tsx
import { Portal } from "solid-js/web";
<Portal mount={document.body}>
<Modal />
</Portal>
```
### ErrorBoundary
```tsx
<ErrorBoundary fallback={(err, reset) => (
<div>
<p>Error: {err.message}</p>
<button onClick={reset}>Retry</button>
</div>
)}>
<Content />
</ErrorBoundary>
```
### Suspense
```tsx
<Suspense fallback={<Loading />}>
<AsyncContent />
</Suspense>
```
### SuspenseList
```tsx
<SuspenseList revealOrder="forwards" tail="collapsed">
<Suspense fallback={<Loading />}><Item1 /></Suspense>
<Suspense fallback={<Loading />}><Item2 /></Suspense>
<Suspense fallback={<Loading />}><Item3 /></Suspense>
</SuspenseList>
```
## Rendering
### render
```tsx
import { render } from "solid-js/web";
const dispose = render(() => <App />, document.getElementById("root")!);
// Cleanup
dispose();
```
### hydrate
```tsx
import { hydrate } from "solid-js/web";
hydrate(() => <App />, document.getElementById("root")!);
```
### renderToString
```tsx
import { renderToString } from "solid-js/web";
const html = renderToString(() => <App />);
```
### renderToStringAsync
```tsx
import { renderToStringAsync } from "solid-js/web";
const html = await renderToStringAsync(() => <App />);
```
### renderToStream
```tsx
import { renderToStream } from "solid-js/web";
const stream = renderToStream(() => <App />);
stream.pipe(res);
```
### isServer
```tsx
import { isServer } from "solid-js/web";
if (isServer) {
// Server-only code
}
```
## JSX Attributes
### ref
```tsx
let el!: HTMLDivElement;
<div ref={el} />
<div ref={(e) => console.log(e)} />
```
### classList
```tsx
<div classList={{ active: isActive(), disabled: isDisabled() }} />
```
### style
```tsx
<div style={{ color: "red", "font-size": "14px" }} />
<div style={`color: ${color()}`} />
```
### on:event (native)
```tsx
<div on:click={handleClick} />
<div on:scroll={handleScroll} />
```
### use:directive
```tsx
function clickOutside(el: HTMLElement, accessor: () => () => void) {
const handler = (e: MouseEvent) => {
if (!el.contains(e.target as Node)) accessor()();
};
document.addEventListener("click", handler);
onCleanup(() => document.removeEventListener("click", handler));
}
<div use:clickOutside={() => setOpen(false)} />
```
### prop:property
```tsx
<input prop:value={value()} /> // Set as property, not attribute
```
### attr:attribute
```tsx
<div attr:data-custom={value()} /> // Force attribute
```
### bool:attribute
```tsx
<input bool:disabled={isDisabled()} />
```
### @once
```tsx
<div title={/*@once*/ staticValue} /> // Never updates
```
## Types
```tsx
import type {
Component,
ParentComponent,
FlowComponent,
VoidComponent,
JSX,
Accessor,
Setter,
Signal,
Resource,
Owner
} from "solid-js";
// Component types
const MyComponent: Component<Props> = (props) => <div />;
const Parent: ParentComponent<Props> = (props) => <div>{props.children}</div>;
const Flow: FlowComponent<Props, (item: Item) => JSX.Element> = (props) => props.children(item); // children typed as a render function
const Void: VoidComponent<Props> = (props) => <input />;
// Event types
type Handler = JSX.EventHandler<HTMLButtonElement, MouseEvent>;
type ChangeHandler = JSX.ChangeEventHandler<HTMLInputElement>;
```

View File

@@ -0,0 +1,720 @@
# SolidJS Patterns & Best Practices
Common patterns, recipes, and best practices for SolidJS development.
## Component Patterns
### Controlled vs Uncontrolled Inputs
**Controlled:**
```tsx
function ControlledInput() {
const [value, setValue] = createSignal("");
return (
<input
value={value()}
onInput={(e) => setValue(e.currentTarget.value)}
/>
);
}
```
**Uncontrolled with ref:**
```tsx
function UncontrolledInput() {
let inputRef: HTMLInputElement;
const handleSubmit = () => {
console.log(inputRef.value);
};
return (
<>
<input ref={inputRef!} />
<button onClick={handleSubmit}>Submit</button>
</>
);
}
```
### Compound Components
```tsx
const Tabs = {
Root: (props: ParentProps<{ defaultTab?: string }>) => {
const [activeTab, setActiveTab] = createSignal(props.defaultTab ?? "");
return (
<TabsContext.Provider value={{ activeTab, setActiveTab }}>
<div class="tabs">{props.children}</div>
</TabsContext.Provider>
);
},
List: (props: ParentProps) => (
<div class="tabs-list" role="tablist">{props.children}</div>
),
Tab: (props: ParentProps<{ value: string }>) => {
const ctx = useTabsContext();
return (
<button
role="tab"
aria-selected={ctx.activeTab() === props.value}
onClick={() => ctx.setActiveTab(props.value)}
>
{props.children}
</button>
);
},
Panel: (props: ParentProps<{ value: string }>) => {
const ctx = useTabsContext();
return (
<Show when={ctx.activeTab() === props.value}>
<div role="tabpanel">{props.children}</div>
</Show>
);
}
};
// Usage
<Tabs.Root defaultTab="first">
<Tabs.List>
<Tabs.Tab value="first">First</Tabs.Tab>
<Tabs.Tab value="second">Second</Tabs.Tab>
</Tabs.List>
<Tabs.Panel value="first">First Content</Tabs.Panel>
<Tabs.Panel value="second">Second Content</Tabs.Panel>
</Tabs.Root>
```
### Render Props
```tsx
function MouseTracker(props: {
children: (pos: { x: number; y: number }) => JSX.Element;
}) {
const [pos, setPos] = createSignal({ x: 0, y: 0 });
onMount(() => {
const handler = (e: MouseEvent) => setPos({ x: e.clientX, y: e.clientY });
window.addEventListener("mousemove", handler);
onCleanup(() => window.removeEventListener("mousemove", handler));
});
return <>{props.children(pos())}</>;
}
// Usage
<MouseTracker>
{(pos) => <div>Mouse: {pos.x}, {pos.y}</div>}
</MouseTracker>
```
### Higher-Order Components
```tsx
function withAuth<P extends object>(Component: Component<P>) {
return (props: P) => {
const { user } = useAuth();
return (
<Show when={user()} fallback={<Redirect to="/login" />}>
<Component {...props} />
</Show>
);
};
}
const ProtectedDashboard = withAuth(Dashboard);
```
### Polymorphic Components
```tsx
type PolymorphicProps<E extends keyof JSX.IntrinsicElements> = {
as?: E;
} & JSX.IntrinsicElements[E];
function Box<E extends keyof JSX.IntrinsicElements = "div">(
props: PolymorphicProps<E>
) {
const [local, others] = splitProps(props as PolymorphicProps<"div">, ["as"]);
return <Dynamic component={local.as || "div"} {...others} />;
}
// Usage
<Box>Default div</Box>
<Box as="section">Section element</Box>
<Box as="button" onClick={handleClick}>Button</Box>
```
## State Patterns
### Derived State with Multiple Sources
```tsx
function SearchResults() {
const [query, setQuery] = createSignal("");
const [filters, setFilters] = createSignal({ category: "all" });
const results = createMemo(() => {
const q = query().toLowerCase();
const f = filters();
return allItems()
.filter(item => item.name.toLowerCase().includes(q))
.filter(item => f.category === "all" || item.category === f.category);
});
return <For each={results()}>{item => <Item item={item} />}</For>;
}
```
### State Machine Pattern
```tsx
type State = "idle" | "loading" | "success" | "error";
type Event = { type: "FETCH" } | { type: "SUCCESS"; data: any } | { type: "ERROR"; error: Error };
function createMachine(initial: State) {
const [state, setState] = createSignal<State>(initial);
const [data, setData] = createSignal<any>(null);
const [error, setError] = createSignal<Error | null>(null);
const send = (event: Event) => {
const current = state();
switch (current) {
case "idle":
if (event.type === "FETCH") setState("loading");
break;
case "loading":
if (event.type === "SUCCESS") {
setData(event.data);
setState("success");
} else if (event.type === "ERROR") {
setError(event.error);
setState("error");
}
break;
}
};
return { state, data, error, send };
}
```
### Optimistic Updates
```tsx
const [todos, setTodos] = createStore<Todo[]>([]);
async function deleteTodo(id: string) {
const original = [...unwrap(todos)];
// Optimistic remove
setTodos(todos => todos.filter(t => t.id !== id));
try {
await api.deleteTodo(id);
} catch {
// Rollback on error
setTodos(reconcile(original));
}
}
```
### Undo/Redo
```tsx
function createHistory<T>(initial: T) {
const [past, setPast] = createSignal<T[]>([]);
const [present, setPresent] = createSignal<T>(initial);
const [future, setFuture] = createSignal<T[]>([]);
const canUndo = () => past().length > 0;
const canRedo = () => future().length > 0;
const set = (value: T | ((prev: T) => T)) => {
const newValue = typeof value === "function"
? (value as (prev: T) => T)(present())
: value;
setPast(p => [...p, present()]);
setPresent(newValue);
setFuture([]);
};
const undo = () => {
if (!canUndo()) return;
const previous = past()[past().length - 1];
setPast(p => p.slice(0, -1));
setFuture(f => [present(), ...f]);
setPresent(previous);
};
const redo = () => {
if (!canRedo()) return;
const next = future()[0];
setPast(p => [...p, present()]);
setFuture(f => f.slice(1));
setPresent(next);
};
return { value: present, set, undo, redo, canUndo, canRedo };
}
```
## Custom Hooks/Primitives
### useLocalStorage
```tsx
function createLocalStorage<T>(key: string, initialValue: T) {
const stored = localStorage.getItem(key);
const initial = stored ? JSON.parse(stored) : initialValue;
const [value, setValue] = createSignal<T>(initial);
createEffect(() => {
localStorage.setItem(key, JSON.stringify(value()));
});
return [value, setValue] as const;
}
```
### useDebounce
```tsx
function createDebounce<T>(source: () => T, delay: number) {
const [debounced, setDebounced] = createSignal<T>(source());
createEffect(() => {
const value = source();
const timer = setTimeout(() => setDebounced(() => value), delay);
onCleanup(() => clearTimeout(timer));
});
return debounced;
}
// Usage
const debouncedQuery = createDebounce(query, 300);
```
### useThrottle
```tsx
function createThrottle<T>(source: () => T, delay: number) {
const [throttled, setThrottled] = createSignal<T>(source());
let lastRun = 0;
createEffect(() => {
const value = source();
const now = Date.now();
if (now - lastRun >= delay) {
lastRun = now;
setThrottled(() => value);
} else {
const timer = setTimeout(() => {
lastRun = Date.now();
setThrottled(() => value);
}, delay - (now - lastRun));
onCleanup(() => clearTimeout(timer));
}
});
return throttled;
}
```
### useMediaQuery
```tsx
function createMediaQuery(query: string) {
const mql = window.matchMedia(query);
const [matches, setMatches] = createSignal(mql.matches);
onMount(() => {
const handler = (e: MediaQueryListEvent) => setMatches(e.matches);
mql.addEventListener("change", handler);
onCleanup(() => mql.removeEventListener("change", handler));
});
return matches;
}
// Usage
const isMobile = createMediaQuery("(max-width: 768px)");
```
### useClickOutside
```tsx
function createClickOutside(
ref: () => HTMLElement | undefined,
callback: () => void
) {
onMount(() => {
const handler = (e: MouseEvent) => {
const el = ref();
if (el && !el.contains(e.target as Node)) {
callback();
}
};
document.addEventListener("click", handler);
onCleanup(() => document.removeEventListener("click", handler));
});
}
// Usage
let dropdownRef: HTMLDivElement;
createClickOutside(() => dropdownRef, () => setOpen(false));
```
### useIntersectionObserver
```tsx
function createIntersectionObserver(
ref: () => HTMLElement | undefined,
options?: IntersectionObserverInit
) {
const [isIntersecting, setIsIntersecting] = createSignal(false);
onMount(() => {
const el = ref();
if (!el) return;
const observer = new IntersectionObserver(([entry]) => {
setIsIntersecting(entry.isIntersecting);
}, options);
observer.observe(el);
onCleanup(() => observer.disconnect());
});
return isIntersecting;
}
```
## Form Patterns
### Form Validation
```tsx
function createForm<T extends Record<string, any>>(initial: T) {
const [values, setValues] = createStore<T>(initial);
const [errors, setErrors] = createStore<Partial<Record<keyof T, string>>>({});
const [touched, setTouched] = createStore<Partial<Record<keyof T, boolean>>>({});
const handleChange = (field: keyof T) => (e: Event) => {
const target = e.target as HTMLInputElement;
setValues(field as any, target.value as any);
};
const handleBlur = (field: keyof T) => () => {
setTouched(field as any, true);
};
const validate = (validators: Partial<Record<keyof T, (v: any) => string | undefined>>) => {
let isValid = true;
for (const [field, validator] of Object.entries(validators)) {
if (validator) {
const error = validator(values[field as keyof T]);
setErrors(field as any, error as any);
if (error) isValid = false;
}
}
return isValid;
};
return { values, errors, touched, handleChange, handleBlur, validate, setValues };
}
// Usage
const form = createForm({ email: "", password: "" });
<input
value={form.values.email}
onInput={form.handleChange("email")}
onBlur={form.handleBlur("email")}
/>
<Show when={form.touched.email && form.errors.email}>
<span class="error">{form.errors.email}</span>
</Show>
```
### Field Array
```tsx
function createFieldArray<T>(initial: T[] = []) {
const [fields, setFields] = createStore<T[]>(initial);
const append = (value: T) => setFields(f => [...f, value]);
const remove = (index: number) => setFields(f => f.filter((_, i) => i !== index));
const update = (index: number, value: Partial<T>) => setFields(index, v => ({ ...v, ...value }));
const move = (from: number, to: number) => {
setFields(produce(f => {
const [item] = f.splice(from, 1);
f.splice(to, 0, item);
}));
};
return { fields, append, remove, update, move };
}
```
## Performance Patterns
### Virtualized List
```tsx
function VirtualList<T>(props: {
items: T[];
itemHeight: number;
height: number;
renderItem: (item: T, index: number) => JSX.Element;
}) {
const [scrollTop, setScrollTop] = createSignal(0);
const startIndex = createMemo(() =>
Math.floor(scrollTop() / props.itemHeight)
);
const visibleCount = createMemo(() =>
Math.ceil(props.height / props.itemHeight) + 1
);
const visibleItems = createMemo(() =>
props.items.slice(startIndex(), startIndex() + visibleCount())
);
return (
<div
style={{ height: `${props.height}px`, overflow: "auto" }}
onScroll={(e) => setScrollTop(e.currentTarget.scrollTop)}
>
<div style={{ height: `${props.items.length * props.itemHeight}px`, position: "relative" }}>
<For each={visibleItems()}>
{(item, i) => (
<div style={{
position: "absolute",
top: `${(startIndex() + i()) * props.itemHeight}px`,
height: `${props.itemHeight}px`
}}>
{props.renderItem(item, startIndex() + i())}
</div>
)}
</For>
</div>
</div>
);
}
```
### Lazy Loading with Intersection Observer
```tsx
function LazyLoad(props: ParentProps<{ placeholder?: JSX.Element }>) {
let ref: HTMLDivElement;
const [isVisible, setIsVisible] = createSignal(false);
onMount(() => {
const observer = new IntersectionObserver(
([entry]) => {
if (entry.isIntersecting) {
setIsVisible(true);
observer.disconnect();
}
},
{ rootMargin: "100px" }
);
observer.observe(ref);
onCleanup(() => observer.disconnect());
});
return (
<div ref={ref!}>
<Show when={isVisible()} fallback={props.placeholder}>
{props.children}
</Show>
</div>
);
}
```
### Memoized Component
```tsx
// Solid components run once, so there is no React-style re-render to avoid;
// <For> keys rows by reference and only creates/disposes changed items
function MemoizedExpensiveList(props: { items: Item[] }) {
return (
<For each={props.items}>
{(item) => <ExpensiveItem item={item} />}
</For>
);
}
```
## Testing Patterns
### Component Testing
```tsx
import { render, fireEvent, screen } from "@solidjs/testing-library";
test("Counter increments", async () => {
render(() => <Counter />);
const button = screen.getByRole("button", { name: /increment/i });
expect(screen.getByText("Count: 0")).toBeInTheDocument();
fireEvent.click(button);
expect(screen.getByText("Count: 1")).toBeInTheDocument();
});
```
### Testing with Context
```tsx
function renderWithContext(component: () => JSX.Element) {
return render(() => (
<ThemeProvider>
<AuthProvider>
{component()}
</AuthProvider>
</ThemeProvider>
));
}
test("Dashboard shows user", () => {
renderWithContext(() => <Dashboard />);
// ...
});
```
### Testing Async Components
```tsx
import { render, waitFor, screen } from "@solidjs/testing-library";
test("Loads user data", async () => {
render(() => <UserProfile userId="123" />);
expect(screen.getByText(/loading/i)).toBeInTheDocument();
await waitFor(() => {
expect(screen.getByText("John Doe")).toBeInTheDocument();
});
});
```
## Error Handling Patterns
### Global Error Handler
```tsx
function App() {
return (
<ErrorBoundary
fallback={(err, reset) => (
<ErrorPage error={err} onRetry={reset} />
)}
>
<Suspense fallback={<AppLoader />}>
<Router>
{/* Routes */}
</Router>
</Suspense>
</ErrorBoundary>
);
}
```
### Async Error Handling
```tsx
function DataComponent() {
const [data, { refetch }] = createResource(fetchData);
return (
<Switch>
<Match when={data.loading}>
<Loading />
</Match>
<Match when={data.error}>
<Error error={data.error} onRetry={() => refetch()} />
</Match>
<Match when={data()}>
{(data) => <Content data={data()} />}
</Match>
</Switch>
);
}
```
## Accessibility Patterns
### Focus Management
```tsx
function Modal(props: ParentProps<{ isOpen: boolean; onClose: () => void }>) {
let dialogRef: HTMLDivElement;
let previousFocus: HTMLElement | null = null;
createEffect(() => {
if (props.isOpen) {
previousFocus = document.activeElement as HTMLElement;
dialogRef.focus();
} else if (previousFocus) {
previousFocus.focus();
}
});
return (
<Show when={props.isOpen}>
<Portal>
<div
ref={dialogRef!}
role="dialog"
aria-modal="true"
tabIndex={-1}
onKeyDown={(e) => e.key === "Escape" && props.onClose()}
>
{props.children}
</div>
</Portal>
</Show>
);
}
```
### Live Regions
```tsx
function Notifications() {
const [message, setMessage] = createSignal("");
return (
<div
role="status"
aria-live="polite"
aria-atomic="true"
class="sr-only"
>
{message()}
</div>
);
}
```

View File

@@ -0,0 +1,223 @@
---
name: spec-planner
description: Dialogue-driven spec development through skeptical questioning and iterative refinement. Triggers: "spec this out", feature planning, architecture decisions, "is this worth it?" questions, RFC/design doc creation, work scoping. Invoke Librarian for unfamiliar tech/frameworks/APIs.
---
# Spec Planner
Produce implementation-ready specs through rigorous dialogue and honest trade-off analysis.
## Core Philosophy
- **Dialogue over deliverables** - Plans emerge from discussion, not assumption
- **Skeptical by default** - Requirements are incomplete until proven otherwise
- **Second-order thinking** - Consider downstream effects and maintenance burden
## Workflow Phases
```
CLARIFY --[user responds]--> DISCOVER --[done]--> DRAFT --[complete]--> REFINE --[approved]--> DONE
| | | |
+--[still ambiguous]--<------+-------------------+----[gaps found]------+
```
**State phase at end of every response:**
```
---
Phase: CLARIFY | Waiting for: answers to questions 1-4
```
---
## Phase 1: CLARIFY (Mandatory)
**Hard rule:** No spec until user has responded to at least one round of questions.
1. **STOP.** Do not proceed to planning.
2. Identify gaps in: scope, motivation, constraints, edge cases, success criteria
3. Ask 3-5 pointed questions that would change the approach. USE YOUR QUESTION TOOL.
4. **Wait for responses**
**IMPORTANT: Always use the `question` tool to ask clarifying questions.** Do NOT output questions as freeform text. The question tool provides structured options and better UX. Example:
```
question({
questions: [{
header: "Scope",
question: "Which subsystems need detailed specs?",
options: [
{ label: "VCS layer", description: "jj-lib + gix unified interface" },
{ label: "Review workflow", description: "GitHub PR-style local review" },
{ label: "Event system", description: "pub/sub + persistence" }
],
multiple: true
}]
})
```
| Category | Example |
|----------|---------|
| Scope | "Share where? Social media? Direct link? Embed?" |
| Motivation | "What user problem are we actually solving?" |
| Constraints | "Does this need to work with existing privacy settings?" |
| Success | "How will we know this worked?" |
**Escape prevention:** Even if the request seems complete, ask 2+ clarifying questions. Skip only for mechanical requests (e.g., "rename X to Y").
**Anti-patterns to resist:**
- "Just give me a rough plan" -> Still needs scope questions
- "I'll figure out the details" -> Those details ARE the spec
- Very long initial request -> Longer != clearer; probe assumptions
**Transition:** User answered AND no new ambiguities -> DISCOVER
---
## Phase 2: DISCOVER
**After clarification, before planning:** Understand existing system.
Launch explore subagents in parallel:
```
Task(
subagent_type="explore",
description="Explore [area name]",
prompt="Explore [area]. Return: key files, abstractions, patterns, integration points."
)
```
| Target | What to Find |
|--------|--------------|
| Affected area | Files, modules that will change |
| Existing patterns | How similar features are implemented |
| Integration points | APIs, events, data flows touched |
**If unfamiliar tech involved**, invoke Librarian:
```
Task(
subagent_type="librarian",
description="Research [tech name]",
prompt="Research [tech] for [use case]. Return: recommended approach, gotchas, production patterns."
)
```
**Output:** Brief architecture summary before proposing solutions.
**Transition:** System context understood -> DRAFT
---
## Phase 3: DRAFT
Apply planning framework from [decision-frameworks.md](./references/decision-frameworks.md):
1. **Problem Definition** - What are we solving? For whom? Cost of not solving?
2. **Constraints Inventory** - Time, system, knowledge, scope ceiling
3. **Solution Space** - Simplest -> Balanced -> Full engineering solution
4. **Trade-off Analysis** - See table format in references
5. **Recommendation** - One clear choice with reasoning
Use appropriate template from [templates.md](./references/templates.md):
- **Quick Decision** - Scoped technical choices
- **Feature Plan** - New feature development
- **ADR** - Architecture decisions
- **RFC** - Larger proposals
**Transition:** Spec produced -> REFINE
---
## Phase 4: REFINE
Run completeness check:
| Criterion | Check |
|-----------|-------|
| Scope bounded | Every deliverable listed; non-goals explicit |
| Ambiguity resolved | No "TBD" or "to be determined" |
| Acceptance testable | Each criterion pass/fail verifiable |
| Dependencies ordered | Clear what blocks what |
| Types defined | Data shapes specified (not "some object") |
| Effort estimated | Each deliverable has S/M/L/XL |
| Risks identified | At least 2 risks with mitigations |
| Open questions | Resolved OR assigned owner |
**If any criterion fails:** Return to dialogue. "To finalize, I need clarity on: [failing criteria]."
**Transition:** All criteria pass + user approval -> DONE
---
## Phase 5: DONE
### Final Output
```
=== Spec Complete ===
Phase: DONE
Type: <feature plan | architecture decision | refactoring | strategy>
Effort: <S/M/L/XL>
Status: Ready for task breakdown
Discovery:
- Explored: <areas investigated>
- Key findings: <relevant architecture/patterns>
Recommendation:
<brief summary>
Key Trade-offs:
- <what we're choosing vs alternatives>
Deliverables (Ordered):
1. [D1] (effort) - depends on: -
2. [D2] (effort) - depends on: D1
Open Questions:
- [ ] <if any remain> -> Owner: [who]
```
### Write Spec to File (MANDATORY)
1. Derive filename from feature/decision name (kebab-case)
2. Write spec to `specs/<filename>.md`
3. Confirm: `Spec written to: specs/<filename>.md`
---
## Effort Estimates
| Size | Time | Scope |
|------|------|-------|
| **S** | <1 hour | Single file, isolated change |
| **M** | 1-3 hours | Few files, contained feature |
| **L** | 1-2 days | Cross-cutting, multiple components |
| **XL** | >2 days | Major refactor, new system |
## Scope Control
When scope creeps:
1. **Name it:** "That's scope expansion. Let's finish X first."
2. **Park it:** "Added to Open Questions. Revisit after core spec stable."
3. **Cost it:** "Adding Y changes effort from M to XL. Worth it?"
**Hard rule:** If scope changes, re-estimate and flag explicitly.
## References
| File | When to Read |
|------|--------------|
| [templates.md](./references/templates.md) | Output formats for plans, ADRs, RFCs |
| [decision-frameworks.md](./references/decision-frameworks.md) | Complex multi-factor decisions |
| [estimation.md](./references/estimation.md) | Breaking down work, avoiding underestimation |
| [technical-debt.md](./references/technical-debt.md) | Evaluating refactoring ROI |
## Integration
| Agent | When to Invoke |
|-------|----------------|
| **Librarian** | Research unfamiliar tech, APIs, frameworks |
| **Oracle** | Deep architectural analysis, complex debugging |

View File

@@ -0,0 +1,75 @@
# Decision Frameworks
## Reversibility Matrix
| Decision Type | Approach |
|---------------|----------|
| **Two-way door** (easily reversed) | Decide fast, learn from outcome |
| **One-way door** (hard to reverse) | Invest time in analysis |
Most decisions are two-way doors. Don't over-analyze.
## Cost of Delay
```
Daily Cost = (Value Delivered / Time to Deliver) x Risk Factor
```
Use when prioritizing:
- High daily cost -> Do first
- Low daily cost -> Can wait
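A worked example, with assumed numbers:
```
Feature worth $30k of value, deliverable in 10 days, risk factor 0.8:
Daily Cost = (30,000 / 10) x 0.8 = $2,400 per day of delay
```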
## RICE Scoring
| Factor | Question | Scale |
|--------|----------|-------|
| **R**each | How many users affected? | # users/period |
| **I**mpact | How much per user? | 0.25, 0.5, 1, 2, 3 |
| **C**onfidence | How sure are we? | 20%, 50%, 80%, 100% |
| **E**ffort | Person-weeks | 0.5, 1, 2, 4, 8+ |
```
RICE = (Reach x Impact x Confidence) / Effort
```
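For example, with assumed inputs:
```
Reach 2,000 users/quarter, Impact 1, Confidence 80%, Effort 2 person-weeks:
RICE = (2000 x 1 x 0.8) / 2 = 800
```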
## Technical Decision Checklist
Before committing to a technical approach:
- [ ] Have we talked to someone who's done this before?
- [ ] What's the simplest version that teaches us something?
- [ ] What would make us reverse this decision?
- [ ] Who maintains this in 6 months?
- [ ] What's our rollback plan?
## When to Build vs Buy vs Adopt
| Signal | Build | Buy | Adopt (OSS) |
|--------|-------|-----|-------------|
| Core differentiator | Yes | No | Maybe |
| Commodity problem | No | Yes | Yes |
| Tight integration needed | Yes | Maybe | Maybe |
| Team has expertise | Yes | N/A | Yes |
| Time pressure | No | Yes | Maybe |
| Long-term control needed | Yes | No | Maybe |
## Decomposition Strategies
### Vertical Slicing
Cut features into thin end-to-end slices that deliver value:
```
Bad: "Build database layer" -> "Build API" -> "Build UI"
Good: "User can see their profile" -> "User can edit name" -> "User can upload avatar"
```
### Risk-First Ordering
1. Identify highest-risk unknowns
2. Build spike/proof-of-concept for those first
3. Then build around proven foundation
### Dependency Mapping
```
[Feature A] -depends on-> [Feature B] -depends on-> [Feature C]
                                                        ^
                                                    Start here
```

View File

@@ -0,0 +1,69 @@
# Estimation
## Why Estimates Fail
| Cause | Mitigation |
|-------|------------|
| Optimism bias | Use historical data, not gut |
| Missing scope | List "obvious" tasks explicitly |
| Integration blindness | Add 20-30% for glue code |
| Unknown unknowns | Add buffer based on novelty |
| Interruptions | Assume 60% focused time |
## Estimation Techniques
### Three-Point Estimation
```
Expected = (Optimistic + 4xMostLikely + Pessimistic) / 6
```
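For example:
```
Optimistic 2h, most likely 4h, pessimistic 12h:
Expected = (2 + 4x4 + 12) / 6 = 5 hours
```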
### Relative Sizing
Compare to known references:
- "This is about twice as complex as Feature X"
- Use Fibonacci (1, 2, 3, 5, 8, 13) to reflect uncertainty
### Task Decomposition
1. Break into tasks <=4 hours
2. If can't decompose, spike first
3. Sum tasks + 20% integration buffer
## Effort Multipliers
| Factor | Multiplier |
|--------|------------|
| New technology | 1.5-2x |
| Unclear requirements | 1.3-1.5x |
| External dependencies (waiting on others) | 1.2-1.5x |
| Legacy/undocumented code | 1.3-2x |
| Production deployment | 1.2x |
| First time doing X | 2-3x |
| Context switching (other priorities) | 1.3x |
| Yak shaving risk (unknown unknowns) | 1.5x |
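Multipliers compound. For example:
```
2-day base estimate x 1.5 (new technology) x 1.3 (unclear requirements) = 3.9 days
```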
## Hidden Work Checklist
Always include time for:
- [ ] Code review (20% of dev time)
- [ ] Testing (30-50% of dev time)
- [ ] Documentation (10% of dev time)
- [ ] Deployment/config (varies)
- [ ] Bug fixes from testing (20% buffer)
- [ ] Interruptions / competing priorities
## When to Re-Estimate
Re-estimate when:
- Scope changes materially
- Major unknown becomes known
- Actual progress diverges >30% from estimate
## Communicating Estimates
**Good:** "1-2 weeks, confidence 70%-main risk is the third-party API integration"
**Bad:** "About 2 weeks"
Always include:
1. Range, not point estimate
2. Confidence level
3. Key assumptions/risks

View File

@@ -0,0 +1,94 @@
# Technical Debt
## Debt Categories
| Type | Example | Urgency |
|------|---------|---------|
| **Deliberate-Prudent** | "Ship now, refactor next sprint" | Planned paydown |
| **Deliberate-Reckless** | "We don't have time for tests" | Accumulating risk |
| **Inadvertent-Prudent** | "Now we know a better way" | Normal learning |
| **Inadvertent-Reckless** | "What's layering?" | Learning curve |
## When to Pay Down Debt
**Pay now when:**
- Debt is in path of upcoming work
- Cognitive load slowing every change
- Bugs recurring in same area
- Onboarding time increasing
**Defer when:**
- Area is stable, rarely touched
- Bigger refactor coming anyway
- Time constrained on priority work
- Code may be deprecated
## ROI Framework
```
Debt ROI = (Time Saved Per Touch x Touches/Month x Months) / Paydown Cost
```
| ROI | Action |
|-----|--------|
| >3x | Prioritize immediately |
| 1-3x | Plan into upcoming work |
| <1x | Accept or isolate |
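A worked example, with assumed numbers:
```
Saves 0.5h per touch, 8 touches/month, 12-month horizon, 20h paydown cost:
Debt ROI = (0.5 x 8 x 12) / 20 = 2.4x -> plan into upcoming work
```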
## Refactoring Strategies
### Strangler Fig
1. Build new alongside old
2. Redirect traffic incrementally
3. Remove old when empty
Best for: Large system replacements
### Branch by Abstraction
1. Create abstraction over old code
2. Implement new behind abstraction
3. Switch implementations
4. Remove old
Best for: Library/dependency swaps
### Parallel Change (Expand-Contract)
1. Add new behavior alongside old
2. Migrate callers incrementally
3. Remove old behavior
Best for: API changes
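A minimal TypeScript sketch of the expand phase (names are hypothetical):
```typescript
// Expand: accept the new options object alongside the old positional argument
function createUser(nameOrOpts: string | { name: string; email?: string }) {
  // Normalize so one implementation serves both old and new callers
  const opts = typeof nameOrOpts === "string" ? { name: nameOrOpts } : nameOrOpts;
  return { name: opts.name, email: opts.email ?? null };
}
createUser("Ada");                            // legacy caller keeps working
createUser({ name: "Ada", email: "a@b.io" }); // migrated caller
// Contract: once all callers migrate, drop the string overload
```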
### Mikado Method
1. Try the change
2. When it breaks, note prerequisites
3. Revert
4. Recursively fix prerequisites
5. Apply original change
Best for: Untangling dependencies
## Tracking Debt
Minimum viable debt tracking:
```markdown
## Tech Debt Log
| ID | Description | Impact | Area | Added |
|----|-------------|--------|------|-------|
| TD-1 | No caching layer | Slow queries | /api | 2024-01 |
```
Review monthly. Prune resolved items.
## Communicating Debt to Stakeholders
**Frame as investment, not cleanup:**
- "This will reduce bug rate by ~30%"
- "Deployment time goes from 2 hours to 20 minutes"
- "New features in this area take 2x longer than they should"
**Avoid:**
- "The code is messy"
- "We need to refactor"
- Technical jargon without business impact

View File

@@ -0,0 +1,161 @@
# Output Templates
## Quick Decision
For scoped technical choices with clear options.
```
## Decision: [choice]
**Why:** [1-2 sentences]
**Trade-off:** [what we're giving up]
**Revisit if:** [trigger conditions]
```
## Feature Plan (Implementation-Ready)
For new feature development. **Complete enough for task decomposition.**
```
## Feature: [name]
### Problem Statement
**Who:** [specific user/persona]
**What:** [the problem they face]
**Why it matters:** [business/user impact]
**Evidence:** [how we know this is real]
### Proposed Solution
[High-level approach in 2-3 paragraphs]
### Scope & Deliverables
| Deliverable | Effort | Depends On |
|-------------|--------|------------|
| [D1] | S/M/L | - |
| [D2] | S/M/L | D1 |
### Non-Goals (Explicit Exclusions)
- [Thing people might assume is in scope but isn't]
### Data Model
[Types, schemas, state shapes that will exist or change]
### API/Interface Contract
[Public interfaces between components: input/output/errors]
### Acceptance Criteria
- [ ] [Testable statement 1]
- [ ] [Testable statement 2]
### Test Strategy
| Layer | What | How |
|-------|------|-----|
| Unit | [specific logic] | [approach] |
| Integration | [boundaries] | [approach] |
### Risks & Mitigations
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
### Trade-offs Made
| Chose | Over | Because |
|-------|------|---------|
### Open Questions
- [ ] [Question] -> Owner: [who decides]
### Success Metrics
- [Measurable outcome]
```
## Architecture Decision Record (ADR)
For significant architecture decisions that need documentation.
```
## ADR: [title]
**Status:** Proposed | Accepted | Deprecated | Superseded
**Date:** [date]
### Context
[What forces are at play]
### Decision
[What we're doing]
### Consequences
- [+] [Benefit]
- [-] [Drawback]
- [~] [Neutral observation]
```
## RFC (Request for Comments)
For larger proposals needing broader review.
```
## RFC: [title]
**Author:** [name]
**Status:** Draft | In Review | Accepted | Rejected
**Created:** [date]
### Summary
[1-2 paragraph overview]
### Motivation
[Why are we doing this?]
### Detailed Design
[Technical details]
### Alternatives Considered
| Option | Pros | Cons | Why Not |
|--------|------|------|---------|
### Migration/Rollout
[How we get from here to there]
### Open Questions
- [ ] [Question]
```
## Handoff Artifact
When spec is complete, produce final summary for task decomposition:
```
# [Feature Name] - Implementation Spec
**Status:** Ready for task breakdown
**Effort:** [total estimate]
**Approved by:** [human who approved]
**Date:** [date]
## Deliverables (Ordered)
1. **[D1]** (S) - [one-line description]
- Depends on: -
- Files likely touched: [paths]
2. **[D2]** (M) - [one-line description]
- Depends on: D1
- Files likely touched: [paths]
## Key Technical Decisions
- [Decision]: [choice] because [reason]
## Data Model
[Copy from spec]
## Acceptance Criteria
1. [Criterion 1]
2. [Criterion 2]
## Open Items (Non-Blocking)
- [Item] -> Owner: [who]
---
*Spec approved for task decomposition.*
```

File diff suppressed because it is too large

View File

@@ -0,0 +1,136 @@
---
name: vercel-react-best-practices
description: React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.js code to ensure optimal performance patterns. Triggers on tasks involving React components, Next.js pages, data fetching, bundle optimization, or performance improvements.
license: MIT
metadata:
author: vercel
version: "1.0.0"
---
# Vercel React Best Practices
Comprehensive performance optimization guide for React and Next.js applications, maintained by Vercel. Contains 57 rules across 8 categories, prioritized by impact to guide automated refactoring and code generation.
## When to Apply
Reference these guidelines when:
- Writing new React components or Next.js pages
- Implementing data fetching (client or server-side)
- Reviewing code for performance issues
- Refactoring existing React/Next.js code
- Optimizing bundle size or load times
## Rule Categories by Priority
| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Eliminating Waterfalls | CRITICAL | `async-` |
| 2 | Bundle Size Optimization | CRITICAL | `bundle-` |
| 3 | Server-Side Performance | HIGH | `server-` |
| 4 | Client-Side Data Fetching | MEDIUM-HIGH | `client-` |
| 5 | Re-render Optimization | MEDIUM | `rerender-` |
| 6 | Rendering Performance | MEDIUM | `rendering-` |
| 7 | JavaScript Performance | LOW-MEDIUM | `js-` |
| 8 | Advanced Patterns | LOW | `advanced-` |
## Quick Reference
### 1. Eliminating Waterfalls (CRITICAL)
- `async-defer-await` - Move await into branches where actually used
- `async-parallel` - Use Promise.all() for independent operations
- `async-dependencies` - Use better-all for partial dependencies
- `async-api-routes` - Start promises early, await late in API routes
- `async-suspense-boundaries` - Use Suspense to stream content
### 2. Bundle Size Optimization (CRITICAL)
- `bundle-barrel-imports` - Import directly, avoid barrel files
- `bundle-dynamic-imports` - Use next/dynamic for heavy components
- `bundle-defer-third-party` - Load analytics/logging after hydration
- `bundle-conditional` - Load modules only when feature is activated
- `bundle-preload` - Preload on hover/focus for perceived speed
### 3. Server-Side Performance (HIGH)
- `server-auth-actions` - Authenticate server actions like API routes
- `server-cache-react` - Use React.cache() for per-request deduplication
- `server-cache-lru` - Use LRU cache for cross-request caching
- `server-dedup-props` - Avoid duplicate serialization in RSC props
- `server-serialization` - Minimize data passed to client components
- `server-parallel-fetching` - Restructure components to parallelize fetches
- `server-after-nonblocking` - Use after() for non-blocking operations
### 4. Client-Side Data Fetching (MEDIUM-HIGH)
- `client-swr-dedup` - Use SWR for automatic request deduplication
- `client-event-listeners` - Deduplicate global event listeners
- `client-passive-event-listeners` - Use passive listeners for scroll
- `client-localstorage-schema` - Version and minimize localStorage data
### 5. Re-render Optimization (MEDIUM)
- `rerender-defer-reads` - Don't subscribe to state only used in callbacks
- `rerender-memo` - Extract expensive work into memoized components
- `rerender-memo-with-default-value` - Hoist default non-primitive props
- `rerender-dependencies` - Use primitive dependencies in effects
- `rerender-derived-state` - Subscribe to derived booleans, not raw values
- `rerender-derived-state-no-effect` - Derive state during render, not effects
- `rerender-functional-setstate` - Use functional setState for stable callbacks
- `rerender-lazy-state-init` - Pass function to useState for expensive values
- `rerender-simple-expression-in-memo` - Avoid memo for simple primitives
- `rerender-move-effect-to-event` - Put interaction logic in event handlers
- `rerender-transitions` - Use startTransition for non-urgent updates
- `rerender-use-ref-transient-values` - Use refs for transient frequent values
### 6. Rendering Performance (MEDIUM)
- `rendering-animate-svg-wrapper` - Animate div wrapper, not SVG element
- `rendering-content-visibility` - Use content-visibility for long lists
- `rendering-hoist-jsx` - Extract static JSX outside components
- `rendering-svg-precision` - Reduce SVG coordinate precision
- `rendering-hydration-no-flicker` - Use inline script for client-only data
- `rendering-hydration-suppress-warning` - Suppress expected mismatches
- `rendering-activity` - Use Activity component for show/hide
- `rendering-conditional-render` - Use ternary, not && for conditionals
- `rendering-usetransition-loading` - Prefer useTransition for loading state
### 7. JavaScript Performance (LOW-MEDIUM)
- `js-batch-dom-css` - Group CSS changes via classes or cssText
- `js-index-maps` - Build Map for repeated lookups
- `js-cache-property-access` - Cache object properties in loops
- `js-cache-function-results` - Cache function results in module-level Map
- `js-cache-storage` - Cache localStorage/sessionStorage reads
- `js-combine-iterations` - Combine multiple filter/map into one loop
- `js-length-check-first` - Check array length before expensive comparison
- `js-early-exit` - Return early from functions
- `js-hoist-regexp` - Hoist RegExp creation outside loops
- `js-min-max-loop` - Use loop for min/max instead of sort
- `js-set-map-lookups` - Use Set/Map for O(1) lookups
- `js-tosorted-immutable` - Use toSorted() for immutability
### 8. Advanced Patterns (LOW)
- `advanced-event-handler-refs` - Store event handlers in refs
- `advanced-init-once` - Initialize app once per app load
- `advanced-use-latest` - useLatest for stable callback refs
## How to Use
Read individual rule files for detailed explanations and code examples:
```
rules/async-parallel.md
rules/bundle-barrel-imports.md
```
Each rule file contains:
- Brief explanation of why it matters
- Incorrect code example with explanation
- Correct code example with explanation
- Additional context and references
## Full Compiled Document
For the complete guide with all rules expanded: `AGENTS.md`

View File

@@ -0,0 +1,46 @@
# Sections
This file defines all sections, their ordering, impact levels, and descriptions.
The section ID (in parentheses) is the filename prefix used to group rules.
---
## 1. Eliminating Waterfalls (async)
**Impact:** CRITICAL
**Description:** Waterfalls are the #1 performance killer. Each sequential await adds full network latency. Eliminating them yields the largest gains.
## 2. Bundle Size Optimization (bundle)
**Impact:** CRITICAL
**Description:** Reducing initial bundle size improves Time to Interactive and Largest Contentful Paint.
## 3. Server-Side Performance (server)
**Impact:** HIGH
**Description:** Optimizing server-side rendering and data fetching eliminates server-side waterfalls and reduces response times.
## 4. Client-Side Data Fetching (client)
**Impact:** MEDIUM-HIGH
**Description:** Automatic deduplication and efficient data fetching patterns reduce redundant network requests.
## 5. Re-render Optimization (rerender)
**Impact:** MEDIUM
**Description:** Reducing unnecessary re-renders minimizes wasted computation and improves UI responsiveness.
## 6. Rendering Performance (rendering)
**Impact:** MEDIUM
**Description:** Optimizing the rendering process reduces the work the browser needs to do.
## 7. JavaScript Performance (js)
**Impact:** LOW-MEDIUM
**Description:** Micro-optimizations for hot paths can add up to meaningful improvements.
## 8. Advanced Patterns (advanced)
**Impact:** LOW
**Description:** Advanced patterns for specific cases that require careful implementation.

View File

@@ -0,0 +1,28 @@
---
title: Rule Title Here
impact: MEDIUM
impactDescription: Optional description of impact (e.g., "20-50% improvement")
tags: tag1, tag2
---
## Rule Title Here
**Impact: MEDIUM (optional impact description)**
Brief explanation of the rule and why it matters. This should be clear and concise, explaining the performance implications.
**Incorrect (description of what's wrong):**
```typescript
// Bad code example here
const bad = example()
```
**Correct (description of what's right):**
```typescript
// Good code example here
const good = example()
```
Reference: [Link to documentation or resource](https://example.com)

View File

@@ -0,0 +1,55 @@
---
title: Store Event Handlers in Refs
impact: LOW
impactDescription: stable subscriptions
tags: advanced, hooks, refs, event-handlers, optimization
---
## Store Event Handlers in Refs
Store callbacks in refs when used in effects that shouldn't re-subscribe on callback changes.
**Incorrect (re-subscribes on every render):**
```tsx
function useWindowEvent(event: string, handler: (e) => void) {
useEffect(() => {
window.addEventListener(event, handler)
return () => window.removeEventListener(event, handler)
}, [event, handler])
}
```
**Correct (stable subscription):**
```tsx
function useWindowEvent(event: string, handler: (e) => void) {
const handlerRef = useRef(handler)
useEffect(() => {
handlerRef.current = handler
}, [handler])
useEffect(() => {
const listener = (e) => handlerRef.current(e)
window.addEventListener(event, listener)
return () => window.removeEventListener(event, listener)
}, [event])
}
```
**Alternative: use `useEffectEvent` if you're on latest React:**
```tsx
import { useEffectEvent } from 'react'
function useWindowEvent(event: string, handler: (e) => void) {
const onEvent = useEffectEvent(handler)
useEffect(() => {
window.addEventListener(event, onEvent)
return () => window.removeEventListener(event, onEvent)
}, [event])
}
```
`useEffectEvent` provides a cleaner API for the same pattern: it creates a stable function reference that always calls the latest version of the handler.

View File

@@ -0,0 +1,42 @@
---
title: Initialize App Once, Not Per Mount
impact: LOW-MEDIUM
impactDescription: avoids duplicate init in development
tags: initialization, useEffect, app-startup, side-effects
---
## Initialize App Once, Not Per Mount
Do not put app-wide initialization that must run once per app load inside a component's `useEffect` with an empty dependency array. Components can remount and effects will re-run. Use a module-level guard or top-level init in the entry module instead.
**Incorrect (runs twice in dev, re-runs on remount):**
```tsx
function Comp() {
useEffect(() => {
loadFromStorage()
checkAuthToken()
}, [])
// ...
}
```
**Correct (once per app load):**
```tsx
let didInit = false
function Comp() {
useEffect(() => {
if (didInit) return
didInit = true
loadFromStorage()
checkAuthToken()
}, [])
// ...
}
```
Reference: [Initializing the application](https://react.dev/learn/you-might-not-need-an-effect#initializing-the-application)

View File

@@ -0,0 +1,39 @@
---
title: useEffectEvent for Stable Callback Refs
impact: LOW
impactDescription: prevents effect re-runs
tags: advanced, hooks, useEffectEvent, refs, optimization
---
## useEffectEvent for Stable Callback Refs
Access latest values in callbacks without adding them to dependency arrays. Prevents effect re-runs while avoiding stale closures.
**Incorrect (effect re-runs on every callback change):**
```tsx
function SearchInput({ onSearch }: { onSearch: (q: string) => void }) {
const [query, setQuery] = useState('')
useEffect(() => {
const timeout = setTimeout(() => onSearch(query), 300)
return () => clearTimeout(timeout)
}, [query, onSearch])
}
```
**Correct (using React's useEffectEvent):**
```tsx
import { useEffectEvent } from 'react';
function SearchInput({ onSearch }: { onSearch: (q: string) => void }) {
const [query, setQuery] = useState('')
const onSearchEvent = useEffectEvent(onSearch)
useEffect(() => {
const timeout = setTimeout(() => onSearchEvent(query), 300)
return () => clearTimeout(timeout)
}, [query])
}
```

View File

@@ -0,0 +1,38 @@
---
title: Prevent Waterfall Chains in API Routes
impact: CRITICAL
impactDescription: 2-10× improvement
tags: api-routes, server-actions, waterfalls, parallelization
---
## Prevent Waterfall Chains in API Routes
In API routes and Server Actions, start independent operations immediately, even if you don't await them yet.
**Incorrect (config waits for auth, data waits for both):**
```typescript
export async function GET(request: Request) {
const session = await auth()
const config = await fetchConfig()
const data = await fetchData(session.user.id)
return Response.json({ data, config })
}
```
**Correct (auth and config start immediately):**
```typescript
export async function GET(request: Request) {
const sessionPromise = auth()
const configPromise = fetchConfig()
const session = await sessionPromise
const [config, data] = await Promise.all([
configPromise,
fetchData(session.user.id)
])
return Response.json({ data, config })
}
```
For operations with more complex dependency chains, use `better-all` to automatically maximize parallelism (see Dependency-Based Parallelization).


@@ -0,0 +1,80 @@
---
title: Defer Await Until Needed
impact: HIGH
impactDescription: avoids blocking unused code paths
tags: async, await, conditional, optimization
---
## Defer Await Until Needed
Move `await` operations into the branches where they're actually used to avoid blocking code paths that don't need them.
**Incorrect (blocks both branches):**
```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
  const userData = await fetchUserData(userId)
  if (skipProcessing) {
    // Returns immediately but still waited for userData
    return { skipped: true }
  }
  // Only this branch uses userData
  return processUserData(userData)
}
```
**Correct (only blocks when needed):**
```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
  if (skipProcessing) {
    // Returns immediately without waiting
    return { skipped: true }
  }
  // Fetch only when needed
  const userData = await fetchUserData(userId)
  return processUserData(userData)
}
```
**Another example (early return optimization):**
```typescript
// Incorrect: always fetches permissions
async function updateResource(resourceId: string, userId: string) {
  const permissions = await fetchPermissions(userId)
  const resource = await getResource(resourceId)
  if (!resource) {
    return { error: 'Not found' }
  }
  if (!permissions.canEdit) {
    return { error: 'Forbidden' }
  }
  return await updateResourceData(resource, permissions)
}

// Correct: fetches only when needed
async function updateResource(resourceId: string, userId: string) {
  const resource = await getResource(resourceId)
  if (!resource) {
    return { error: 'Not found' }
  }
  const permissions = await fetchPermissions(userId)
  if (!permissions.canEdit) {
    return { error: 'Forbidden' }
  }
  return await updateResourceData(resource, permissions)
}
```
This optimization is especially valuable when the skipped branch is frequently taken, or when the deferred operation is expensive.


@@ -0,0 +1,51 @@
---
title: Dependency-Based Parallelization
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, dependencies, better-all
---
## Dependency-Based Parallelization
For operations with partial dependencies, use `better-all` to maximize parallelism. It automatically starts each task at the earliest possible moment.
**Incorrect (profile waits for config unnecessarily):**
```typescript
const [user, config] = await Promise.all([
  fetchUser(),
  fetchConfig()
])
const profile = await fetchProfile(user.id)
```
**Correct (config and profile run in parallel):**
```typescript
import { all } from 'better-all'

const { user, config, profile } = await all({
  async user() { return fetchUser() },
  async config() { return fetchConfig() },
  async profile() {
    // this.$.user is the sibling task's promise; awaiting it
    // defers only profile, while config keeps running
    return fetchProfile((await this.$.user).id)
  }
})
```
**Alternative without extra dependencies:**
The same dependency shape can be expressed with plain promises: create them all first, chain the dependent one with `.then()`, and `await Promise.all()` at the end.
```typescript
const userPromise = fetchUser()
// profile chains off user without blocking config
const profilePromise = userPromise.then(user => fetchProfile(user.id))
const [user, config, profile] = await Promise.all([
  userPromise,
  fetchConfig(),
  profilePromise
])
```
Reference: [https://github.com/shuding/better-all](https://github.com/shuding/better-all)


@@ -0,0 +1,28 @@
---
title: Promise.all() for Independent Operations
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, promises, waterfalls
---
## Promise.all() for Independent Operations
When async operations have no interdependencies, execute them concurrently using `Promise.all()`.
**Incorrect (sequential execution, 3 round trips):**
```typescript
const user = await fetchUser()
const posts = await fetchPosts()
const comments = await fetchComments()
```
**Correct (parallel execution, 1 round trip):**
```typescript
const [user, posts, comments] = await Promise.all([
  fetchUser(),
  fetchPosts(),
  fetchComments()
])
```
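Note that `Promise.all()` rejects as soon as any input rejects. When partial failure is acceptable, the standard `Promise.allSettled()` collects every outcome instead — a sketch with the same assumed fetchers:
```typescript
const [user, posts, comments] = await Promise.allSettled([
  fetchUser(),
  fetchPosts(),
  fetchComments()
])
// Each entry is { status: 'fulfilled', value } or { status: 'rejected', reason }
const postList = posts.status === 'fulfilled' ? posts.value : []
```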

Some files were not shown because too many files have changed in this diff.