Self-Improving Skills: How SkillSafe Skills Get Better With Use
SkillSafe skills can improve from real usage feedback. How the observe-improve-save loop works and how to opt your skills into automatic iteration.
The Problem With Static Skills
Most AI coding skills are write-once artifacts. An author publishes a skill, users install it, and that’s where the story ends. If a command fails on macOS because the skill was written on Linux, the user has to debug it themselves. If the output format isn’t quite right, they work around it. The skill never learns.
SkillSafe changes this. Skills installed from the registry can improve themselves based on how they’re actually used — fixing broken commands, adding examples from successful runs, and clarifying instructions when users get confused.
How It Works
The self-improvement loop has five steps:
1. Execute in a Forked Context
When an improvable skill runs, it executes in a sub-agent — a separate context from the main agent. This is the key design decision. The main agent acts as an observer, watching the skill’s execution and the user’s reaction from outside the skill’s context.
```yaml
name: my-skill
context: fork
improvable: true
registry: "@myname/my-skill"
```
The `context: fork` field tells the host tool (Claude Code, Cursor, etc.) to spawn a sub-agent. Without it, the main agent would be inside the skill’s execution and couldn’t observe the result objectively.
2. Detect Feedback
After the skill completes, the main agent watches the user’s next few messages for signals:
- Positive — the user says “thanks” or proceeds without corrections. The skill worked as intended.
- Negative — the user says “wrong” or manually corrects the output. Something needs fixing.
- Error recovery — the sub-agent hit a tool error (like `jq: command not found`) and used a workaround. The skill should learn the workaround.
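As a toy illustration of signal detection, the three categories can be thought of as a classification over the user's next message. The keyword matching below is only a sketch; the real main agent reasons over full conversation context rather than string patterns:

```shell
# Toy feedback classifier: keyword heuristics standing in for the
# main agent's judgment about the user's reaction.
classify_feedback() {
  case "$1" in
    *"command not found"*) echo error-recovery ;;
    *wrong*|*revert*)      echo negative ;;
    *thanks*|*perfect*)    echo positive ;;
    *)                     echo neutral ;;
  esac
}

classify_feedback "thanks, that worked"   # prints: positive
```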
3. Edit the Skill
When feedback warrants a change, the main agent edits the skill’s files directly. Three types of edits:
- Add examples — append a successful input/output pair to a `## Examples` section so the skill handles similar requests better next time
- Patch scripts — replace a command that failed with one that works (e.g., swap `jq` for `python3 -c` when jq isn’t installed)
- Fix instructions — rewrite a confusing section of the SKILL.md based on what the user actually meant
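A patched script produced by the second edit type might end up with a fallback pattern like this. The `read_name` helper and the `package.json` field are illustrative, not part of SkillSafe itself:

```shell
# Prefer jq when it is installed; otherwise fall back to python3,
# which is present on most systems the skill might run on.
# (Hypothetical helper for illustration.)
read_name() {
  if command -v jq >/dev/null 2>&1; then
    jq -r '.name' "$1"
  else
    python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["name"])' "$1"
  fi
}

printf '{"name": "my-skill"}\n' > /tmp/package.json
read_name /tmp/package.json   # prints: my-skill
```

Either branch produces the same output, so the skill keeps working whether or not jq is available.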
4. Save a New Version
The main agent saves the improved skill back to the registry:

```shell
skillsafe save ./my-skill --changelog "[patch] replaced jq with python3 fallback"
```
No version number needed — the CLI auto-increments the patch version (e.g., 1.0.2 to 1.0.3). If the content hasn’t actually changed, the save is skipped.
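The auto-increment is plain semver patch arithmetic. A minimal sketch of the idea, assuming simple `MAJOR.MINOR.PATCH` version strings:

```shell
# Bump the patch component of a semver string, e.g. 1.0.2 -> 1.0.3.
bump_patch() {
  IFS=. read -r major minor patch <<< "$1"
  echo "$major.$minor.$((patch + 1))"
}

bump_patch 1.0.2   # prints: 1.0.3
```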
5. Confirm
The main agent tells the user what was improved and the new version number. The improved skill is ready for the next invocation.
Opting In to Self-Improvement
Self-improvement is disabled by default. To enable it, pass `--auto-improve` when installing:

```shell
skillsafe install @publisher/some-skill --auto-improve
```
This injects `improvable: true` and `registry: "@publisher/some-skill"` into the skill’s SKILL.md frontmatter. Without the flag, only the `.skillsafe.json` metadata file is written — the skill’s frontmatter is left untouched.
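After such an install, the skill’s frontmatter would look roughly like this (a sketch; any other fields depend on the skill’s author):

```yaml
name: some-skill
context: fork
improvable: true                     # injected by --auto-improve
registry: "@publisher/some-skill"    # injected by --auto-improve
```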
Guiding the Improvement
Skill authors can include optional sections to steer how improvements are made.
Feedback Signals
Define what positive and negative feedback looks like for your specific skill:
```markdown
## Feedback Signals

### Positive
- User accepts the generated output without edits
- Tests pass after the skill's changes

### Negative
- User reverts the skill's changes
- Tests fail after the skill's changes
```
Improvement Guide
Tell the main agent what kinds of edits to make in different failure modes:
```markdown
## Improvement Guide

### When a command fails
Add platform detection and fallback commands.

### When output format is wrong
Add a concrete example showing the correct format.

### When instructions are misunderstood
Add DO and DO NOT lists to clarify edge cases.
```
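For the third failure mode, a clarifying edit might append a section like this (hypothetical content, shown only to illustrate the shape):

```markdown
## Edge Cases

### DO
- Preserve the user's existing file formatting
- Ask before touching files outside the stated scope

### DO NOT
- Rewrite files the user did not mention
- Change the output format unless asked
```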
Without these sections, the main agent uses its own judgment. With them, improvements are more targeted.
Rate Limiting
Self-improvement is conservative by design:
- Improvements only happen after explicit user feedback, not on every error
- Maximum one improvement save per skill per conversation — no rapid-fire version bumps
- If a skill fails again after an improvement, the agent asks the user before making another edit
Changelog Convention
Each improvement uses a bracketed prefix so the version history is easy to scan:
- `[example]` — added a concrete example of correct behavior
- `[patch]` — fixed a script or command
- `[instruction]` — clarified or corrected instructions
- `[bugfix]` — fixed a bug in the skill’s logic
Run `skillsafe info @myname/my-skill` to see the full version history with changelogs.
Versioning Keeps You Safe
Every improvement creates a new immutable version with a SHA-256 tree hash. Nothing is overwritten. If an improvement makes things worse, you can always roll back:

```shell
skillsafe install @myname/my-skill --version 1.0.2
```
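The tree-hash concept can be sketched as hashing every file, then hashing the sorted list of per-file digests. This is a rough stand-in for content addressing, not SkillSafe's exact algorithm:

```shell
# Build a tiny skill directory, then compute a content-addressed digest:
# hash each file, sort the per-file digests, hash the combined list.
# Any change to any file produces a different tree hash.
mkdir -p /tmp/demo-skill
printf 'name: demo-skill\n' > /tmp/demo-skill/SKILL.md
tree_hash=$(cd /tmp/demo-skill \
  && find . -type f | sort | xargs sha256sum | sha256sum | awk '{print $1}')
echo "$tree_hash"   # a 64-character hex digest
```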
Combined with dual-side verification, this means self-improvement doesn’t compromise security. Each new version goes through the same scan-and-verify pipeline as any other published skill.
Get Started
Install any skill from the registry and start using it. If something doesn’t work right, just say so — the skill will learn.
Install skillsafe from https://skillsafe.ai/skill.md
Browse available skills on the Skills page, or read the documentation for the full CLI reference.