Top AI Coding Skills for Python Developers [2026]
We installed and scored 20 Python skills. These 5 deliver — from 736 lines of async patterns to Sentry's zero-false-positive Django auditor.
We installed 20 Python skills from the SkillSafe registry, read every file, and ran each one against real Django and FastAPI codebases. Most were fine. Five were genuinely good — the kind where the skill file itself teaches you something about Python, not just the AI.
The difference shows up immediately in the file structure. A mediocre skill gives the AI generic guidance (“use async/await for I/O-bound work”). A good skill gives it 736 lines of numbered patterns, each with a problem statement, working code, and the specific pitfalls to avoid. That specificity is what separates skills that change AI output from skills that merely run alongside it.
These five cover the territory Python developers spend the most time in: async concurrency, performance bottlenecks in Django querysets, modern toolchain decisions, IDOR vulnerability investigation, and production FastAPI architecture. All five are compatible with Claude Code, Cursor, and Windsurf.
How We Scored
Each skill was scored across five dimensions, 10 points each, for a maximum of 50:
- Depth — Does the skill encode knowledge the AI couldn’t derive on its own?
- Specificity — Are there concrete code examples, not just principles?
- Structure — Is the file navigable? Can the AI find the relevant section quickly?
- Completeness — Does it cover edge cases, gotchas, and the “what not to do”?
- Real-world applicability — Does it reflect how Python is actually used in production?
Quick Comparison
| Skill | Score | Install Count | Library / Feature |
|---|---|---|---|
| @wshobson/async-python-patterns | 47/50 | 7,969 | asyncio, aiohttp, pytest-asyncio, WebSockets |
| @getsentry/django-perf-review | 45/50 | 9,424 | Django ORM, select_related, prefetch_related, bulk_create |
| @trailofbits/modern-python | 43/50 | 5,552 | uv, ruff, ty, pip-audit, Dependabot, detect-secrets |
| @getsentry/django-access-review | 43/50 | 8,419 | Django, DRF, permission_classes, get_object_or_404 |
| @wshobson/fastapi-templates | 43/50 | 9,332 | FastAPI, SQLAlchemy, Pydantic, httpx, OAuth2, JWT |
Browse the full Python catalog: /tags/python/
1. @wshobson/async-python-patterns — 47/50
736 lines. 10 numbered patterns. The most thorough asyncio skill in the registry.
Source: github.com/wshobson/agents (299 stars) · 7,969 installs
skillsafe install @wshobson/async-python-patterns
This skill’s structure is worth examining before you even run it. The SKILL.md opens with ten explicitly numbered async patterns, each complete enough to stand alone:
- Basic async/await — correct coroutine anatomy
- asyncio.gather — parallel execution and return value handling
- Task management — create_task, cancellation, cleanup
- Error handling — exception propagation in concurrent code
- Timeouts — asyncio.wait_for versus asyncio.timeout
- Async context managers — __aenter__/__aexit__ and resource cleanup
- Async iterators — __aiter__/__anext__, async generators
- Producer-consumer queues — asyncio.Queue with backpressure
- Semaphore rate limiting — bounded concurrency with asyncio.Semaphore
- Async locks — asyncio.Lock and the mutex pattern
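Two of those patterns compose naturally. Here is a minimal, runnable sketch combining gather (parallel execution, results in submission order) with a semaphore (bounded concurrency); the sleep stands in for real I/O:

```python
import asyncio

async def fetch(i: int, sem: asyncio.Semaphore) -> int:
    # Semaphore rate limiting: at most 2 coroutines pass this point at once
    async with sem:
        await asyncio.sleep(0.01)  # stand-in for real I/O (HTTP call, DB query)
        return i * 2

async def main() -> list[int]:
    sem = asyncio.Semaphore(2)
    # gather runs the coroutines concurrently and returns results
    # in submission order, regardless of completion order
    return await asyncio.gather(*(fetch(i, sem) for i in range(5)))

results = asyncio.run(main())
print(results)  # [0, 2, 4, 6, 8]
```

The skill's own versions add cancellation and error handling on top of this shape; this is only the core of patterns 2 and 9.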
Beyond the core patterns, the skill covers three real-world application sections: aiohttp web scraping with connection pools (including how to size pools for throughput without overwhelming targets), async database operations (transaction handling, connection reuse), and WebSocket server implementation.
The most useful single artifact is the sync-vs-async decision guide table near the middle of the file. It addresses the question AI tools get wrong most often: not how to write async code, but when to use it. The table covers I/O-bound vs CPU-bound work, threading vs asyncio vs multiprocessing, and the cases where sync code outperforms async (short scripts, CLI tools, simple serial pipelines).
The skill closes with a full pytest-asyncio testing section — async fixtures, @pytest.mark.asyncio, event loop scope — which rounds out a skill that covers the full async development lifecycle rather than just the happy-path patterns.
Score breakdown: Depth 10/10 · Specificity 10/10 · Structure 9/10 · Completeness 9/10 · Real-world applicability 9/10
2. @getsentry/django-perf-review — 45/50
~397 lines. Five priority categories. Zero false positives by design.
Source: github.com/getsentry · 9,424 installs
skillsafe install @getsentry/django-perf-review
Most performance review skills will flag everything that looks slow. Sentry’s Django performance skill takes the opposite approach: it teaches the AI to only report what’s actually a problem, and to verify before reporting.
The skill is organized around five priority categories, each with explicit severity labels:
- N+1 Queries (CRITICAL) — ORM loop patterns that generate one query per object
- Unbounded Querysets (CRITICAL) — .all() or .filter() without a slice limit on large tables
- Missing Indexes (HIGH) — filter() and order_by() on unindexed columns
- Write Loops (HIGH) — save() or create() inside iteration instead of bulk_create
- Inefficient Patterns (LOW) — .count() vs len(), .exists() vs conditional fetches
Each category is concrete: it shows the Django code that exhibits the problem, the correct replacement, and a validation checklist. For N+1 queries, that means showing both the loop-with-query pattern and the select_related/prefetch_related fix, with notes on when each applies.
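To make the shape of the problem concrete without a Django install, here is a framework-free simulation (all names illustrative): a query log counts database round trips, and the batched lookup plays the role that select_related/prefetch_related play in the ORM:

```python
QUERY_LOG: list[str] = []

AUTHORS = {1: "Le Guin", 2: "Chiang"}
BOOKS = [{"id": 1, "author_id": 1}, {"id": 2, "author_id": 2}, {"id": 3, "author_id": 1}]

def fetch_author(author_id: int) -> str:
    # One round trip per call -- the shape an ORM loop produces
    QUERY_LOG.append(f"SELECT * FROM author WHERE id = {author_id}")
    return AUTHORS[author_id]

# N+1: the loop issues one query per book (three queries here)
names_slow = [fetch_author(b["author_id"]) for b in BOOKS]

def fetch_authors(author_ids: list[int]) -> dict[int, str]:
    # One batched round trip -- the role select_related/prefetch_related plays
    QUERY_LOG.append(f"SELECT * FROM author WHERE id IN {tuple(sorted(set(author_ids)))}")
    return {i: AUTHORS[i] for i in set(author_ids)}

authors = fetch_authors([b["author_id"] for b in BOOKS])
names_fast = [authors[b["author_id"]] for b in BOOKS]

print(len(QUERY_LOG))  # 4: three from the loop, one from the batched fix
```

With N books the loop costs N queries; the batched version stays at one no matter how large the table grows, which is why the skill rates this category CRITICAL.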
The section that sets this skill apart is “What NOT to Report”. It explicitly tells the AI to skip: queryset variable assignment (assigning a queryset without evaluating it is lazy and not a problem), single-object lookups (get_object_or_404 is fine), and style preferences that don’t affect query count. This is rare. Most skills describe what to do; this one also describes what to ignore, which is what prevents the AI from generating noise that wastes developer time.
Before reporting any finding, the skill requires three confirmations: tracing the data flow to verify the pattern actually executes, confirming data volume is large enough for the pattern to matter, and verifying the code path is hot (called under real load, not just in tests).
Score breakdown: Depth 9/10 · Specificity 10/10 · Structure 9/10 · Completeness 9/10 · Real-world applicability 8/10
3. @trailofbits/modern-python — 43/50
~333 lines + 12 reference files. The opinionated guide to Python toolchain modernization.
Source: github.com/trailofbits/cookiecutter-python · 5,552 installs
skillsafe install @trailofbits/modern-python
Trail of Bits is a security firm, but this skill is about toolchain and project structure, not vulnerability scanning. It’s a clean-room rethinking of how Python projects should be set up in 2026, using tools that didn’t exist when most Python best-practices guides were written.
The skill opens with a decision tree that the AI applies before touching any project setup:
- PEP 723 scripts — single-file tools with inline dependency declarations, no project structure needed
- Minimal projects — libraries or small apps: pyproject.toml + src/ layout, nothing else
- Full packages — applications with CI, Docker, multiple contributors: the complete setup
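The first tier is worth seeing concretely. A PEP 723 script carries its dependency metadata in a structured comment block that uv reads at run time; this sketch (filename and function are illustrative) declares no third-party dependencies, so it also runs under plain python:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = []  # third-party packages would be listed here, e.g. "httpx"
# ///
# With metadata inline, `uv run wordcount.py` builds an ephemeral
# environment and executes the script -- no pyproject.toml, no venv.
from collections import Counter

def top_words(text: str, n: int = 3) -> list[tuple[str, int]]:
    # Most common words; ties broken by first occurrence (Counter semantics)
    return Counter(text.lower().split()).most_common(n)

if __name__ == "__main__":
    print(top_words("to be or not to be"))
```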
The toolchain replacements are explicit. The skill replaces the old stack (pip + Poetry, flake8 + black + isort, mypy) with the modern equivalents:
- uv replaces pip and Poetry for dependency and environment management
- ruff replaces flake8, black, and isort — one tool, one config, faster
- ty replaces mypy for static type checking
Each replacement comes with complete pyproject.toml configuration — not a snippet, a full working config block you can paste. The references/ and templates/ directories contain a complete Makefile template and per-tool config files.
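As an illustration of the shape only (this is not the skill's file; the project fields are placeholders, and each tool's documentation is the authority on current options), a minimal pyproject.toml for the uv + ruff stack might look like:

```toml
[project]
name = "example-app"        # placeholder project metadata
version = "0.1.0"
requires-python = ">=3.12"
dependencies = []

[tool.ruff]
line-length = 100

[tool.ruff.lint]
# E/W = pycodestyle, F = pyflakes, I = isort rules --
# one config block replacing flake8 + black + isort
select = ["E", "W", "F", "I"]
```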
The migration guides are the most practical section. There are four: from requirements.txt, from setup.py, from flake8 + black + isort, and from mypy. Each guide is a sequence of shell commands with explanations — the kind of thing developers search for when inheriting an old project.
The security tooling appendix covers four tools: shellcheck for shell scripts embedded in the project, detect-secrets for credential scanning, pip-audit for known vulnerable dependencies, and Dependabot config for automated updates.
Score breakdown: Depth 9/10 · Specificity 9/10 · Structure 8/10 · Completeness 9/10 · Real-world applicability 8/10
4. @getsentry/django-access-review — 43/50
~340 lines + references/ directory. A structured IDOR investigation protocol for Django.
Source: github.com/getsentry · 8,419 installs
skillsafe install @getsentry/django-access-review
IDOR (Insecure Direct Object Reference) vulnerabilities are endemic in Django applications. An endpoint accepts a resource ID as a URL parameter, queries it directly, and returns it — without checking whether the requesting user is allowed to see it. The fix is obvious in hindsight. Finding all the places it occurs in a real codebase is not.
This skill encodes a five-phase IDOR investigation that the AI works through systematically:
- Understand the auth model — identify the authentication backend, permission classes, and ownership model in use
- Map the attack surface — enumerate all endpoints that accept user-controlled IDs (URL params, query strings, POST body fields)
- Targeted questions — for each endpoint: what object does this ID reference? What’s the ownership relationship? Is there an authorization check?
- Trace endpoints — follow the request through view → serializer → queryset, checking each layer for authorization enforcement
- Report with confidence levels — findings are rated confirmed, probable, or needs-investigation, not just flagged
The core question the skill trains the AI to ask is: “If I’m User A and I know User B’s resource ID, can I access it?” That framing keeps the investigation concrete and prevents false positives from views that look unprotected but rely on auth at a higher layer.
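The User A / User B question translates directly into code. A framework-free sketch (all names illustrative, no Django required): the fixed lookup folds ownership into the query itself, mirroring the get_object_or_404(Model, pk=pk, owner=request.user) idiom:

```python
from dataclasses import dataclass

@dataclass
class Document:
    pk: int
    owner_id: int

DOCS = {1: Document(pk=1, owner_id=42), 2: Document(pk=2, owner_id=99)}

class NotFound(Exception):
    """Stands in for Django's Http404."""

def get_document_idor(pk: int) -> Document:
    # Vulnerable: any authenticated user who guesses a pk gets the object
    if pk not in DOCS:
        raise NotFound
    return DOCS[pk]

def get_document(pk: int, requesting_user_id: int) -> Document:
    # Fixed: ownership is part of the lookup, so User A probing User B's
    # ids sees the same NotFound as for an id that doesn't exist
    doc = DOCS.get(pk)
    if doc is None or doc.owner_id != requesting_user_id:
        raise NotFound
    return doc
```

User 42 can fetch document 1 but gets NotFound for document 2 even though it exists, which also avoids leaking which ids are valid.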
The references/ directory includes bash investigation commands for finding permission classes (grep -r "permission_classes" .), locating base views that might provide implicit authorization, and identifying custom manager methods that filter by ownership. These make the skill immediately useful in an unfamiliar codebase.
The reporting requirements are strict: fixes must enforce authorization with code — get_object_or_404(Model, pk=pk, owner=request.user) or equivalent — not just document the gap. The skill explicitly rejects findings that say “should add a check” without providing the check.
Score breakdown: Depth 9/10 · Specificity 9/10 · Structure 9/10 · Completeness 8/10 · Real-world applicability 8/10
5. @wshobson/fastapi-templates — 43/50
~540 lines. Five production patterns. Complete async SQLAlchemy and JWT auth included.
Source: github.com/wshobson/agents (15 stars) · 9,332 installs
skillsafe install @wshobson/fastapi-templates
The 15-star GitHub count understates how widely used this skill is — 9,332 installs makes it one of the most-installed Python skills in the registry. Most FastAPI tutorials stop at “hello world with Pydantic.” This skill starts at production architecture.
The skill is built around five complete patterns, each with enough code to copy directly into a real project:
- Complete FastAPI application — lifespan event handlers (replacing deprecated on_startup/on_shutdown), CORS middleware, pydantic-settings for environment config, async SQLAlchemy engine and session factory
- Generic CRUD repository — type-parameterized base class covering create, read, update, delete, list with pagination
- Service layer — business logic separated from HTTP concerns, dependency injection via Depends()
- API endpoints with DI — router setup, response models, error handling with HTTPException
- JWT authentication with OAuth2 — OAuth2PasswordBearer, token creation and verification, get_current_user dependency
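The generic repository pattern can be sketched without FastAPI or SQLAlchemy at all. The skill's version wraps an async SQLAlchemy session, but the type-parameterized shape is the same (this in-memory stand-in and its names are illustrative, not the skill's code):

```python
from itertools import count
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class InMemoryRepository(Generic[T]):
    """One base class, parameterized by model type; subclasses add
    model-specific queries on top of the shared CRUD surface."""

    def __init__(self) -> None:
        self._items: dict[int, T] = {}
        self._ids = count(1)  # auto-incrementing primary key

    def create(self, item: T) -> int:
        item_id = next(self._ids)
        self._items[item_id] = item
        return item_id

    def get(self, item_id: int) -> Optional[T]:
        return self._items.get(item_id)

    def list(self, offset: int = 0, limit: int = 20) -> list[T]:
        # Pagination lives in the base class once, not in every endpoint
        return list(self._items.values())[offset:offset + limit]

    def delete(self, item_id: int) -> bool:
        return self._items.pop(item_id, None) is not None

users: InMemoryRepository[str] = InMemoryRepository()
uid = users.create("ada")
```

Swapping the dict for a database session changes the method bodies, not the interface, which is what lets the service layer stay ignorant of storage details.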
The project structure the skill prescribes is explicit:
api/v1/endpoints/
core/
models/
schemas/
services/
repositories/
This matters because AI tools, without a prescribed structure, will put everything in one file until the project grows large enough that reorganizing becomes painful. The skill front-loads that decision.
The testing section covers async test fixtures with httpx.AsyncClient, which is the correct approach for testing FastAPI applications and notably different from the sync TestClient that most tutorials use. The async fixture pattern with pytest-asyncio and anyio is shown in full, including database session override.
Score breakdown: Depth 9/10 · Specificity 9/10 · Structure 8/10 · Completeness 9/10 · Real-world applicability 8/10
Frequently Asked Questions
What makes a good AI Python skill?
A good skill encodes knowledge the AI cannot derive from general training data alone — framework-specific patterns, opinionated toolchain choices, and explicit “what not to do” guidance that prevents false positives or over-engineering. The best skills in this list share a common trait: they include concrete code examples for both the problem and the solution, not just a description of the pattern. A 736-line async patterns file with ten numbered patterns and a sync-vs-async decision table gives the AI something specific to work from. A 40-line skill that says “follow async best practices” does not.
Do these skills work with Django, FastAPI, and general Python?
Yes, with some specificity: @getsentry/django-perf-review and @getsentry/django-access-review are Django-specific and understand the ORM, permission class system, and view hierarchy. @wshobson/fastapi-templates is FastAPI-specific and includes async SQLAlchemy, Pydantic v2, and OAuth2 patterns. @wshobson/async-python-patterns and @trailofbits/modern-python are framework-agnostic — the async patterns apply to any async Python application, and the toolchain guidance applies to any Python project regardless of framework. All five are compatible with Claude Code, Cursor, and Windsurf.
How were these skills scored?
Each skill was scored across five dimensions: depth (does it encode knowledge the AI couldn’t derive independently?), specificity (are there concrete working code examples?), structure (can the AI navigate it quickly?), completeness (does it cover edge cases and anti-patterns?), and real-world applicability (does it reflect production use, not tutorial use?). Maximum score is 50. We read every file and reference document, not just the top-level SKILL.md. Skills without published file contents were excluded from this roundup.
Conclusion
Start with @wshobson/async-python-patterns — it’s the highest-scoring skill in this list and covers the async concurrency patterns that matter most in production Python services. If you’re building Django applications, add @getsentry/django-perf-review immediately; the N+1 query detection alone is worth the install. For new projects or inherited codebases with outdated tooling, @trailofbits/modern-python will pay for itself the first time you have to explain why the project uses uv instead of pip.
The Sentry access review skill (@getsentry/django-access-review) is more specialized but earns its place: IDOR bugs in Django are common enough that having a structured investigation protocol — rather than relying on the AI’s ad hoc judgment — is worth the habit. And @wshobson/fastapi-templates is the right foundation if you’re starting a FastAPI project and want the AI to lay out the architecture correctly from the first commit.
Install the core two with:
skillsafe install @wshobson/async-python-patterns
skillsafe install @getsentry/django-perf-review
Or browse the full Python catalog on SkillSafe: /tags/python/
Related roundups: Browse all Best Of roundups