Building Skills for Claude: The Complete Guide to Modular AI Expertise (2026)
By Learnia Team
This article is written in English. Our training modules are available in multiple languages.
📅 Last Updated: February 13, 2026 — Covers Claude Skills as available in Claude.ai, Claude Code, and the Anthropic API.
📚 Related: Claude Opus 4.6 Guide | Claude Code Agent Skills | MCP Protocol Explained | AI Agents ReAct Explained
Table of Contents
- What Are Claude Skills?
- The Paradigm Shift
- Skill Architecture
- Core Principles
- Building Your First Skill
- Advanced Patterns
- Skills + MCP Integration
- Deployment & Production
- Real-World Examples
- FAQ
- Pre-Built Skills
- The Skill Creator
- Skills API
- Distribution & Sharing
- Troubleshooting
- Key Takeaways
What Are Claude Skills?
Claude Skills are modular, self-contained packages that enhance Claude's capabilities by providing specialized knowledge, workflows, and executable scripts. Released on January 29, 2026 in a comprehensive 32-page guide by Anthropic, Skills transform Claude from a general-purpose assistant into a specialized agent equipped with procedural knowledge.
Before vs. After Skills
Before Skills (traditional approach):
```
System prompt: "You are an assistant that helps with brand writing.
Always use our brand voice: professional yet approachable...
[500 lines of brand guidelines]
[200 lines of formatting rules]
[300 lines of examples]"
```
Problems: Token-heavy, loaded for every conversation, hard to maintain, prone to conflicts.
After Skills:
```
📁 brand-writing/
├── SKILL.md          → "Apply brand guidelines to any document"
├── scripts/
│   └── check_tone.py → Automated brand voice checker
└── references/
    ├── brand_guide.md
    └── examples.md
```
Benefits: Loaded only when needed, organized, testable, shareable, maintainable.
The Paradigm Shift: From Prompts to Procedures
Skills represent a fundamental shift in how we think about AI customization:
| Aspect | Traditional Prompts | Claude Skills |
|---|---|---|
| Structure | Flat text | Organized folder |
| Loading | Always loaded | On-demand |
| Code | No execution | Scripts included |
| Sharing | Copy-paste | Share folder |
| Versioning | Manual | Git-friendly |
| Testing | Ad-hoc | Systematic |
| Composability | Difficult | Built-in |
Skill Architecture
The Folder Structure
Every skill follows the same organizational pattern:
```
📁 my-skill/
├── SKILL.md            ← Required: Core instructions + metadata
├── scripts/            ← Optional: Executable code
│   ├── process_data.py
│   └── validate.sh
├── references/         ← Optional: Loaded on-demand docs
│   ├── api_docs.md
│   └── style_guide.md
├── templates/          ← Optional: Output templates
│   └── report_template.md
└── assets/             ← Optional: Icons, images, etc.
    └── icon.png
```
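Because SKILL.md is the only required file, a skill folder is easy to check mechanically. The sketch below is a hypothetical validator (not part of any official tooling) that flags a missing SKILL.md and any directory outside the four conventional ones:

```python
import os

REQUIRED_FILES = ["SKILL.md"]
OPTIONAL_DIRS = ["scripts", "references", "templates", "assets"]

def validate_skill(path):
    """Return a list of problems found in a skill folder (empty = valid)."""
    problems = []
    # The only hard requirement: SKILL.md must exist at the folder root.
    for name in REQUIRED_FILES:
        if not os.path.isfile(os.path.join(path, name)):
            problems.append(f"missing required file: {name}")
    # Any directory outside the conventional four is worth flagging.
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.isdir(full) and entry not in OPTIONAL_DIRS:
            problems.append(f"unexpected directory: {entry}")
    return problems
```

Running this in CI before sharing a skill catches structural mistakes early, before trigger testing even begins.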
The SKILL.md File
The SKILL.md file is the heart of every skill. It uses YAML frontmatter for metadata and Markdown for instructions:
```markdown
---
name: Brand Voice Checker
description: |
  Apply our brand voice guidelines to any document.
  Use this skill when the user asks to review, write, or edit
  content for brand consistency.
---

# Brand Voice Checker

## Instructions
When the user provides content for brand review:
1. Read the document carefully
2. Run `scripts/check_tone.py` on the content
3. Load `references/brand_guide.md` for current guidelines
4. Identify any deviations from brand voice
5. Provide specific corrections with before/after examples
6. Summarize the overall brand alignment score

## Rules
- Never change the core message or meaning
- Preserve technical accuracy
- Flag ambiguous cases rather than auto-correcting
- Always explain WHY a change improves brand alignment
```
Component Breakdown
Core Principles
1. Progressive Disclosure
How it works:
- Claude scans skill descriptions (YAML frontmatter) to find relevant skills
- When user intent matches a skill description, the full SKILL.md is loaded
- During execution, references are loaded only when explicitly needed
- Scripts execute only when called
This means a project with 20 skills doesn't waste tokens loading all 20 — only the relevant one(s) activate.
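The selection step can be illustrated with a toy sketch. `select_skills` below is a hypothetical stand-in for Claude's actual matching, which is semantic rather than keyword-based: it scans only the short frontmatter descriptions, and a full SKILL.md body would be loaded only for skills that match.

```python
import re

def read_frontmatter(skill_md):
    """Extract the YAML frontmatter block from a SKILL.md file's text."""
    match = re.match(r"---\n(.*?)\n---\n", skill_md, re.DOTALL)
    return match.group(1) if match else ""

def select_skills(user_message, skills):
    """Return names of skills whose description overlaps the user's words.

    `skills` maps skill name -> frontmatter description text. Only these
    short descriptions are scanned; full skill bodies stay unloaded.
    """
    words = set(user_message.lower().split())
    selected = []
    for name, description in skills.items():
        desc_words = set(description.lower().split())
        if len(words & desc_words) >= 2:  # crude overlap threshold
            selected.append(name)
    return selected
```

With 20 skills installed, only the few hundred bytes of each description enter the matching step; the large instruction bodies are paid for only by the skill(s) that actually activate.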
2. Modularity & Composability
Skills are designed to work together:
```
User: "Create a quarterly report for Q4 with brand formatting
and send it to the leadership team"

Claude activates:
1. 📊 data-analysis skill   → Pulls and analyzes Q4 metrics
2. 📝 brand-writing skill   → Applies brand voice
3. 📄 report-template skill → Formats as quarterly report
4. ✉️ email-composer skill  → Drafts email to leadership
```
Claude automatically identifies and sequences the skills needed, coordinating their execution.
3. Determinism Through Scripts
Skills can include executable scripts for tasks that need 100% reliability:
```python
# scripts/format_currency.py
"""Format numbers as USD currency strings."""
import sys

def format_currency(amount):
    """Returns a properly formatted USD string."""
    return f"${amount:,.2f}"

if __name__ == "__main__":
    amount = float(sys.argv[1])
    print(format_currency(amount))
```
Instead of Claude "guessing" the format, the script provides deterministic, reliable output.
4. Portability
The same skill works across all Claude environments:
- Claude.ai — Add the skill folder to a Project
- Claude Code — Place it in the `.claude/skills/` directory
- Anthropic API — Reference it in the system message
Building Your First Skill
Step 1: Define the Use Case
Before writing code, answer these questions:
- What specific task does this skill perform?
- When should Claude activate it? (trigger conditions)
- What inputs does it need from the user?
- What outputs should it produce?
- Are there deterministic steps that should use scripts?
Step 2: Create the Folder Structure
```bash
mkdir -p meeting-summarizer/scripts meeting-summarizer/references
```
Step 3: Write the SKILL.md
```markdown
---
name: Meeting Summarizer
description: |
  Summarize meeting transcripts into structured action items,
  decisions, and follow-ups. Use when the user provides a
  meeting transcript, recording summary, or meeting notes.
---

# Meeting Summarizer

## Process
1. **Parse the transcript** — Identify speakers, timestamps, topics
2. Run `scripts/extract_topics.py` to identify main discussion areas
3. For each topic, extract:
   - Key points discussed
   - Decisions made (with who made them)
   - Action items (with owners and deadlines)
   - Open questions
4. Format output using the template in `references/summary_template.md`
5. Highlight any action items without clear owners

## Output Format
Always produce:
- Executive summary (2-3 sentences)
- Decisions list (numbered)
- Action items table (who / what / when)
- Open questions list
- Next meeting suggestions

## Constraints
- Never invent information not in the transcript
- Flag unclear speaker attribution
- If deadlines are mentioned ambiguously, note the ambiguity
```
Step 4: Add Scripts (Optional)
```python
# scripts/extract_topics.py
"""Extract main topics from a meeting transcript."""
import sys

def extract_topics(transcript):
    """Identify topic transitions in meeting text."""
    # Simple topic detection by paragraph breaks and keywords
    topics = []
    paragraphs = transcript.split('\n\n')
    for p in paragraphs:
        if any(keyword in p.lower() for keyword in
               ['next topic', 'moving on', "let's discuss",
                'agenda item', 'regarding']):
            topics.append(p.strip()[:100])
    return topics

if __name__ == "__main__":
    transcript = sys.stdin.read()
    topics = extract_topics(transcript)
    for i, topic in enumerate(topics, 1):
        print(f"{i}. {topic}")
```
Step 5: Add References (Optional)
```markdown
<!-- references/summary_template.md -->
# Meeting Summary: [TITLE]

**Date:** [DATE] | **Duration:** [DURATION] | **Attendees:** [LIST]

## Executive Summary
[2-3 sentence overview]

## Decisions Made
1. [Decision] — Decided by [WHO]

## Action Items
| Owner | Task | Deadline | Status |
|-------|------|----------|--------|
| [Name] | [Task description] | [Date] | Pending |

## Open Questions
- [Question requiring follow-up]

## Next Steps
- [Suggested follow-up actions]
```
Step 6: Test Your Skill
Testing is critical. Verify:
- Trigger accuracy — Does the skill activate for correct requests?
- No false triggers — Does it stay inactive for unrelated requests?
- No hallucinations — Does Claude follow the instructions exactly?
- Script execution — Do scripts run correctly?
- Edge cases — What happens with incomplete or unusual input?
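Trigger checks can be scripted. The suite below is a minimal, hypothetical sketch: `matches_trigger` is a crude keyword-overlap stand-in for Claude's real semantic matching, and the two request lists are example cases you would replace with requests drawn from your own usage.

```python
def matches_trigger(user_request, description):
    """Heuristic stand-in for trigger matching: shared-keyword count."""
    req = set(user_request.lower().split())
    desc = set(description.lower().split())
    return len(req & desc) >= 2

DESCRIPTION = ("summarize meeting transcripts into structured action items "
               "decisions and follow-ups")

# Requests that SHOULD activate the skill...
SHOULD_TRIGGER = [
    "summarize this meeting transcript for me",
    "turn these meeting notes into action items",
]
# ...and unrelated requests that should NOT.
SHOULD_NOT_TRIGGER = [
    "what's the capital of france",
    "write a poem about spring",
]

def run_trigger_suite():
    """Return a list of failure messages; empty means all cases passed."""
    failures = []
    for req in SHOULD_TRIGGER:
        if not matches_trigger(req, DESCRIPTION):
            failures.append(f"missed trigger: {req}")
    for req in SHOULD_NOT_TRIGGER:
        if matches_trigger(req, DESCRIPTION):
            failures.append(f"false trigger: {req}")
    return failures
```

In practice you would replay the real requests against Claude with the skill installed; the value of the suite is the curated positive and negative cases, which you can rerun after every description edit.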
Advanced Patterns
Pattern 1: Multi-Step Workflow Skills
For complex workflows with multiple phases:
```markdown
---
name: Code Review Assistant
description: |
  Perform a comprehensive code review following our team's
  standards. Use when the user asks for a code review,
  PR review, or code quality check.
---

# Code Review Assistant

## Phase 1: Static Analysis
1. Run `scripts/lint_check.sh` on the provided code
2. Identify syntax errors, style violations, and common bugs

## Phase 2: Logic Review
1. Analyze the code's logical flow
2. Identify potential edge cases not handled
3. Check for security vulnerabilities

## Phase 3: Architecture Review
1. Load `references/architecture_standards.md`
2. Evaluate naming conventions, code organization
3. Check for SOLID principle violations

## Phase 4: Report Generation
1. Combine findings from all phases
2. Prioritize by severity (Critical > High > Medium > Low)
3. Provide specific fix suggestions for each finding
```
Pattern 2: State Management Skills
For skills that need to track progress across interactions:
```markdown
## State Tracking

After each interaction, update the status file:

    scripts/update_status.py --task "[TASK_ID]" --status "[STATUS]"

Always check current state before proceeding:

    scripts/get_status.py --task "[TASK_ID]"
```
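The two scripts referenced in the pattern are not defined by it; a minimal hypothetical implementation might keep per-task state in a JSON file that both scripts share:

```python
import json
from pathlib import Path

STATUS_FILE = Path("skill_status.json")  # hypothetical shared state file

def update_status(task_id, status, path=STATUS_FILE):
    """Record the latest status for a task, creating the file if needed."""
    state = json.loads(path.read_text()) if path.exists() else {}
    state[task_id] = status
    path.write_text(json.dumps(state, indent=2))

def get_status(task_id, path=STATUS_FILE):
    """Return the last recorded status, or 'unknown' if never set."""
    if not path.exists():
        return "unknown"
    return json.loads(path.read_text()).get(task_id, "unknown")
```

A flat JSON file is enough here because each interaction reads and writes the whole state; a skill with concurrent writers would need file locking or a real store.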
Pattern 3: Error Handling Skills
```markdown
## Error Handling

If any script fails:
1. Log the error: `scripts/log_error.py --error "[MESSAGE]"`
2. Inform the user of what went wrong
3. Suggest alternative approaches
4. Never proceed with partial or assumed data
```
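A hypothetical `log_error.py` backing step 1 might append structured records to a JSONL file, so both Claude and humans can read the error trail later:

```python
import datetime
import json
from pathlib import Path

LOG_FILE = Path("skill_errors.jsonl")  # hypothetical append-only log

def log_error(message, log_file=LOG_FILE):
    """Append a timestamped error record as one JSON line and return it."""
    record = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "error": message,
    }
    with log_file.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

One JSON object per line keeps the log greppable and lets the skill read back recent failures without parsing the whole file as a single document.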
Pattern 4: Composable Skills with Dependencies
```markdown
---
name: Quarterly Report Generator
description: Generate quarterly business reports.
dependencies:
  - data-analysis
  - brand-writing
  - chart-generator
---

## Step 1: Data Collection
Activate the `data-analysis` skill to process raw metrics.

## Step 2: Visualization
Use the `chart-generator` skill to create charts.

## Step 3: Writing
Apply the `brand-writing` skill to the final report text.
```
Skills + MCP Integration
How Skills and MCP Complement Each Other
| Capability | MCP (Model Context Protocol) | Claude Skills |
|---|---|---|
| Role | Provides external connections | Provides procedural knowledge |
| Database | Queries, reads, writes | Decides when/what to query, processes results |
| APIs | HTTP calls, authentication | Knows which API to call, formats requests |
| File System | Read/write files | Determines which files to process, generates output |
| Composability | Plug-and-play tools | Orchestrates tools into workflows |
Example: Customer Support Skill with MCP
```markdown
---
name: Customer Issue Resolver
description: |
  Handle customer support tickets by looking up account info,
  checking order status, and drafting responses.
---

# Customer Issue Resolver

## Process
1. Extract customer ID from the ticket
2. **[MCP: database]** Query customer account:
   `SELECT * FROM customers WHERE id = {customer_id}`
3. **[MCP: api]** Check order status:
   `GET /api/orders?customer={customer_id}&status=active`
4. Analyze the issue against `references/common_issues.md`
5. Draft a response using `references/response_templates.md`
6. **[MCP: email]** Send the drafted response to the customer
```
Deployment & Production
Claude.ai (Projects)
1. Create a new Project in Claude.ai
2. Upload the skill folder to the Project's knowledge base
3. The skill will automatically activate when matching conversations occur
Claude Code
- Place the skill folder in `.claude/skills/` in your project
- Claude Code will discover and use skills automatically
- Scripts execute directly in your development environment
Anthropic API
```python
import anthropic

client = anthropic.Anthropic()

# Load skill content
with open('my-skill/SKILL.md', 'r') as f:
    skill_content = f.read()

response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=4096,
    system=f"""You have the following skill available:

{skill_content}

Activate this skill when the user's request matches
the skill description. Follow the instructions exactly.""",
    messages=[
        {"role": "user", "content": "Please review this code..."}
    ]
)
```
Production Best Practices
- Version control your skills — Treat them like code, use Git
- Write tests — Create test cases for trigger accuracy and output quality
- Monitor activation — Log when skills fire to catch false triggers
- Iterate based on feedback — Continuously refine instructions
- Review script security — Audit all executable code
- Document dependencies — Note which MCP tools or external services a skill requires
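Activation monitoring can be as simple as an append-only log plus one summary metric. This is an illustrative sketch, not official tooling; `record_activation` and `false_trigger_rate` are hypothetical names, and the "correct" flag would come from human review or user feedback:

```python
import json
from pathlib import Path

LOG = Path("activations.jsonl")  # hypothetical activation log

def record_activation(skill, user_request, correct, log=LOG):
    """Append one activation event; `correct` marks whether the trigger was right."""
    with log.open("a") as f:
        f.write(json.dumps({"skill": skill, "request": user_request,
                            "correct": correct}) + "\n")

def false_trigger_rate(skill, log=LOG):
    """Fraction of logged activations for `skill` that were wrong."""
    events = [json.loads(line) for line in log.read_text().splitlines()]
    mine = [e for e in events if e["skill"] == skill]
    if not mine:
        return 0.0
    return sum(not e["correct"] for e in mine) / len(mine)
```

A rising false-trigger rate is the signal to narrow the skill's YAML description, per the troubleshooting table later in this guide's workflow.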
Real-World Examples
Example 1: Brand Document Formatter
Automatically apply brand guidelines to any document:
- Script checks tone, voice, and terminology
- Reference file contains current brand guidelines
- Template formats output as branded PDF-ready Markdown
Example 2: Git PR Reviewer
Automate code review for pull requests:
- Script runs linting and static analysis
- Reference contains team coding standards
- Produces structured review with severity levels and fix suggestions
Example 3: Data Pipeline Monitor
Monitor and troubleshoot data pipelines:
- Script queries pipeline metrics via MCP
- Reference contains SLA definitions
- Auto-generates incident reports for failures
Pre-Built Skills
Anthropic provides several pre-built document skills available out of the box on Pro, Max, Team, and Enterprise plans:
| Pre-Built Skill | What It Does | Available On |
|---|---|---|
| Excel Creator | Generates and edits .xlsx spreadsheets | Pro, Max, Team, Enterprise |
| PowerPoint Builder | Creates .pptx presentations with layouts | Pro, Max, Team, Enterprise |
| Word Document Editor | Generates and modifies .docx files | Pro, Max, Team, Enterprise |
| PDF Form Filler | Extracts form fields and fills PDF forms | Pro, Max, Team, Enterprise |
These skills use the Code Execution Tool internally — for example, the PDF skill runs a small Python script to extract form field definitions, then Claude fills them based on user instructions.
The Skill Creator (Meta-Skill)
Anthropic provides a "skill-creator" skill — a meta-skill specifically designed to help you build new skills.
How It Works
- Describe what you want your skill to do
- The skill-creator guides you through structuring the SKILL.md
- It suggests folder layout, trigger descriptions, and instruction patterns
- It helps you test trigger behavior with example user requests
- It iterates on the skill with you until trigger accuracy is high
What the Skill Creator Checks
- Trigger description quality — Is the description specific enough to activate correctly?
- Instruction clarity — Will Claude follow the steps without ambiguity?
- Hallucination risk — Are there gaps where Claude might invent steps not in the skill?
- Composability — Does the skill work well alongside other skills?
Skills API — /v1/skills Endpoint
For programmatic skill management, Anthropic provides a REST API endpoint:
```python
import requests

# Create a new skill
response = requests.post(
    "https://api.anthropic.com/v1/skills",
    headers={
        "x-api-key": "your-api-key",
        "anthropic-version": "2026-01-29"
    },
    json={
        "name": "data-analyzer",
        "description": "Analyze CSV datasets and produce summary reports",
        "content": open("SKILL.md").read(),
        "version": "1.0.0"
    }
)

# List available skills
skills = requests.get(
    "https://api.anthropic.com/v1/skills",
    headers={...}
)

# Upgrade a skill version
response = requests.put(
    "https://api.anthropic.com/v1/skills/data-analyzer",
    headers={...},
    json={
        "content": open("SKILL_v2.md").read(),
        "version": "2.0.0"
    }
)
```
The API supports:
- Create — Upload new skills
- List — View all available skills
- Read — Get a skill's current content and metadata
- Update — Upgrade skill versions
- Delete — Remove skills
Distribution & Sharing
Skills are designed to be portable and shareable:
Sharing Methods
| Method | Best For | How |
|---|---|---|
| Git repository | Teams, open-source | Push skill folder to Git; clone to use |
| ZIP archive | Quick sharing | Zip the folder; import into Claude.ai Project |
| API upload | CI/CD pipelines | Use /v1/skills endpoint to deploy programmatically |
| npm/pip package | Developer ecosystem | Package skills with existing tooling |
Versioning Strategy
```
my-skill/
├── SKILL.md       ← Current version
├── CHANGELOG.md   ← Version history
├── package.json   ← Version metadata (optional)
└── ...
```
Best practices:
- Use semantic versioning (1.0.0, 1.1.0, 2.0.0) for skill iterations
- Keep a CHANGELOG.md documenting what changed between versions
- Test trigger accuracy after every update — even small wording changes can affect activation
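Semantic version strings compare cleanly once split into integer tuples. The small sketch below (hypothetical helper names, not part of any skill tooling) shows how a CI step could flag a major-version bump that warrants extra trigger testing:

```python
def parse_version(version):
    """Split a semantic version string like '1.2.0' into an integer tuple."""
    return tuple(int(part) for part in version.split("."))

def is_breaking_upgrade(old, new):
    """True when the major version increased (e.g. 1.4.0 -> 2.0.0)."""
    return parse_version(new)[0] > parse_version(old)[0]
```

Tuple comparison also gives correct ordering for free: `parse_version("1.10.0") > parse_version("1.9.0")`, which naive string comparison gets wrong.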
Troubleshooting
Common Issues & Solutions
| Problem | Cause | Solution |
|---|---|---|
| Skill doesn't activate | Description doesn't match user intent | Rewrite the YAML description with more trigger phrases |
| Skill activates for wrong requests | Description is too broad | Make the description more specific; add exclusion notes |
| Claude invents steps not in the skill | Instructions have gaps or ambiguity | Fill in the gaps; be explicit about what Claude should NOT do |
| Script fails silently | No error handling in script | Add try/except and print error messages for Claude to read |
| Skill loads but produces wrong output | References not loaded or outdated | Explicitly reference file paths in instructions; update references |
| Multiple skills conflict | Overlapping descriptions | Differentiate descriptions; add priority hints |
Debugging Checklist
- Check trigger — Does the YAML description clearly match the user's request phrasing?
- Check instructions — Are there any ambiguous steps Claude might interpret differently?
- Check scripts — Do scripts run correctly outside of Claude? Test independently.
- Check references — Are all referenced files present and up to date?
- Check composability — Does this skill conflict with other active skills?
- Check hallucinations — Is Claude adding steps or information not present in the skill?
Related Articles
- Claude Opus 4.6 Guide — The model that powers Skills
- Claude Code Agent Skills — Skills in the CLI environment
- MCP Protocol Explained — The connection protocol
- AI Agents ReAct Explained — Agent reasoning patterns
- Prompt Engineering Best Practices — Foundation techniques
Key Takeaways
- Skills transform Claude from general assistant to specialized agent with procedural knowledge, executable scripts, and on-demand references
- SKILL.md is the only required file — it contains YAML frontmatter for trigger matching and Markdown instructions for execution
- Progressive disclosure saves 70-90% of tokens by loading skill details only when triggered by matching user intent
- Scripts provide deterministic reliability for tasks that must produce consistent, exact results every time
- Skills and MCP are complementary — MCP provides external connections, Skills provide the procedures for what to do with that data
- Same skill format works everywhere — Claude.ai Projects, Claude Code, and the Anthropic API
- Composability is built-in — Claude can automatically identify, sequence, and coordinate multiple skills for complex multi-step tasks
- Treat Skills like code — version control, test systematically, iterate based on real-world feedback
Level Up Your AI Engineering Skills
Building effective Claude Skills requires deep understanding of how LLMs process instructions, handle context, and execute multi-step procedures.
In our Module 5 — AI Agents & Automation, you'll learn:
- How to design reliable agent procedures
- Context management and token optimization
- Multi-step task decomposition
- Testing and debugging AI agents
- Production deployment patterns
→ Explore Module 5: AI Agents & Automation
Last Updated: February 13, 2026. Based on Anthropic's official 32-page "Complete Guide to Building Skills for Claude" (January 29, 2026), Claude.ai documentation, developer community resources, and production deployment experience.