Headless & Programmatic Claude Code: SDK & Automation
By Learnia Team
This article is written in English. Our training modules are available in French.
Headless mode transforms Claude Code from an interactive tool into a programmable automation engine. Build scripts, pipelines, and applications that leverage Claude's capabilities without human intervention.
What is Headless Mode?
Headless mode runs Claude Code without interactive prompts:
Interactive Mode (Default)
$ claude
> How can I help you today?
_
Headless Mode
$ claude --print -p "Analyze this codebase" > report.md
$ echo $? # Exit code: 0
No interaction needed. Input goes in, output comes out.
Headless CLI Flags
| Flag | Description |
|---|---|
| `--print` | Output to stdout instead of the interactive UI |
| `-p` | Specify the prompt (combined with `--print`) |
| `--output-format` | Output format: `text`, `json`, or `stream-json` |
| `--dangerously-skip-permissions` | Skip all permission prompts |
| `--max-tokens` | Limit output tokens |
| `--model` | Specify the model: `sonnet`, `opus`, or `haiku` |
| `--no-color` | Disable colored output |
| `--quiet` / `-q` | Suppress non-essential output |
Flag names and availability vary between CLI releases; run `claude --help` to confirm the exact set your version supports.
Basic Headless Usage
Simple Prompt
claude --print -p "Explain this function" < src/utils.ts
Output to File
claude --print -p "Generate API documentation" > docs/api.md
JSON Output
claude --print --output-format json -p "List all TODO comments as JSON array"
Piping Input
cat error.log | claude --print -p "Analyze this error log and suggest fixes"
Multiple Files
claude --print -p "Compare these implementations" < <(cat file1.ts file2.ts)
Automation Patterns
Script Integration
#!/bin/bash
# analyze.sh
# Get the files changed in the last commit
changed=$(git diff --name-only HEAD~1)
# Analyze each file
for file in $changed; do
  echo "Analyzing $file..."
  mkdir -p "reviews/$(dirname "$file")"  # mirror the source tree under reviews/
  claude --print --quiet -p "Review $file for issues" > "reviews/${file}.md"
done
echo "Analysis complete"
Error Handling
#!/bin/bash
output=$(claude --print -p "Generate migration" 2>&1)
exit_code=$?

if [ $exit_code -ne 0 ]; then
  echo "Error: $output" >&2
  exit 1
fi

echo "$output" > migration.sql
Conditional Logic
#!/bin/bash
# Ask Claude to categorize the issue
# NOTE: with --output-format json, some CLI versions wrap the reply in an
# envelope (the text lives under a "result" key); adjust the jq filter if so.
category=$(claude --print --output-format json \
  -p "Categorize this issue. Return JSON: {\"category\": \"bug|feature|docs\"}" \
  < issue.txt | jq -r '.category')

case $category in
  bug)
    echo "Routing to bug triage..."
    ;;
  feature)
    echo "Adding to feature backlog..."
    ;;
  docs)
    echo "Assigning to documentation team..."
    ;;
  *)
    echo "Unrecognized category: $category" >&2
    ;;
esac
The Anthropic SDK
For deeper integration, use the official SDK:
Installation
npm install @anthropic-ai/sdk
Basic Usage
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
async function main() {
  const message = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: "Explain the concept of dependency injection"
      }
    ]
  });
  console.log(message.content[0].text);
}

main();
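By default the client reads `ANTHROPIC_API_KEY` from the environment. If your deployment injects credentials some other way, pass the key explicitly when constructing the client:
// Explicit key, e.g. fetched from a secrets manager at startup
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });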
With System Prompt
const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  system: "You are a senior software engineer. Provide detailed, practical advice.",
  messages: [
    { role: "user", content: "How should I structure a microservices project?" }
  ]
});
Streaming Responses
const stream = client.messages.stream({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Write a comprehensive testing guide" }
  ]
});

for await (const chunk of stream) {
  if (chunk.type === "content_block_delta" && chunk.delta.type === "text_delta") {
    process.stdout.write(chunk.delta.text);
  }
}
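The stream object also exposes convenience events, so you rarely need to match on raw chunk types yourself. An equivalent, assuming a recent SDK version:
const stream = client.messages.stream({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a comprehensive testing guide" }]
});

// "text" fires for each text delta; finalMessage() resolves to the complete reply
stream.on("text", (text) => process.stdout.write(text));
const message = await stream.finalMessage();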
Multi-Turn Conversations
const conversation = [
  { role: "user", content: "I'm building a REST API in Node.js" },
  { role: "assistant", content: "Great! What framework are you using?" },
  { role: "user", content: "Express. How should I structure my routes?" }
];

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: conversation
});
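Each call is stateless, so to keep the conversation going you append Claude's reply and the next user turn to the array yourself, for example:
// Append the assistant turn, then ask a follow-up
conversation.push(
  { role: "assistant", content: response.content[0].text },
  { role: "user", content: "Should I version the API in the URL or a header?" }
);

const followUp = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: conversation
});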
Tool Use with SDK
Claude can use tools programmatically:
Define Tools
const tools = [
  {
    name: "get_weather",
    description: "Get current weather for a location",
    input_schema: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "City name"
        }
      },
      required: ["location"]
    }
  },
  {
    name: "search_code",
    description: "Search codebase for patterns",
    input_schema: {
      type: "object",
      properties: {
        query: { type: "string" },
        file_pattern: { type: "string" }
      },
      required: ["query"]
    }
  }
];
Execute Tools
async function runWithTools(prompt: string) {
  let messages = [{ role: "user", content: prompt }];
  while (true) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      tools,
      messages
    });
    // Check if Claude wants to use a tool
    const toolUse = response.content.find(block => block.type === "tool_use");
    if (!toolUse) {
      // No more tools, return final response
      return response.content.find(block => block.type === "text")?.text;
    }
    // Execute the tool
    const result = await executeToolCall(toolUse.name, toolUse.input);
    // Add tool result to conversation
    messages.push({
      role: "assistant",
      content: response.content
    });
    messages.push({
      role: "user",
      content: [{
        type: "tool_result",
        tool_use_id: toolUse.id,
        content: result
      }]
    });
  }
}

async function executeToolCall(name: string, input: any): Promise<string> {
  switch (name) {
    case "get_weather":
      return JSON.stringify({ temp: 72, condition: "sunny" });
    case "search_code":
      return `Found 5 matches for "${input.query}"`;
    default:
      return "Tool not found";
  }
}
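Wiring it together, with the stubbed tools above standing in for real implementations:
const answer = await runWithTools("What's the weather in Paris?");
console.log(answer);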
Batch Processing
Process multiple items efficiently:
Sequential Processing
import { readdir, readFile, writeFile } from "fs/promises";

async function processFiles(directory: string) {
  const files = await readdir(directory);
  for (const file of files) {
    if (!file.endsWith(".ts")) continue;
    const content = await readFile(`${directory}/${file}`, "utf-8");
    const response = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 2048,
      messages: [{
        role: "user",
        content: `Generate JSDoc comments for this file:\n\n${content}`
      }]
    });
    const documented = response.content[0].text;
    await writeFile(`${directory}/${file}`, documented);
    console.log(`Processed: ${file}`);
  }
}
Parallel Processing (with rate limiting)
import pLimit from "p-limit";

const limit = pLimit(5); // Max 5 concurrent requests

async function processFilesParallel(files: string[]) {
  const tasks = files.map(file =>
    limit(async () => {
      const content = await readFile(file, "utf-8");
      const response = await client.messages.create({
        model: "claude-sonnet-4-20250514",
        max_tokens: 1024,
        messages: [{
          role: "user",
          content: `Analyze: ${content}`
        }]
      });
      return { file, analysis: response.content[0].text };
    })
  );
  return Promise.all(tasks);
}
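Usage, with a hypothetical file list:
const analyses = await processFilesParallel(["src/auth.ts", "src/db.ts", "src/api.ts"]);
for (const { file, analysis } of analyses) {
  console.log(`${file}: ${analysis?.slice(0, 80)}...`);
}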
Batch API (for large workloads)
// For very large batches, use the Message Batches API. The namespace and
// field names below follow current SDK versions; check the docs if yours differs.
let batch = await client.messages.batches.create({
  requests: files.map((file, i) => ({
    custom_id: `file-${i}`,
    params: {
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: [{ role: "user", content: `Analyze: ${file}` }]
    }
  }))
});

// Poll until the batch has finished processing
while (batch.processing_status !== "ended") {
  await new Promise(r => setTimeout(r, 60000));
  batch = await client.messages.batches.retrieve(batch.id);
}

// Results stream back as one entry per request
const results = await client.messages.batches.results(batch.id);
for await (const entry of results) {
  if (entry.result.type === "succeeded") {
    console.log(entry.custom_id, entry.result.message.content);
  }
}
Building Pipelines
Code Analysis Pipeline
import { glob } from "glob";

interface Issue {
  type: string;
  line: number;
  message: string;
}

interface AnalysisResult {
  file: string;
  issues: Issue[];
  complexity: number;
  suggestions: string[];
}

async function analyzeCodebase(directory: string): Promise<AnalysisResult[]> {
  const files = await glob(`${directory}/**/*.ts`);
  const results: AnalysisResult[] = [];
  for (const file of files) {
    const content = await readFile(file, "utf-8");
    const response = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 2048,
      messages: [{
        role: "user",
        content: `Analyze this TypeScript file and return JSON:
{
  "issues": [{"type": "string", "line": number, "message": "string"}],
  "complexity": number (1-10),
  "suggestions": ["string"]
}

File: ${file}
\`\`\`typescript
${content}
\`\`\``
      }]
    });
    const analysis = JSON.parse(response.content[0].text);
    results.push({ file, ...analysis });
  }
  return results;
}
// Use the pipeline
const analysis = await analyzeCodebase("./src");
const highComplexity = analysis.filter(r => r.complexity > 7);
console.log(`Files needing refactoring: ${highComplexity.length}`);
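One practical wrinkle: calling JSON.parse on raw model output is brittle, because Claude sometimes wraps JSON in a markdown fence or adds a framing sentence. A tolerant extraction helper (a sketch; the fence regex is an assumption about output shape, not an API guarantee):
function extractJson(text: string): any {
  // Prefer a fenced ```json block if the model added one
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (fenced) return JSON.parse(fenced[1]);
  // Otherwise take the outermost brace-delimited span
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("No JSON found in model output");
  return JSON.parse(text.slice(start, end + 1));
}
Swap it in for the bare JSON.parse call in analyzeCodebase to make the pipeline resilient to formatting drift.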
Documentation Pipeline
async function generateDocs(sourceDir: string, outputDir: string) {
  // Step 1: Analyze structure
  const structure = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 2048,
    messages: [{
      role: "user",
      content: `Analyze ${sourceDir} and create a documentation outline`
    }]
  });

  // Step 2: Generate README
  const readme = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 4096,
    messages: [{
      role: "user",
      content: `Generate README.md based on: ${structure.content[0].text}`
    }]
  });
  await writeFile(`${outputDir}/README.md`, readme.content[0].text);

  // Step 3: Generate API reference
  const apiFiles = await glob(`${sourceDir}/api/**/*.ts`);
  for (const file of apiFiles) {
    const content = await readFile(file, "utf-8");
    const doc = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 2048,
      messages: [{
        role: "user",
        content: `Generate API documentation for:\n${content}`
      }]
    });
    const docPath = file.replace(sourceDir, outputDir).replace(".ts", ".md");
    await writeFile(docPath, doc.content[0].text);
  }
}
Error Handling & Retries
Robust API Calls
import { setTimeout } from "timers/promises";
async function callWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1000
): Promise<T> {
  let lastError: Error | undefined;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error as Error;
      // Don't retry on validation errors
      if ((error as any).status === 400) throw error;
      // Exponential backoff
      const delay = baseDelay * Math.pow(2, attempt);
      console.log(`Attempt ${attempt + 1} failed, retrying in ${delay}ms`);
      await setTimeout(delay);
    }
  }
  throw lastError;
}
// Usage
const response = await callWithRetry(() =>
  client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }]
  })
);
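Note that the official SDK already retries rate-limit (429) and transient server errors with exponential backoff, so a wrapper like callWithRetry is mainly useful for custom policies or logging. To simply retry more aggressively, tune the client instead:
// The SDK's built-in retry count defaults to 2; raise it at construction time
const client = new Anthropic({ maxRetries: 4 });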
Rate Limit Handling
import Bottleneck from "bottleneck";
const limiter = new Bottleneck({
  reservoir: 100, // 100 requests
  reservoirRefreshAmount: 100,
  reservoirRefreshInterval: 60 * 1000, // per minute
  maxConcurrent: 5
});

async function rateLimitedCall(prompt: string) {
  return limiter.schedule(() =>
    client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }]
    })
  );
}
Production Patterns
Configuration Management
// config.ts
interface Config {
  model: string;
  maxTokens: number;
  temperature: number;
  systemPrompt: string;
}

const configs: Record<string, Config> = {
  analysis: {
    model: "claude-sonnet-4-20250514",
    maxTokens: 2048,
    temperature: 0,
    systemPrompt: "You are a code analysis expert. Be thorough and precise."
  },
  creative: {
    model: "claude-sonnet-4-20250514",
    maxTokens: 4096,
    temperature: 0.7,
    systemPrompt: "You are a creative technical writer."
  },
  fast: {
    model: "claude-3-5-haiku-20241022",
    maxTokens: 512,
    temperature: 0,
    systemPrompt: "Be concise."
  }
};

async function query(prompt: string, configName: keyof typeof configs) {
  const config = configs[configName];
  return client.messages.create({
    model: config.model,
    max_tokens: config.maxTokens,
    temperature: config.temperature,
    system: config.systemPrompt,
    messages: [{ role: "user", content: prompt }]
  });
}
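Call sites then pick a profile by name:
const review = await query("Summarize the risks in this diff", "analysis");
console.log(review.content[0].text);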
Logging & Monitoring
import { createLogger, transports, format } from "winston";
const logger = createLogger({
  level: "info",
  format: format.combine(
    format.timestamp(),
    format.json()
  ),
  transports: [
    new transports.File({ filename: "claude.log" })
  ]
});

async function trackedQuery(prompt: string, metadata: Record<string, any>) {
  const startTime = Date.now();
  try {
    const response = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }]
    });
    logger.info("API call successful", {
      ...metadata,
      duration: Date.now() - startTime,
      inputTokens: response.usage.input_tokens,
      outputTokens: response.usage.output_tokens
    });
    return response;
  } catch (error) {
    logger.error("API call failed", {
      ...metadata,
      duration: Date.now() - startTime,
      error: (error as Error).message
    });
    throw error;
  }
}
Caching
import { createHash } from "crypto";
import { Redis } from "ioredis";

const redis = new Redis();

function hashPrompt(prompt: string, model: string): string {
  return createHash("sha256")
    .update(`${model}:${prompt}`)
    .digest("hex");
}

async function cachedQuery(prompt: string, ttl = 3600) {
  const cacheKey = hashPrompt(prompt, "claude-sonnet-4-20250514");

  // Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Make API call
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }]
  });

  // Cache result
  await redis.setex(cacheKey, ttl, JSON.stringify(response));
  return response;
}
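This pattern caches whole responses on your side. It is distinct from the API's prompt caching, which discounts repeated input tokens rather than skipping the call entirely; the two compose well. A sketch of prompt caching (LONG_STYLE_GUIDE is a hypothetical constant holding a large, stable prefix):
// Mark a long, stable system block as cacheable across requests
const res = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  system: [
    { type: "text", text: LONG_STYLE_GUIDE, cache_control: { type: "ephemeral" } }
  ],
  messages: [{ role: "user", content: prompt }]
});
Since sampled outputs vary between runs, exact-match response caching pairs best with temperature: 0.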
Use Cases
Automated Code Review Service
// review-service.ts
import express from "express";
import Anthropic from "@anthropic-ai/sdk";

const app = express();
app.use(express.json()); // without this, req.body is undefined

const client = new Anthropic();

app.post("/review", async (req, res) => {
  const { code, language, rules } = req.body;
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 2048,
    system: `You are a code reviewer. Review ${language} code following these rules: ${rules}`,
    messages: [{
      role: "user",
      content: `Review this code:\n\`\`\`${language}\n${code}\n\`\`\``
    }]
  });
  res.json({
    review: response.content[0].text,
    tokens: response.usage
  });
});

app.listen(3000);
Documentation Generator
// doc-generator.ts
async function generateModuleDocs(modulePath: string) {
  const files = await glob(`${modulePath}/**/*.ts`);
  const docs: string[] = [];
  for (const file of files) {
    const content = await readFile(file, "utf-8");
    const response = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 2048,
      messages: [{
        role: "user",
        content: `Generate markdown documentation for:\n${content}`
      }]
    });
    docs.push(`## ${file}\n\n${response.content[0].text}`);
  }
  return docs.join("\n\n---\n\n");
}
Test Generator
// test-generator.ts
async function generateTests(sourceFile: string, testFramework = "jest") {
  const source = await readFile(sourceFile, "utf-8");
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 4096,
    system: `Generate comprehensive ${testFramework} tests. Include edge cases and error scenarios.`,
    messages: [{
      role: "user",
      content: `Generate tests for:\n\`\`\`typescript\n${source}\n\`\`\``
    }]
  });
  const testFile = sourceFile.replace(".ts", ".test.ts");
  await writeFile(testFile, response.content[0].text);
  return testFile;
}
Integration with Claude Code Features
Using Skills Programmatically
// Load skill definitions from disk
// (loadSkills is a hypothetical project helper that reads each skill's
// system prompt from .claude/skills)
const skills = await loadSkills(".claude/skills");

async function runSkill(skillName: string, input: any) {
  const skill = skills[skillName];
  return client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 4096,
    system: skill.systemPrompt,
    messages: [{
      role: "user",
      content: `Execute skill "${skillName}" with:\n${JSON.stringify(input)}`
    }]
  });
}
See Agent Skills in Claude Code: Extend Claude's Capabilities.
Programmatic MCP
// Illustrative sketch only: the MCP TypeScript SDK's client class, transport
// setup, and method names differ across versions, so treat these imports and
// calls as pseudocode and check the MCP SDK docs for the real API.
import { MCPClient } from "@modelcontextprotocol/sdk";

const mcp = new MCPClient();
await mcp.connect("github", "https://api.github.com/mcp/");

// Expose the server's tools for use in SDK calls
const tools = await mcp.listTools();
See Model Context Protocol (MCP) for Claude Code: Complete Guide.
Key Takeaways
- `--print` for scripting: the essential flag for headless automation.
- SDK for complex workflows: full control with the Anthropic SDK.
- Handle errors gracefully: implement retries and rate limiting.
- Cache when possible: reduce costs and latency.
- Monitor in production: log usage, errors, and performance.
Build Production AI Systems
Headless Claude Code is the foundation for production AI systems. Learn to build reliable, scalable AI workflows.
In our Module 6 — Autonomous Agents, you'll learn:
- Production AI architecture
- Reliability patterns
- Scaling strategies
- Monitoring and observability
Module 6 — AI Agents & ReAct
Create autonomous agents that reason and take actions.