Agent-Computer Interface (ACI): Designing Tools for AI Agents
By Learnia AI Research Team
📅 Last updated: March 19, 2026 — A deep dive into designing agent-tool interfaces.
📚 Related articles: Agent Architecture Patterns | Tool Use Guide with Claude | MCP Advanced Patterns | Structured Outputs & Strict Mode
What Is the Agent-Computer Interface (ACI)?
For decades, HCI (Human-Computer Interface) guided the design of interfaces for humans: well-placed buttons, visual feedback, intuitive navigation. Today, AI agents interact with computer systems through tools (functions, APIs, commands). They need their own design discipline.
ACI (Agent-Computer Interface) is the set of principles and practices for designing interfaces that AI agents can use effectively. It's HCI for agents.
Why ACI Matters
An AI agent doesn't "see" your graphical interfaces. It reads text descriptions of tools and decides which one to call with which parameters. The quality of this textual interface directly determines agent performance.
| Aspect | Human Interface (HCI) | Agent Interface (ACI) |
|---|---|---|
| Primary channel | Visual (screen, colors, layout) | Textual (descriptions, names, schemas) |
| Feedback | Visual, audio, haptic | Structured return messages (JSON) |
| Error handling | Dialog box, red highlight | Error message with context + suggestion |
| Documentation | Tooltips, tutorials, guides | Description in the tool schema |
| Learning | Trial-and-error, exploration | Zero-shot: must succeed on the first try |
The last row is critical: a human can explore an interface, click around, read a tutorial. An agent must understand the tool and use it correctly on the first attempt. This imposes a much higher standard of clarity.
The 5 Core Principles of ACI Design
Principle 1: Simplicity — One Tool, One Action
Each tool should do one thing and do it well. This is the UNIX principle applied to agents.
❌ Bad design:
# A catch-all tool that does everything
tools = [{
"name": "manage_database",
"description": "Manages the database: create, read, update, delete records, create tables, alter schema, export data",
"input_schema": {
"type": "object",
"properties": {
"action": {"type": "string", "enum": ["create", "read", "update", "delete", "create_table", "alter_table", "export"]},
"table": {"type": "string"},
"data": {"type": "object"},
"query": {"type": "string"},
"format": {"type": "string"}
}
}
}]
✅ Good design:
# Atomic, focused tools
tools = [
{
"name": "find_records",
"description": "Search for records in a table using filter criteria. Returns a list of matching objects sorted by creation date descending. Maximum 100 results per call.",
"input_schema": {
"type": "object",
"properties": {
"table": {
"type": "string",
"description": "Table name. Available tables: users, orders, products, reviews.",
"enum": ["users", "orders", "products", "reviews"]
},
"filters": {
"type": "object",
"description": "Key-value pairs for filtering. E.g., {\"status\": \"active\", \"country\": \"US\"}"
},
"limit": {
"type": "integer",
"description": "Max number of results (1-100). Default: 20.",
"default": 20
}
},
"required": ["table"]
}
},
{
"name": "update_record",
"description": "Update an existing record identified by its ID. Returns the updated record with its new values.",
"input_schema": {
"type": "object",
"properties": {
"table": {
"type": "string",
"enum": ["users", "orders", "products", "reviews"]
},
"record_id": {
"type": "string",
"description": "The unique identifier of the record to update."
},
"updates": {
"type": "object",
"description": "Fields to update with their new values. E.g., {\"status\": \"shipped\", \"tracking_id\": \"ABC123\"}"
}
},
"required": ["table", "record_id", "updates"]
}
}
]
Principle 2: Clarity — Descriptions That Leave No Doubt
A tool's description is the only documentation the agent sees. It must answer these questions:
- What does the tool do? (first sentence, clear action)
- When should it be used? (primary use cases)
- What does it return? (response format)
- What are the limits? (edge cases, quotas)
# ✅ Exemplary description
{
"name": "search_customers",
"description": (
"Search for customers by name, email, or ID. "
"Use this tool when the user asks about a specific customer's information. "
"Returns a list of matching customers with their contact info and recent purchase history. "
"Search is case-insensitive and supports partial matches for name and email. "
"Returns a maximum of 10 results. If no customer found, returns an empty list."
)
}
Principle 3: Predictability — Consistent Behavior
Tools should follow consistent conventions: naming, return format, error handling. The agent learns these patterns and generalizes.
# ✅ Consistent conventions across all tools
# - Naming: verb_noun (find_orders, create_invoice, update_customer)
# - Return: always { "success": bool, "data": ..., "error": str|null }
# - Errors: always { "success": false, "error": "descriptive message", "suggestion": "..." }
Principle 4: Recoverability — Informative Errors
When a tool fails, the error message should help the agent self-correct.
# ❌ Useless error
{"error": "Invalid input"}
# ✅ Agent-actionable error
{
"success": False,
"error": "The 'date_start' field has an invalid format: '2026-13-01'. Month must be between 01 and 12.",
"suggestion": "Use the YYYY-MM-DD format with a valid month. Example: '2026-03-01'.",
"received": "2026-13-01",
"expected_format": "YYYY-MM-DD"
}
Principle 5: Composability — Tools That Work Together
The output of one tool should be usable as input for another, without manual transformation.
# The agent can chain naturally:
# 1. find_customer(email="alice@example.com") → returns customer_id
# 2. get_orders(customer_id="cust_123") → returns list of orders
# 3. get_order_details(order_id="ord_456") → returns details
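The chain above can be sketched with stub implementations (all data here is hypothetical) to show the key property: each tool's return value carries exactly the identifier the next tool expects, with no reshaping in between.

```python
# Stub tools with hypothetical data; real handlers would query a database.
def find_customer(email: str) -> dict:
    return {"customer_id": "cust_123", "name": "Alice", "email": email}

def get_orders(customer_id: str) -> list[dict]:
    return [{"order_id": "ord_456", "customer_id": customer_id, "amount": 42.0}]

def get_order_details(order_id: str) -> dict:
    return {"order_id": order_id, "items": ["widget"], "status": "delivered"}

# No glue code needed: each output field plugs directly into the next call.
customer = find_customer(email="alice@example.com")
orders = get_orders(customer_id=customer["customer_id"])
details = get_order_details(order_id=orders[0]["order_id"])
print(details["status"])  # delivered
```

If get_orders returned customer names instead of IDs, or wrapped its list in an extra envelope the next tool did not accept, the agent would have to improvise the transformation, which is where errors creep in.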
Best Practices for Tool Descriptions
A tool's description is the single point of contact between the agent and the functionality. Here are the elements to systematically include.
Tool Description Template
{
"name": "verb_noun",
"description": (
"[Primary action in one sentence.] "
"[When to use this tool.] "
"[What the tool returns — format and structure.] "
"[Limits, quotas, edge cases.] "
"[When NOT to use this tool — redirect to the right tool.]"
)
}
Parameter Design: Reducing Cognitive Load
Parameters are the language through which the agent communicates with the tool. Every design decision impacts agent reliability.
Rule 1: Enums Over Free Text
# ❌ The agent can generate anything
"status": {"type": "string", "description": "The order status"}
# ✅ The agent picks from valid options
"status": {
"type": "string",
"description": "Order status to filter by.",
"enum": ["pending", "confirmed", "shipped", "delivered", "cancelled"]
}
Rule 2: Sensible Defaults
# ✅ The agent doesn't need to specify every parameter
"limit": {
"type": "integer",
"description": "Number of results to return (1-100). Default: 20.",
"default": 20
},
"sort_by": {
"type": "string",
"description": "Sort field. Default: 'created_at'.",
"default": "created_at",
"enum": ["created_at", "updated_at", "name", "amount"]
}
Rule 3: Explicit Formats
# ❌ Ambiguous
"date": {"type": "string", "description": "The date"}
# ✅ Format specified with example
"date": {
"type": "string",
"description": "Date in ISO 8601 format (YYYY-MM-DD). E.g., '2026-03-19'."
}
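The documented format should also be enforced server-side, so a malformed value produces an actionable error instead of silent corruption. A sketch using only the standard library (parse_iso_date is an illustrative name):

```python
from datetime import datetime

def parse_iso_date(value: str) -> dict:
    """Validate a YYYY-MM-DD date and return an agent-actionable result."""
    try:
        parsed = datetime.strptime(value, "%Y-%m-%d").date()
        return {"success": True, "data": parsed.isoformat(), "error": None}
    except ValueError:
        return {
            "success": False,
            "data": None,
            "error": f"Invalid date '{value}'. Expected ISO 8601 format YYYY-MM-DD, e.g. '2026-03-19'.",
        }

print(parse_iso_date("2026-03-19"))  # success
print(parse_iso_date("2026-13-01"))  # month 13 rejected with an actionable message
```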
Rule 4: Minimize Required Parameters
Only make strictly necessary parameters required. Everything else should have a default.
# ✅ One required parameter, the rest are optional with defaults
"required": ["query"],
"properties": {
"query": {"type": "string", "description": "Search text."},
"category": {"type": "string", "enum": [...], "description": "Filter by category. Default: all."},
"limit": {"type": "integer", "default": 10},
"include_archived": {"type": "boolean", "default": False}
}
Error Handling for Agents
Error messages are a critical communication channel with the agent. Unlike a human who can interpret a vague error code, an agent needs structured information to self-correct.
The 4 Components of a Good ACI Error Message
{
"success": False,
"error": {
# 1. What: clear problem description
"message": "The customer with ID 'cust_999' does not exist.",
# 2. Why: context about the cause
"reason": "The identifier does not match any record in the customers database.",
# 3. How to fix: actionable suggestion
"suggestion": "Check the identifier. Use search_customers to find the correct ID.",
# 4. Context: useful data for the next call
"context": {
"received_id": "cust_999",
"similar_ids": ["cust_099", "cust_990"]
}
}
}
Error Handling Anti-Patterns
# ❌ Useless errors for an agent
"Error 500: Internal Server Error"
"Something went wrong"
"Invalid request"
"null"
# ✅ Actionable errors
"The 'email' parameter is required but was not provided. Add a valid email in the format user@domain.com."
"The amount 150.00 exceeds the refund limit of 100.00 for this order. The maximum refundable amount is 100.00."
"No products found with the filter category='electronics'. Available categories: books, clothing, home, food."
Tool Composition Patterns
When to Combine vs Split Tools
| Criterion | Combine into one tool | Split into multiple tools |
|---|---|---|
| Actions are always executed together | ✅ | |
| Agent might need one without the other | | ✅ |
| Intermediate data is useful to the agent | | ✅ |
| Latency is critical (reduce round-trips) | ✅ | |
| The workflow varies by context | | ✅ |
| There are irreversible side effects | | ✅ (explicit confirmation) |
Example: Composition Pipeline
import anthropic
client = anthropic.Anthropic()
# The agent solves "Refund Alice's latest order" in 3 calls:
# Step 1: find the customer
# Step 2: get their latest order
# Step 3: process the refund
tools = [
{
"name": "search_customers",
"description": "Search for customers by name or email. Returns a list of customers with their ID, name, and email.",
"input_schema": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Customer name or email to search for."
}
},
"required": ["query"]
}
},
{
"name": "get_recent_orders",
"description": "Retrieve a customer's recent orders. Returns orders sorted by date descending with id, date, amount, status.",
"input_schema": {
"type": "object",
"properties": {
"customer_id": {
"type": "string",
"description": "The customer's unique identifier (format: cust_XXX)."
},
"limit": {
"type": "integer",
"description": "Number of orders to return. Default: 5.",
"default": 5
}
},
"required": ["customer_id"]
}
},
{
"name": "create_refund",
"description": "Create a refund for an order. The refund is processed within 3-5 business days. Returns the refund ID and refunded amount.",
"input_schema": {
"type": "object",
"properties": {
"order_id": {
"type": "string",
"description": "The order ID to refund (format: ord_XXX)."
},
"reason": {
"type": "string",
"description": "Reason for the refund.",
"enum": ["customer_request", "defective_product", "wrong_item", "late_delivery"]
}
},
"required": ["order_id", "reason"]
}
}
]
response = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=1024,
tools=tools,
messages=[{"role": "user", "content": "Refund Alice's latest order, she requested a refund."}]
)
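When the response above contains tool_use blocks, the host application must execute them and send the results back. A minimal local dispatch sketch, with stub handlers standing in for real database and payment logic (the registry contents are hypothetical):

```python
def handle_tool_call(name: str, tool_input: dict, registry: dict) -> dict:
    """Execute one tool call and shape the outcome for a tool_result message."""
    handler = registry.get(name)
    if handler is None:
        # An actionable error even at the dispatch layer: list valid options.
        return {"success": False,
                "error": f"Unknown tool '{name}'. Available tools: {sorted(registry)}."}
    try:
        return {"success": True, "data": handler(**tool_input)}
    except TypeError as exc:
        return {"success": False, "error": f"Bad parameters for '{name}': {exc}"}

# Stub handlers; real ones would hit your database or payment provider.
registry = {
    "search_customers": lambda query: [{"id": "cust_123", "name": "Alice"}],
    "get_recent_orders": lambda customer_id, limit=5: [{"id": "ord_456", "amount": 42.0}],
    "create_refund": lambda order_id, reason: {"refund_id": "ref_789", "amount": 42.0},
}

result = handle_tool_call("search_customers", {"query": "Alice"}, registry)
print(result)  # {'success': True, 'data': [{'id': 'cust_123', 'name': 'Alice'}]}
```

Each result is then serialized and returned to the model in a message containing a tool_result block with the matching tool_use_id, and the loop repeats until the model stops requesting tools.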
Case Study: Transforming a REST API into Agent-Friendly Tools
Testing Your Tools with an Agent
Evaluation is the feedback loop of ACI design. Without systematic testing, you're optimizing blind.
The 3 Levels of Testing
| Level | Question | Method |
|---|---|---|
| Selection | Does the agent pick the right tool? | Test with varied prompts → check tool_use |
| Parameters | Does the agent fill parameters correctly? | Compare generated params to expected |
| End-to-end | Does the agent complete the full task? | Multi-step scenarios with result verification |
import anthropic
client = anthropic.Anthropic()
def test_tool_selection(prompt: str, expected_tool: str, tools: list) -> bool:
"""Test whether the agent selects the correct tool for a given prompt."""
response = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=512,
tools=tools,
messages=[{"role": "user", "content": prompt}]
)
tool_calls = [block for block in response.content if block.type == "tool_use"]
if not tool_calls:
return False
return tool_calls[0].name == expected_tool
# Selection test suite
selection_tests = [
("Find customer Pierre Martin", "search_customers"),
("What is cust_123's latest order?", "get_recent_orders"),
("Refund order ord_456", "create_refund"),
("Where is my order's shipment?", "get_shipping_status"),
]
for prompt, expected in selection_tests:
result = test_tool_selection(prompt, expected, tools)
status = "✅" if result else "❌"
print(f"{status} '{prompt}' → expected: {expected}")
Key Metrics to Track
- Correct selection rate: % of times the agent picks the right tool (target: > 95%)
- Valid parameter rate: % of parameters that are syntactically and semantically correct (target: > 93%)
- Resolution rate: % of complete tasks resolved successfully (target: > 90%)
- Average call count: tool calls needed per task (fewer = better)
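These four metrics can be aggregated from a list of per-task test results. A small sketch, assuming each result record carries correct_tool, valid_params, resolved, and n_calls fields (an illustrative schema, not a standard one):

```python
def compute_aci_metrics(results: list[dict]) -> dict:
    """Aggregate per-task evaluation records into the four tracking metrics."""
    n = len(results)
    return {
        "selection_rate": sum(r["correct_tool"] for r in results) / n,
        "valid_param_rate": sum(r["valid_params"] for r in results) / n,
        "resolution_rate": sum(r["resolved"] for r in results) / n,
        "avg_calls": sum(r["n_calls"] for r in results) / n,
    }

results = [
    {"correct_tool": True, "valid_params": True, "resolved": True, "n_calls": 3},
    {"correct_tool": True, "valid_params": False, "resolved": False, "n_calls": 5},
]
print(compute_aci_metrics(results))
# {'selection_rate': 1.0, 'valid_param_rate': 0.5, 'resolution_rate': 0.5, 'avg_calls': 4.0}
```

Tracking these across tool-description revisions turns ACI design into a measurable feedback loop rather than guesswork.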
ACI Design Checklist
Before deploying a tool for an agent, verify each point:
Naming
- Name follows verb_noun format (clear action)
- Name is consistent with other tools in the set
- No ambiguous abbreviations
Description
- First sentence = primary action
- Use case specified (when to use)
- Return format documented
- Limits and quotas mentioned
- Redirection to other tools if relevant
Parameters
- Enums over free text where possible
- Default values for optional parameters
- Formats specified with examples
- Minimum required parameters
Errors
- Descriptive error messages (what, why, how to fix)
- Actionable suggestions
- Contextual data (valid values, similar IDs)
Testing
- Selection tests on > 20 varied prompts
- Parameter tests on edge cases
- End-to-end tests on real scenarios
Summary
ACI is an emerging but essential discipline. AI agents only work as well as the tools they're given. By applying the 5 principles (simplicity, clarity, predictability, recoverability, composability), crafting precise descriptions, and testing iteratively, you transform mediocre APIs into interfaces your agents use reliably.
Going from 47 REST endpoints to 12 well-designed tools isn't simplification — it's a paradigm shift: moving from a developer-oriented interface to an agent-oriented interface.
To dive deeper into using tools with Claude, see our Tool Use guide. For agent architecture patterns, check our article on Claude agent architectures. And for the MCP protocol that standardizes agent-tool communication, explore our advanced MCP patterns.
Frequently Asked Questions
What is the difference between HCI and ACI?
HCI (Human-Computer Interface) optimizes interactions for humans: visuals, clicks, intuitive feedback. ACI (Agent-Computer Interface) optimizes for AI agents: clear text descriptions, unambiguous parameters, actionable error messages, and atomic operations. A good HCI does not guarantee a good ACI.
How many tools should an agent have access to?
There is no magic number, but the principle is: the minimum needed with maximum clarity. In practice, 8 to 15 well-designed tools cover most use cases. Beyond 20 tools, the agent's selection accuracy degrades significantly.
Should I split a complex tool into several simpler tools?
Yes, in most cases. A tool that does one thing well is preferable to a multi-purpose tool with conditional parameters. The exception: when actions are always executed together and splitting them would create consistency issues.
How do I test whether my tools are well-designed for an agent?
Three complementary methods: 1) selection tests (does the agent pick the right tool?), 2) parameter tests (does the agent fill parameters correctly?), 3) end-to-end tests (does the agent complete the full task?). Measure success rate across a representative set of scenarios.