Agents
Agents are the core building blocks of AA Kit. Learn how to create, configure, and use agents effectively.
What is an Agent?
An agent is an AI-powered entity that can understand natural language, reason about tasks, use tools, and maintain conversation context. Every agent in AA Kit is also an MCP server, making it universally compatible.
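For example, the same object you chat with locally can be exposed over MCP with a single call (a minimal sketch; the constructor and serve_mcp are covered in detail further down this page):
```python
from aakit import Agent

agent = Agent(
    name="assistant",
    instruction="You are a helpful AI assistant",
    model="gpt-4"
)

# Expose the agent as an MCP server so any MCP client can use it
agent.serve_mcp(port=8080)
```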
Agent Anatomy
Core Components
- Name: Unique identifier for the agent
- Instruction: System prompt defining behavior
- Model: LLM provider and model selection
- Tools: Functions the agent can use
Optional Features
- Memory: Conversation persistence
- Reasoning: Thought process patterns
- Config: Fine-tuned behavior settings
- Middleware: Request/response processing
Creating Agents
Basic Agent
```python
from aakit import Agent

# Create a simple agent
agent = Agent(
    name="assistant",
    instruction="You are a helpful AI assistant",
    model="gpt-4"
)

# Chat with the agent - no async needed!
response = agent.chat("What can you help me with?")
print(response)

# Or use async when needed (from inside an async function or notebook)
response = await agent.achat("What can you help me with?")
print(response)
```
Agent with Configuration
```python
from aakit import Agent, AgentConfig

# Configure agent behavior
config = AgentConfig(
    temperature=0.7,   # Control randomness
    max_tokens=2000,   # Limit response length
    timeout=30,        # Request timeout
    retry_max=3,       # Retry failed requests
    cache_ttl=3600,    # Cache responses for 1 hour
)

agent = Agent(
    name="configured_agent",
    instruction="You provide detailed technical answers",
    model="gpt-4",
    config=config
)
```
Multi-Model Agent
```python
from aakit import Agent

# Automatic fallback chain
agent = Agent(
    name="resilient_agent",
    instruction="You are a reliable assistant",
    model=["gpt-4", "claude-3-opus", "gpt-3.5-turbo"]
)

# Auto-detect the best available model
agent = Agent(
    name="smart_agent",
    instruction="You adapt to available resources",
    model="auto"  # Automatically selects the best available model
)
```
Stateful Conversations
```python
from aakit import Agent

agent = Agent(
    name="conversational_agent",
    instruction="You remember our conversation history",
    model="gpt-4",
    memory="redis"  # Enable persistent memory
)

# Method 1: Manual session management
response1 = agent.chat("My name is Alice", session_id="user_123")
response2 = agent.chat("What's my name?", session_id="user_123")
# Response: "Your name is Alice"

# Method 2: Conversation context manager (recommended)
with agent.conversation() as chat:
    r1 = chat.send("My name is Bob")
    r2 = chat.send("What's my name?")  # Automatically remembers!
    chat.save("conversation.json")     # Save for later
```
Agent Lifecycle
Initialization
The agent is created with its name, instruction, and model, and the configuration is validated.
Tool Registration
Functions are automatically converted to MCP-compatible tools.
Memory Setup
If configured, the memory backend is initialized for conversation persistence.
Ready State
The agent is ready to receive messages and execute tasks.
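Seen from user code, the same stages look roughly like this (a sketch using only the constructor, add_tool, and chat calls documented on this page; word_count is a hypothetical example tool):
```python
from aakit import Agent

def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    # Hypothetical example tool, not part of AA Kit
    return len(text.split())

# Initialization: name, instruction, and model are validated here;
# Memory setup: the "redis" backend is initialized because memory is configured
agent = Agent(
    name="lifecycle_demo",
    instruction="You answer questions about text",
    model="gpt-4",
    memory="redis"
)

# Tool registration: the function is converted to an MCP-compatible tool
agent.add_tool(word_count)

# Ready state: the agent can now receive messages and execute tasks
print(agent.chat("How many words are in 'to be or not to be'?"))
```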
Agent Methods
agent.chat(message, session_id=None)
Send a message to the agent and receive a response.
```python
response = agent.chat("Hello!", session_id="user_123")
```
agent.stream(message, session_id=None)
Stream responses in real-time for better UX.
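Because the stream is consumed with async for, it has to run inside a coroutine. A minimal runnable sketch (the agent definition and prompt are placeholders):
```python
import asyncio
from aakit import Agent

agent = Agent(
    name="storyteller",
    instruction="You tell very short stories",
    model="gpt-4"
)

async def main():
    # Print each chunk as soon as it arrives
    async for chunk in agent.stream("Tell me a story"):
        print(chunk, end="", flush=True)

asyncio.run(main())
```
The same call in its compact form: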
```python
async for chunk in agent.stream("Tell me a story"):
    print(chunk)
```
agent.serve_mcp(port=8080)
Serve the agent as an MCP server for universal access.
```python
agent.serve_mcp(port=8080, name="My Agent")
```
agent.add_tool(function)
Dynamically add tools to an agent after creation.
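A tool is an ordinary Python function. The helper below is a hypothetical example (not part of AA Kit) and gives the snippet that follows something concrete to register:
```python
def my_custom_function(city: str) -> str:
    """Return a short weather summary for a city."""
    # Hypothetical helper; a real tool might call an external API here
    return f"It is sunny in {city}."
```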
```python
agent.add_tool(my_custom_function)
```
Best Practices
Do's
- ✓ Use descriptive agent names
- ✓ Write clear, specific instructions
- ✓ Enable memory for conversations
- ✓ Use model fallbacks for reliability
- ✓ Configure appropriate timeouts
Don'ts
- ✗ Don't hardcode API keys
- ✗ Don't ignore error handling
- ✗ Don't use blocking operations
- ✗ Don't skip input validation
- ✗ Don't forget cleanup on shutdown
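A short sketch that pulls several of these recommendations together. The environment-variable name and the broad exception handler are illustrative assumptions; check your provider setup and AA Kit's error types for specifics:
```python
import os
from aakit import Agent, AgentConfig

# Don't hardcode API keys: read provider credentials from the environment
# (OPENAI_API_KEY is an assumed variable name for illustration)
assert os.environ.get("OPENAI_API_KEY"), "Set your provider API key first"

agent = Agent(
    name="support_assistant",                     # descriptive name
    instruction="You answer billing questions clearly and concisely",
    model=["gpt-4", "claude-3-opus"],             # fallback chain for reliability
    memory="redis",                               # persistence for conversations
    config=AgentConfig(timeout=30, retry_max=3)   # sensible timeout and retries
)

def ask(user_input: str, session_id: str) -> str:
    text = user_input.strip()
    if not text:                                  # basic input validation
        return "Please enter a question."
    try:
        return agent.chat(text, session_id=session_id)
    except Exception as exc:                      # don't silently ignore errors
        return f"Sorry, something went wrong: {exc}"
```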
Next Steps
Now that you understand agents, learn how to extend their capabilities with tools.
Continue to Tools →