Strands Agents is a simple yet powerful SDK that takes a model-driven approach to building and running AI agents. From simple conversational assistants to complex autonomous workflows, from local development to production deployment, Strands Agents scales with your needs.
Feature Overview
- Lightweight & Flexible: Simple agent loop that just works and is fully customizable
- Model Agnostic: Support for Amazon Bedrock, Anthropic, Gemini, LiteLLM, Llama, Ollama, OpenAI, Writer, and custom providers
- Advanced Capabilities: Multi-agent systems, autonomous agents, and streaming support
- Built-in MCP: Native support for Model Context Protocol (MCP) servers, enabling access to thousands of pre-built tools
Quick Start
```bash
# Install Strands Agents
pip install strands-agents strands-agents-tools
```

```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
agent("What is the square root of 1764")
```
Note: For the default Amazon Bedrock model provider, you'll need AWS credentials configured and model access enabled for Claude 4 Sonnet in the us-west-2 region. See the Quickstart Guide for details on configuring other model providers.
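Credentials can be supplied through any standard AWS mechanism. For example (the key values and profile name below are placeholders, not real values):

```shell
# Option 1: environment variables (values are placeholders)
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_DEFAULT_REGION="us-west-2"

# Option 2: a named profile configured via the AWS CLI
aws configure --profile my-strands-profile
export AWS_PROFILE=my-strands-profile
```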
Installation
Ensure you have Python 3.10+ installed, then:
```bash
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows use: .venv\Scripts\activate

# Install Strands and tools
pip install strands-agents strands-agents-tools
```
Features at a Glance
Python-Based Tools
Easily build tools using Python decorators:
```python
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count words in text.

    This docstring is used by the LLM to understand the tool's purpose.
    """
    return len(text.split())

agent = Agent(tools=[word_count])
response = agent("How many words are in this sentence?")
```
Hot Reloading from Directory:
Enable automatic tool loading and reloading from the ./tools/ directory:
```python
from strands import Agent

# Agent will watch ./tools/ directory for changes
agent = Agent(load_tools_from_directory=True)
response = agent("Use any tools you find in the tools directory")
```
MCP Support
Seamlessly integrate Model Context Protocol (MCP) servers:
```python
from strands import Agent
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters

aws_docs_client = MCPClient(
    lambda: stdio_client(
        StdioServerParameters(
            command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"]
        )
    )
)

with aws_docs_client:
    agent = Agent(tools=aws_docs_client.list_tools_sync())
    response = agent("Tell me about Amazon Bedrock and how to use it with Python")
```
Multiple Model Providers
Support for various model providers:
```python
from strands import Agent
from strands.models import BedrockModel
from strands.models.ollama import OllamaModel
from strands.models.llamaapi import LlamaAPIModel
from strands.models.gemini import GeminiModel

# Bedrock
bedrock_model = BedrockModel(
    model_id="us.amazon.nova-pro-v1:0",
    temperature=0.3,
    streaming=True,  # Enable/disable streaming
)
agent = Agent(model=bedrock_model)
agent("Tell me about Agentic AI")

# Google Gemini
gemini_model = GeminiModel(
    client_args={
        "api_key": "your_gemini_api_key",
    },
    model_id="gemini-2.5-flash",
    params={"temperature": 0.7},
)
agent = Agent(model=gemini_model)
agent("Tell me about Agentic AI")

# Ollama
ollama_model = OllamaModel(
    host="http://localhost:11434",
    model_id="llama3",
)
agent = Agent(model=ollama_model)
agent("Tell me about Agentic AI")

# Llama API
llama_model = LlamaAPIModel(
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)
agent = Agent(model=llama_model)
response = agent("Tell me about Agentic AI")
```
Built-in providers:
- Amazon Bedrock
- Anthropic
- Gemini
- Cohere
- LiteLLM
- llama.cpp
- LlamaAPI
- MistralAI
- Ollama
- OpenAI
- OpenAI Responses API
- SageMaker
- Writer
Custom providers can be implemented by following the Custom Providers guide.
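At its core, a provider turns a list of messages into a streamed response. The sketch below illustrates only that shape; the `EchoModel` class and its `stream` method are hypothetical stand-ins, not the real Strands base-class interface (see the Custom Providers guide for the actual contract):

```python
from typing import Iterator

# Hypothetical stand-in for a model-provider contract; the real Strands
# interface is documented in the Custom Providers guide, not here.
class EchoModel:
    """A toy provider that streams the last user message back, word by word."""

    def __init__(self, model_id: str = "echo-v1"):
        self.model_id = model_id

    def stream(self, messages: list[dict]) -> Iterator[str]:
        # Find the most recent user message and yield it one chunk at a time,
        # mimicking how a real provider streams completion deltas.
        last_user = next(
            m["content"] for m in reversed(messages) if m["role"] == "user"
        )
        for word in last_user.split():
            yield word + " "

model = EchoModel()
chunks = list(model.stream([{"role": "user", "content": "hello from strands"}]))
print("".join(chunks).strip())  # → hello from strands
```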
Example tools
Strands offers an optional strands-agents-tools package with pre-built tools for quick experimentation:
```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
agent("What is the square root of 1764")
```
It's also available on GitHub via strands-agents/tools.
Bidirectional Streaming
⚠️ Experimental Feature: Bidirectional streaming is currently in experimental status. APIs may change in future releases as we refine the feature based on user feedback and evolving model capabilities.
Build real-time voice and audio conversations with persistent streaming connections. Unlike traditional request-response patterns, bidirectional streaming maintains long-running conversations where users can interrupt, provide continuous input, and receive real-time audio responses. Get started with your first BidiAgent by following the Quickstart guide.
Supported Model Providers:
- Amazon Nova Sonic (v1, v2)
- Google Gemini Live
- OpenAI Realtime API
Installation:
```bash
# Server-side only (no audio I/O dependencies)
pip install strands-agents[bidi]

# With audio I/O support (includes PyAudio dependency)
pip install strands-agents[bidi,bidi-io]
```
Quick Example:
```python
import asyncio

from strands.experimental.bidi import BidiAgent
from strands.experimental.bidi.models import BidiNovaSonicModel
from strands.experimental.bidi.io import BidiAudioIO, BidiTextIO
from strands.experimental.bidi.tools import stop_conversation
from strands_tools import calculator

async def main():
    # Create bidirectional agent with Nova Sonic v2
    model = BidiNovaSonicModel()
    agent = BidiAgent(model=model, tools=[calculator, stop_conversation])

    # Setup audio and text I/O (requires bidi-io extra)
    audio_io = BidiAudioIO()
    text_io = BidiTextIO()

    # Run with real-time audio streaming
    # Say "stop conversation" to gracefully end the conversation
    await agent.run(
        inputs=[audio_io.input()],
        outputs=[audio_io.output(), text_io.output()],
    )

if __name__ == "__main__":
    asyncio.run(main())
```
Note: `BidiAudioIO` and `BidiTextIO` require the `bidi-io` extra. For server-side deployments where audio I/O is handled by clients (browsers, mobile apps), install only `strands-agents[bidi]` and implement custom input/output handlers using the `BidiInput` and `BidiOutput` protocols.
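For illustration only, a custom server-side output handler might look like the sketch below. The single async `handle` method and the event dictionaries are assumptions made for this sketch, not the actual `BidiOutput` protocol definition:

```python
import asyncio
from typing import Protocol

# Assumed shape of an output protocol for this sketch; the real BidiOutput
# protocol in strands.experimental.bidi may differ.
class OutputProtocol(Protocol):
    async def handle(self, event: dict) -> None: ...

class WebsocketTextOutput:
    """Hypothetical handler that forwards text events to a connected client."""

    def __init__(self):
        self.sent: list[str] = []  # stand-in for a websocket send queue

    async def handle(self, event: dict) -> None:
        # Forward only text events; audio playback stays on the client side.
        if event.get("type") == "text":
            self.sent.append(event["data"])

async def demo():
    out = WebsocketTextOutput()
    await out.handle({"type": "text", "data": "hello"})
    await out.handle({"type": "audio", "data": b"\x00\x01"})  # ignored
    return out.sent

print(asyncio.run(demo()))  # → ['hello']
```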
Configuration Options:
```python
from strands.experimental.bidi.models import BidiNovaSonicModel

# Configure audio settings and turn detection (v2 only)
model = BidiNovaSonicModel(
    provider_config={
        "audio": {
            "input_rate": 16000,
            "output_rate": 16000,
            "voice": "matthew",
        },
        "turn_detection": {
            "endpointingSensitivity": "MEDIUM"  # HIGH, MEDIUM, or LOW
        },
        "inference": {
            "max_tokens": 2048,
            "temperature": 0.7,
        },
    }
)

# Configure I/O devices
audio_io = BidiAudioIO(
    input_device_index=0,   # Specific microphone
    output_device_index=1,  # Specific speaker
    input_buffer_size=10,
    output_buffer_size=10,
)

# Text input mode (type messages instead of speaking)
text_io = BidiTextIO()
await agent.run(
    inputs=[text_io.input()],  # Use text input
    outputs=[audio_io.output(), text_io.output()],
)

# Multi-modal: both audio and text input
await agent.run(
    inputs=[audio_io.input(), text_io.input()],  # Speak OR type
    outputs=[audio_io.output(), text_io.output()],
)
```
Documentation
For detailed guidance and examples, explore our documentation.
Contributing ❤️
We welcome contributions! See our Contributing Guide for details on:
- Reporting bugs & features
- Development setup
- Contributing via Pull Requests
- Code of Conduct
- Reporting of security issues
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Security
See CONTRIBUTING for more information.