Claude API Deep Dive: Building with Anthropic’s Models

Anthropic’s Claude has emerged as one of the most capable AI models available, particularly excelling at nuanced reasoning, coding, and following complex instructions. With the release of Claude 3.5 Sonnet, Anthropic has delivered a model that rivals GPT-4 while offering capabilities like computer use and exceptional code generation.

In this deep dive, I’ll walk you through everything you need to build production applications with the Claude API, from basic completions to advanced features like tool use and computer interaction.

What You’ll Learn

  • Understanding the Claude 3 model family
  • Messages API for conversations
  • Tool use (function calling) with Claude
  • Vision capabilities for image analysis
  • Computer use for UI automation
  • Production patterns and best practices

Figure 1: The Claude 3 Model Family – Comparing capabilities and use cases

The Claude 3 Model Family

Anthropic offers three tiers of Claude 3 models, plus the newer Claude 3.5 Sonnet:

Model               Context       Best For                          Input / Output (per 1M tokens)
Claude 3.5 Sonnet   200K tokens   Best overall, coding, reasoning   $3 / $15
Claude 3 Opus       200K tokens   Complex analysis, research        $15 / $75
Claude 3 Sonnet     200K tokens   Balanced performance              $3 / $15
Claude 3 Haiku      200K tokens   Fast, cost-effective              $0.25 / $1.25

💡 Recommendation

Claude 3.5 Sonnet is the sweet spot for most applications. It outperforms Claude 3 Opus on many benchmarks while being 5x cheaper. Use Haiku for high-volume, latency-sensitive tasks where cost is critical.
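
If you route requests programmatically, the table above maps naturally onto a small helper. Here is a minimal sketch; the tier labels and the classify_task() helper are hypothetical placeholders for your own routing logic:

# Minimal model-routing sketch. The tier labels and classify_task() are
# hypothetical placeholders, not part of the Anthropic SDK.
MODEL_BY_TIER = {
    "simple": "claude-3-haiku-20240307",      # classification, extraction, routing
    "default": "claude-3-5-sonnet-20241022",  # coding, reasoning, most workloads
    "deep": "claude-3-opus-20240229",         # complex analysis and research
}

def pick_model(task_description: str) -> str:
    """Choose a model ID based on a rough task classification."""
    tier = classify_task(task_description)  # e.g., a keyword heuristic or a Haiku call
    return MODEL_BY_TIER.get(tier, MODEL_BY_TIER["default"])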

Getting Started

Installation

# Install the Anthropic Python SDK
pip install anthropic

# Async support (AsyncAnthropic) ships with the SDK; installing httpx explicitly is optional
pip install anthropic httpx

Authentication

from anthropic import Anthropic
import os

# The SDK automatically reads the ANTHROPIC_API_KEY environment variable
client = Anthropic()

# Or explicitly pass the key
client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))

Messages API

The Messages API is Claude’s primary interface for conversation:

Basic Usage

from anthropic import Anthropic

client = Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system="You are a helpful assistant specializing in Python programming.",
    messages=[
        {"role": "user", "content": "How do I implement a binary search in Python?"}
    ]
)

print(message.content[0].text)

# Access usage information
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")

Multi-Turn Conversations

from anthropic import Anthropic
from typing import List, Dict

class ClaudeConversation:
    """Manage multi-turn conversations with Claude."""
    
    def __init__(self, system_prompt: str, model: str = "claude-3-5-sonnet-20241022"):
        self.client = Anthropic()
        self.model = model
        self.system = system_prompt
        self.messages: List[Dict] = []
    
    def chat(self, user_message: str) -> str:
        """Send a message and get a response."""
        self.messages.append({"role": "user", "content": user_message})
        
        response = self.client.messages.create(
            model=self.model,
            max_tokens=2048,
            system=self.system,
            messages=self.messages
        )
        
        assistant_message = response.content[0].text
        self.messages.append({"role": "assistant", "content": assistant_message})
        
        return assistant_message
    
    def clear(self):
        """Clear conversation history."""
        self.messages = []


# Usage
conv = ClaudeConversation("You are a helpful coding assistant.")
print(conv.chat("What is a decorator in Python?"))
print(conv.chat("Show me an example with arguments."))  # Claude remembers context

Tool Use

Claude’s tool use (function calling) allows the model to invoke external functions:

Figure 2: Claude Tool Use – How Claude invokes external functions

from anthropic import Anthropic
import json

client = Anthropic()

# Define tools
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather in a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and state, e.g., San Francisco, CA"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["location"]
        }
    },
    {
        "name": "calculate",
        "description": "Perform mathematical calculations",
        "input_schema": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Mathematical expression to evaluate"
                }
            },
            "required": ["expression"]
        }
    }
]

# Implement the actual functions
def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    """Mock weather API."""
    return {"location": location, "temperature": 72, "unit": unit, "conditions": "sunny"}

def calculate(expression: str) -> str:
    """Evaluate a basic arithmetic expression (demo only: eval is restricted
    to a character whitelist, which is not a production-grade sandbox)."""
    try:
        # Only allow digits, arithmetic operators, and parentheses
        allowed = set("0123456789+-*/.() ")
        if all(c in allowed for c in expression):
            return str(eval(expression))
        return "Invalid expression"
    except Exception:
        return "Error"

tool_functions = {
    "get_weather": get_weather,
    "calculate": calculate
}

def chat_with_tools(user_message: str):
    """Complete conversation with tool use."""
    messages = [{"role": "user", "content": user_message}]
    
    # First API call
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=tools,
        messages=messages
    )
    
    # Check if Claude wants to use tools
    while response.stop_reason == "tool_use":
        # Find tool use blocks
        tool_uses = [block for block in response.content if block.type == "tool_use"]
        
        # Add assistant response to messages
        messages.append({"role": "assistant", "content": response.content})
        
        # Execute tools and collect results
        tool_results = []
        for tool_use in tool_uses:
            # Execute the function
            func = tool_functions[tool_use.name]
            result = func(**tool_use.input)
            
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": json.dumps(result)
            })
        
        # Add tool results
        messages.append({"role": "user", "content": tool_results})
        
        # Continue the conversation
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            tools=tools,
            messages=messages
        )
    
    # Return final text response
    return response.content[0].text

# Usage
print(chat_with_tools("What's the weather in Boston and what is 15% of 250?"))

Vision Capabilities

Claude 3 models can analyze images directly:

from anthropic import Anthropic
import base64
import httpx

client = Anthropic()

# Method 1: URL-based image
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "url",
                        "url": "https://example.com/image.jpg"
                    }
                },
                {
                    "type": "text",
                    "text": "Describe this image in detail."
                }
            ]
        }
    ]
)

print(response.content[0].text)

# Method 2: Base64-encoded image
def analyze_local_image(image_path: str, prompt: str) -> str:
    """Analyze a local image file."""
    # Read and encode the image
    with open(image_path, "rb") as f:
        image_data = base64.standard_b64encode(f.read()).decode("utf-8")
    
    # Determine media type
    if image_path.endswith(".png"):
        media_type = "image/png"
    elif image_path.endswith((".jpg", ".jpeg")):
        media_type = "image/jpeg"
    elif image_path.endswith(".gif"):
        media_type = "image/gif"
    elif image_path.endswith(".webp"):
        media_type = "image/webp"
    else:
        raise ValueError("Unsupported image format")
    
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": media_type,
                            "data": image_data
                        }
                    },
                    {"type": "text", "text": prompt}
                ]
            }
        ]
    )
    
    return response.content[0].text


# Usage
result = analyze_local_image("screenshot.png", "Extract all text from this image")

Computer Use

Claude can interact with computer interfaces through the Computer Use feature (beta):

⚠️ Beta Feature

Computer Use is currently in beta. Use with appropriate safeguards and only in controlled environments. Never use on production systems without proper sandboxing.

from anthropic import Anthropic

client = Anthropic()

# Computer use tools
computer_tools = [
    {
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1920,
        "display_height_px": 1080,
        "display_number": 1
    }
]

# Example: Ask Claude to interact with a desktop
response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=4096,
    tools=computer_tools,
    messages=[
        {
            "role": "user",
            "content": "Open the calculator app and compute 25 * 17"
        }
    ],
    betas=["computer-use-2024-10-22"]
)

# Claude responds with tool_use blocks whose input describes an action, e.g.:
#   {"action": "screenshot"}
#   {"action": "mouse_move", "coordinate": [x, y]}
#   {"action": "left_click"}
#   {"action": "type", "text": "25 * 17"}
# Your code must execute each action and return the result (often a screenshot)
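
To actually drive a desktop, you wrap this call in an agent loop: execute each requested action, return its result, and repeat until Claude stops asking for tools. Here is a minimal sketch that reuses client and computer_tools from above; execute_computer_action() is a hypothetical helper you would implement yourself (for example with pyautogui) to perform the action and return tool_result content such as text or a base64-encoded screenshot block:

messages = [{"role": "user", "content": "Open the calculator app and compute 25 * 17"}]

while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        tools=computer_tools,
        messages=messages,
        betas=["computer-use-2024-10-22"]
    )
    if response.stop_reason != "tool_use":
        break  # Done: the final answer is in response.content

    # Echo Claude's turn, then execute each requested action
    messages.append({"role": "assistant", "content": response.content})

    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            result = execute_computer_action(block.input)  # hypothetical helper you implement
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": result
            })
    messages.append({"role": "user", "content": tool_results})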

Streaming Responses

For real-time applications, stream responses:

from anthropic import Anthropic

client = Anthropic()

# Streaming response
def stream_response(prompt: str):
    with client.messages.stream(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)
    print()  # Newline at end

stream_response("Explain quantum computing in simple terms")

# Async streaming
async def async_stream(prompt: str):
    from anthropic import AsyncAnthropic
    
    async_client = AsyncAnthropic()
    
    async with async_client.messages.stream(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    ) as stream:
        async for text in stream.text_stream:
            yield text
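
Because async_stream is an async generator, it has to be consumed inside an event loop, for example:

import asyncio

async def main():
    async for chunk in async_stream("Explain quantum computing in simple terms"):
        print(chunk, end="", flush=True)
    print()

asyncio.run(main())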

Best Practices

Error Handling

from anthropic import Anthropic, APIError, RateLimitError, APIConnectionError
import time
from functools import wraps

def retry_with_backoff(max_retries: int = 3, base_delay: float = 1.0):
    """Decorator for retrying API calls."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except RateLimitError as e:
                    if attempt == max_retries:
                        raise
                    # Use retry-after if available
                    wait = getattr(e, 'retry_after', delay)
                    print(f"Rate limited. Waiting {wait}s...")
                    time.sleep(wait)
                    delay *= 2
                except APIConnectionError:
                    if attempt == max_retries:
                        raise
                    time.sleep(delay)
                    delay *= 2
                except APIError as e:
                    status = getattr(e, "status_code", None)
                    if status is not None and 400 <= status < 500:
                        raise  # Don't retry client errors
                    if attempt == max_retries:
                        raise
                    time.sleep(delay)
                    delay *= 2
        return wrapper
    return decorator


@retry_with_backoff()
def safe_message(prompt: str) -> str:
    client = Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.content[0].text

System Prompt Best Practices

EFFECTIVE_SYSTEM_PROMPT = """
You are an expert Python developer assistant.

## Your Capabilities
- Write clean, well-documented Python code
- Explain complex concepts clearly
- Debug and optimize existing code
- Follow PEP 8 style guidelines

## Response Format
- Use code blocks with language specification
- Include docstrings and type hints
- Provide brief explanations before code
- Mention potential edge cases

## Constraints
- Only suggest standard library or widely-used packages
- Prioritize readability over cleverness
- Always consider error handling
"""

# Use in API call
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=2048,
    system=EFFECTIVE_SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Write a function to parse JSON safely"}]
)

Cost Optimization

Strategy                     Savings                           Implementation
Use Haiku for simple tasks   12x cheaper than Sonnet           Classification, extraction, routing
Prompt caching               90% on repeated prefixes          Cache long system prompts
Limit max_tokens             Variable                          Set appropriate limits
Batching                     50% via Message Batches API       For async, non-urgent tasks
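
Prompt caching is usually the quickest win. At the time of writing, you opt in by marking a long, stable prefix (such as a detailed system prompt) with cache_control so that repeated requests reuse it; check the current Anthropic docs for pricing and minimum cacheable prompt sizes. A minimal sketch, where LONG_SYSTEM_PROMPT stands in for your own lengthy prompt:

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,  # placeholder for a long, stable system prompt
            "cache_control": {"type": "ephemeral"}
        }
    ],
    messages=[{"role": "user", "content": "Write a function to parse JSON safely"}]
)

# When caching applies, response.usage reports cache_creation_input_tokens
# and cache_read_input_tokens alongside the usual token counts
print(response.usage)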

Key Takeaways

  • Claude 3.5 Sonnet offers the best balance of capability and cost
  • Tool use enables powerful integrations with external systems
  • Vision capabilities support image analysis out of the box
  • Computer use opens new automation possibilities (use carefully!)
  • Always implement error handling and retries
  • Use Haiku for high-volume, simpler tasks
  • Stream responses for better user experience

Claude’s combination of strong reasoning, coding ability, and unique features like computer use makes it a compelling choice for many AI applications. With these patterns and best practices, you’re ready to build production-grade systems with the Anthropic API.

Building with Claude? I’d love to hear about your projects—connect with me on LinkedIn to share your experiences.

