Chapter 2: Claude API Fundamentals

The Messages API Deep Dive

22 min read Lesson 6 / 28

The Messages API

The Messages API is the foundation of every Claude integration. Understanding its structure — requests, responses, roles, and content blocks — is essential before building anything more complex.

Request Structure

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",       # Required: model identifier
    max_tokens=4096,                  # Required: maximum tokens to generate
    system="You are a senior Python engineer. Be concise.",  # Optional system prompt
    messages=[                        # Required: conversation history
        {"role": "user", "content": "What is the difference between a list and a tuple?"},
        {"role": "assistant", "content": "Lists are mutable; tuples are immutable."},
        {"role": "user", "content": "Give me a practical example of when to use each."},
    ],
    temperature=0.3,                  # Optional: 0.0 = deterministic, 1.0 = creative
    top_p=0.9,                        # Optional: nucleus sampling
)
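The API expects the conversation to start with a `user` turn, and rejects histories containing consecutive messages with the same role. A small validator (an illustrative helper, not part of the SDK) can catch malformed histories before they hit the network:

```python
def validate_messages(messages):
    """Sanity-check a messages list before sending it to the API.

    Illustrative only: mirrors the constraints the API enforces —
    non-empty history, first turn from the user, alternating roles.
    """
    if not messages:
        raise ValueError("messages must not be empty")
    if messages[0]["role"] != "user":
        raise ValueError("conversation must start with a user message")
    for prev, cur in zip(messages, messages[1:]):
        if prev["role"] == cur["role"]:
            raise ValueError(f"consecutive {cur['role']!r} messages")
```

Calling `validate_messages(...)` right before `client.messages.create(...)` turns a confusing API error into a clear local one.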

Response Structure

print(response.id)             # msg_01XFDUDYJgAACzvnptvVoYEL
print(response.model)          # claude-sonnet-4-5-20251101
print(response.stop_reason)    # end_turn | max_tokens | tool_use | stop_sequence
print(response.usage.input_tokens)   # 87
print(response.usage.output_tokens)  # 312

# Content is a list — can contain text blocks and tool_use blocks
for block in response.content:
    if block.type == "text":
        print(block.text)
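Because `content` is a list, reaching for `content[0].text` blindly can break when a non-text block (such as `tool_use`) comes first. A defensive helper, sketched here with a stand-in dataclass in place of the SDK's typed block objects, joins only the text blocks:

```python
from dataclasses import dataclass

@dataclass
class Block:
    """Stand-in for the SDK's content block objects (for illustration)."""
    type: str
    text: str = ""

def extract_text(content):
    """Concatenate the text of every text block, skipping other block types."""
    return "".join(block.text for block in content if block.type == "text")
```

With the real SDK you would pass `response.content` straight to `extract_text`.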

Multi-Turn Conversations

Build a simple REPL:

def chat():  # reuses the `client` created in the request example above
    messages = []
    print("Chat with Claude (type 'quit' to exit)\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "quit":
            break

        messages.append({"role": "user", "content": user_input})

        response = client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=2048,
            system="You are a helpful assistant.",
            messages=messages,
        )

        assistant_text = response.content[0].text
        messages.append({"role": "assistant", "content": assistant_text})

        print(f"\nClaude: {assistant_text}\n")

chat()

The same REPL in TypeScript:

import Anthropic from "@anthropic-ai/sdk";
import * as readline from "readline";

const client = new Anthropic();
const messages = [];

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function chat() {
  rl.question("You: ", async (input) => {
    // Check for quit before sending, so "quit" never reaches the API
    if (input.trim().toLowerCase() === "quit") {
      rl.close();
      return;
    }

    messages.push({ role: "user", content: input });

    const response = await client.messages.create({
      model: "claude-sonnet-4-5",
      max_tokens: 2048,
      messages,
    });

    const assistantText = response.content[0].text;
    messages.push({ role: "assistant", content: assistantText });
    console.log(`\nClaude: ${assistantText}\n`);

    chat();
  });
}

chat();

The Messages API is stateless: each request must include the full conversation so far. The conversation history you maintain in the messages array is what gives Claude memory across turns.
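Because that history is resent on every request, long conversations grow in cost per turn and eventually exhaust the context window. One common mitigation is trimming older turns. A minimal sketch (the window size of 20 is an arbitrary choice for illustration):

```python
def trim_history(messages, max_messages=20):
    """Keep only the most recent messages, re-aligned so the
    trimmed list still starts with a user turn."""
    if len(messages) <= max_messages:
        return messages
    trimmed = messages[-max_messages:]
    # Drop leading assistant messages so roles still alternate correctly
    while trimmed and trimmed[0]["role"] != "user":
        trimmed = trimmed[1:]
    return trimmed
```

In the REPL above you would call `messages = trim_history(messages)` before each `client.messages.create(...)`. More sophisticated strategies, such as summarizing dropped turns, build on the same idea.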