Resumable streams let useChat reconnect to an ongoing message generation after page reloads, network interruptions, or tab switches. Without them, users lose partial AI responses and must regenerate from scratch.
Architecture Overview
- POST /api/chat - Creates the stream, saves the message with `activeStreamId`, and publishes chunks to Redis
- GET /api/chat/[id]/stream - Resumes the stream from Redis or returns the finalized message
How It Works
1. Stream Creation (POST /api/chat)
When you receive a new chat request, generate a stream ID, save a placeholder assistant message whose metadata carries that ID, and start streaming. The `activeStreamId` in the message metadata signals that the stream is in-flight and can be resumed.
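A minimal sketch of the POST handler, assuming ioredis for the chunk buffer and pub/sub, the AI SDK's `streamText` with the OpenAI provider, and a hypothetical `db.saveMessage` persistence helper:

```ts
// app/api/chat/route.ts -- sketch, not a drop-in implementation
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import Redis from 'ioredis';
import { db } from '@/lib/db'; // hypothetical persistence helper

const redis = new Redis(process.env.REDIS_URL!);

export async function POST(req: Request) {
  const { id: chatId, messages } = await req.json();
  const streamId = crypto.randomUUID();

  // Save a placeholder assistant message first so a later resume can find it.
  // activeStreamId in the metadata marks the stream as in-flight.
  const messageId = await db.saveMessage({
    chatId,
    role: 'assistant',
    content: '',
    metadata: { activeStreamId: streamId },
  });

  const result = streamText({
    model: openai('gpt-4o'), // any provider/model works here
    messages,
  });

  // Fan chunks out three ways: to the HTTP response, to a Redis list
  // (so resumers can replay from the start), and to a pub/sub channel
  // (so resumers connected to other instances receive new chunks live).
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for await (const chunk of result.textStream) {
        await redis.rpush(`stream:${streamId}:chunks`, chunk);
        await redis.publish(`stream:${streamId}`, chunk);
        controller.enqueue(encoder.encode(chunk));
      }
      await redis.publish(`stream:${streamId}`, '__END__'); // completion marker for subscribers
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}
```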
2. Stream Context Initialization
You need Redis for cross-instance pub/sub.
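A minimal sketch of a shared stream context, assuming ioredis and a `REDIS_URL` environment variable:

```ts
// lib/stream-context.ts -- sketch assuming ioredis
import Redis from 'ioredis';

export function createStreamContext() {
  const url = process.env.REDIS_URL;
  if (!url) return null; // no Redis -> resumable streams are disabled (see Fallback Behavior)

  // A subscribed ioredis connection cannot issue regular commands,
  // so keep separate publisher and subscriber clients.
  const publisher = new Redis(url);
  const subscriber = new Redis(url);

  return { publisher, subscriber };
}

export const streamContext = createStreamContext();
```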
3. Client Resume Detection
Your client detects partial messages and triggers a resume.
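A client-side sketch. The message shape (a v4-style `content` string plus `metadata.activeStreamId`) and the manual fetch-and-patch resume are assumptions; adapt them to your AI SDK version, since newer releases also ship built-in resume support on `useChat`:

```tsx
// components/chat.tsx -- client-side sketch
'use client';

import { useEffect } from 'react';
import { useChat } from '@ai-sdk/react';
import type { Message } from 'ai';

export function Chat({ chatId, initialMessages }: { chatId: string; initialMessages: Message[] }) {
  const { messages, setMessages } = useChat({
    id: chatId,
    initialMessages,
    streamProtocol: 'text', // matches the plain-text POST sketch above
  });

  useEffect(() => {
    // A trailing assistant message that still carries activeStreamId was cut off mid-stream.
    const last = initialMessages[initialMessages.length - 1];
    const activeStreamId = (last as any)?.metadata?.activeStreamId;
    if (!last || last.role !== 'assistant' || !activeStreamId) return;

    // Resume: the endpoint replays the buffer from the start (or returns the
    // finalized text), so rebuild the message content as chunks arrive.
    (async () => {
      const res = await fetch(`/api/chat/${chatId}/stream`);
      if (!res.ok || !res.body) return;
      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      let content = '';
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        content += decoder.decode(value, { stream: true });
        setMessages(prev => prev.map(m => (m.id === last.id ? { ...m, content } : m)));
      }
    })();
    // eslint-disable-next-line react-hooks/exhaustive-deps -- run once on mount
  }, []);

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.content}</div>
      ))}
    </div>
  );
}
```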
4. Stream Resume (GET /api/chat/[id]/stream)
The resume endpoint handles three scenarios: the stream is still active in Redis, the message has already been finalized in the database, or the Redis stream is missing and the database needs a second look.
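A sketch of the resume endpoint under the same assumptions (ioredis, plain-text chunks, a hypothetical `db.getLatestAssistantMessage` helper), covering all three scenarios:

```ts
// app/api/chat/[id]/stream/route.ts -- sketch; db helpers are hypothetical
import Redis from 'ioredis';
import { db } from '@/lib/db'; // hypothetical persistence helper

const redis = new Redis(process.env.REDIS_URL!);

export async function GET(_req: Request, { params }: { params: { id: string } }) {
  const message = await db.getLatestAssistantMessage(params.id);
  if (!message) return new Response(null, { status: 204 }); // nothing to resume

  // Scenario 1: no activeStreamId -> the message is already finalized.
  // Return the complete content in one chunk (the appendMessage fallback).
  if (!message.metadata?.activeStreamId) {
    return new Response(message.content, {
      headers: { 'Content-Type': 'text/plain; charset=utf-8' },
    });
  }

  const streamId = message.metadata.activeStreamId;
  const buffered = await redis.lrange(`stream:${streamId}:chunks`, 0, -1);

  // Scenario 2: Redis has no trace of the stream. Re-read the message in case
  // finalization raced with this request and already cleared activeStreamId.
  if (buffered.length === 0) {
    const fresh = await db.getLatestAssistantMessage(params.id);
    if (!fresh?.metadata?.activeStreamId) {
      return new Response(fresh?.content ?? null, { status: fresh ? 200 : 204 });
    }
  }

  // Scenario 3: the stream is still in flight. Replay the buffered chunks,
  // then subscribe for live chunks until the end marker arrives.
  const subscriber = redis.duplicate();
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for (const chunk of buffered) controller.enqueue(encoder.encode(chunk));
      await subscriber.subscribe(`stream:${streamId}`);
      subscriber.on('message', (_channel, chunk) => {
        if (chunk === '__END__') {
          subscriber.quit();
          controller.close();
        } else {
          controller.enqueue(encoder.encode(chunk));
        }
      });
    },
    cancel() {
      subscriber.quit(); // client went away again; stop listening
    },
  });

  return new Response(stream, { headers: { 'Content-Type': 'text/plain; charset=utf-8' } });
}
```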
5. Stream Finalization
When streaming completes, you clear the `activeStreamId`.
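As a fragment of the POST sketch above (again with the hypothetical `db.updateMessage` helper), finalization can live in `streamText`'s `onFinish` callback:

```ts
// Inside the POST handler: messageId and streamId come from the earlier steps.
const result = streamText({
  model: openai('gpt-4o'),
  messages,
  onFinish: async ({ text }) => {
    // Persist the complete text and clear the marker so later resume requests
    // are served the finalized message instead of an in-flight stream.
    await db.updateMessage(messageId, {
      content: text,
      metadata: { activeStreamId: null },
    });
  },
});
```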
Redis Key Cleanup
Set a TTL on stream keys to prevent Redis bloat.
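A sketch, assuming ioredis; one hour is an arbitrary choice, so size it to your longest generation plus a reconnect grace period:

```ts
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);
const STREAM_TTL_SECONDS = 60 * 60; // arbitrary 1-hour window

// Call this once the stream finishes (or right after creating the key) so
// abandoned chunk buffers expire instead of accumulating in Redis forever.
export async function expireStreamKeys(streamId: string) {
  await redis.expire(`stream:${streamId}:chunks`, STREAM_TTL_SECONDS);
}
```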
Fallback Behavior
Without Redis, resumable streams are disabled; chat still streams, it just cannot be resumed after a disconnect.
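For example, reusing the nullable `streamContext` from the initialization sketch (these are fragments of the two route handlers above):

```ts
// POST /api/chat: without Redis, skip buffering/publishing and stream directly.
if (!streamContext) {
  return result.toTextStreamResponse();
}

// GET /api/chat/[id]/stream: without Redis there is nothing to resume.
if (!streamContext) {
  return new Response(null, { status: 204 });
}
```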
Message Flow
POST /api/chat → placeholder message saved with `activeStreamId` → chunks streamed to the client and published to Redis → client disconnects → on reload, the client detects the partial message → GET /api/chat/[id]/stream replays buffered chunks and subscribes for the rest → generation finishes and `activeStreamId` is cleared.
Key Implementation Details
- `activeStreamId` as state marker - Message metadata tracks stream state
- Placeholder message first - Save before streaming so resume can find it
- Redis pub/sub - Enables cross-instance resumption in multi-pod deployments
- `appendMessage` fallback - If the stream has already finished, send the complete message as a data chunk
- Race condition handling - Double-check the DB if the Redis stream is missing
Stopping Streams
Resumable streams stay compatible with stopping as long as you do not wire `request.signal` into `streamText`. Use a server-owned AbortController and a persisted stop flag instead.
See Stop Resumable Streams.
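For reference, a minimal sketch of that pattern (the key names and the polling interval are illustrative, not taken from the linked guide):

```ts
// stop.ts -- sketch: server-owned AbortController plus a persisted stop flag.
// Do NOT wire request.signal into streamText: resumable streams must survive
// client disconnects, so only an explicit stop should abort generation.
import { streamText, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);

export function startGeneration(streamId: string, messages: CoreMessage[]) {
  const controller = new AbortController();

  // Poll the persisted flag; any instance can set it via the stop endpoint below.
  const poll = setInterval(async () => {
    if (await redis.get(`stream:${streamId}:stopped`)) controller.abort();
  }, 1000);

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    abortSignal: controller.signal, // server-owned, not request.signal
  });

  // Stop polling once the generation settles (finished or aborted).
  result.text.then(() => clearInterval(poll), () => clearInterval(poll));

  return result;
}

// Called from a stop endpoint (e.g. POST /api/chat/[id]/stop).
export async function stopGeneration(streamId: string) {
  await redis.set(`stream:${streamId}:stopped`, '1', 'EX', 300); // flag expires after 5 minutes
}
```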