

The Rate Limiter workflow is designed to prevent abuse and ensure fair usage of your AI chatbot by controlling the number of requests a user (identified by IP) can make within a specified time period.
It uses Valkey, accessed through Redis nodes, as the backend store to count requests efficiently and enforce the limits.
Triggered by Another Workflow
ip → the identifier for the requester (usually the client's IP address).
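For reference, a minimal TypeScript sketch of the input contract the trigger expects. Only the `ip` field comes from this description; the interface name and example value are illustrative.

```typescript
// Illustrative shape of the data passed in by the calling workflow.
// Only the "ip" field is defined by the rate limiter; the interface
// name and the example value are hypothetical.
interface RateLimiterInput {
  ip: string; // requester identifier, usually the client IP address
}

const exampleInput: RateLimiterInput = { ip: "192.168.0.10" };
console.log(exampleInput.ip); // "192.168.0.10"
```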
Define Rate Limiter Settings
Generates a Redis key for the requester:
rateLimiter:<ip>
Defines two parameters:
rateLimit: maximum number of requests allowed (default 20).
rateLimitTTL: time window in seconds (default 3600 → 1 hour).
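As a rough sketch of what this step produces (not the actual node configuration), the key and the two parameters could be derived like this; the function and interface names are illustrative.

```typescript
// Illustrative sketch of the "Define Rate Limiter Settings" step:
// derive the Redis key from the incoming ip and attach the defaults.
// Function and interface names are hypothetical.
interface RateLimiterSettings {
  key: string;          // e.g. "rateLimiter:192.168.0.10"
  rateLimit: number;    // maximum requests per window (default 20)
  rateLimitTTL: number; // window length in seconds (default 3600 = 1 hour)
}

function defineRateLimiterSettings(ip: string): RateLimiterSettings {
  return {
    key: `rateLimiter:${ip}`,
    rateLimit: 20,
    rateLimitTTL: 3600,
  };
}
```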
Redis Counter
Increments the request counter stored at the generated key and applies rateLimitTTL as its expiry, so the count resets automatically when the time window ends.
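A sketch of the counting logic, assuming an ioredis client in TypeScript rather than the workflow's Redis node: INCR and EXPIRE together give a counter that expires when the window ends.

```typescript
// Sketch of the counter step using ioredis against a local Valkey/Redis
// instance (the workflow itself uses a Redis node, not this client).
import Redis from "ioredis";

const redis = new Redis(); // assumes Valkey/Redis reachable on localhost:6379

async function incrementCounter(key: string, rateLimitTTL: number): Promise<number> {
  const count = await redis.incr(key);
  // Apply the TTL only when the key is first created, so the window is
  // not extended by every subsequent request.
  if (count === 1) {
    await redis.expire(key, rateLimitTTL);
  }
  return count;
}
```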
Rate Limit Check
Compares the current count against rateLimit and sets the output rateLimit.rateLimitReached:
true → the user has exceeded the limit.
false → the user is within the allowed quota.
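The check itself is a simple comparison; a sketch with an illustrative function name:

```typescript
// Sketch of the "Rate Limit Check": compare the current count with the
// configured limit and expose the rateLimitReached flag.
function checkRateLimit(count: number, rateLimit: number): { rateLimitReached: boolean } {
  return { rateLimitReached: count > rateLimit };
}

// e.g. checkRateLimit(21, 20) → { rateLimitReached: true }
```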
Suppose the rate limit is set to 20 requests per 24 hours:
192.168.0.10 makes a request.
The counter stored at rateLimiter:192.168.0.10 is incremented.
As long as the count stays at or below 20, rateLimitReached = false.
Once the count exceeds 20 within the window, rateLimitReached = true.

This workflow is called from the chat and text-to-speech workflows before processing the request.
Check the output rateLimitReached:
true → block or delay the request and return a rate-limit message.
false → allow normal chatbot processing.
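On the calling side (for example, the chat workflow), the flow might look like the following sketch, which imports the hypothetical helpers from the earlier snippets; the real workflows branch on the node output instead.

```typescript
// Hypothetical caller-side sketch combining the helpers sketched above.
// The module path and function names are illustrative, not actual workflow nodes.
import {
  defineRateLimiterSettings,
  incrementCounter,
  checkRateLimit,
} from "./rateLimiter";

async function handleChatRequest(ip: string, message: string): Promise<string> {
  const settings = defineRateLimiterSettings(ip);
  const count = await incrementCounter(settings.key, settings.rateLimitTTL);
  const { rateLimitReached } = checkRateLimit(count, settings.rateLimit);

  if (rateLimitReached) {
    // Block the request and return a rate-limit message.
    return "Rate limit reached. Please try again later.";
  }
  // Within quota: continue with normal chatbot processing (placeholder).
  return `Processing message: ${message}`;
}
```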
You can adjust the limit and TTL in the Define Rate Limiter Settings node to fit your needs. This is a simple implementation and can be extended with features like user subscriptions or premium tiers.