Latency and Performance

Siraya AI is designed with performance as a top priority and is heavily optimized to add as little latency as possible to your requests.

Base Latency

Under typical production conditions, Siraya AI adds approximately 100ms of latency to your requests. This minimal overhead is achieved through:

  • Edge computing using Cloudflare Workers to stay as close as possible to your application
  • Efficient caching of user and API key data at the edge
  • Optimized routing logic that minimizes processing time
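To see how much overhead the gateway adds in your own environment, you can time requests from your application. The sketch below is a minimal, generic timing wrapper; `request_fn` stands in for whatever client call your application actually makes (e.g. an HTTP POST to the completions endpoint), which is an assumption, not part of the Siraya AI API.

```python
import time

def measure_latency_ms(request_fn, *args, **kwargs):
    """Time a single request and return (result, elapsed_ms).

    `request_fn` is a placeholder for your own client call; comparing
    its timing against a direct call to the upstream provider gives a
    rough estimate of the gateway's added overhead.
    """
    start = time.perf_counter()
    result = request_fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms
```

Averaging over many requests (and discarding the first few, which may hit cold caches) gives a more stable estimate than a single measurement.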

Performance Considerations

Cache Warming

When Siraya AI's edge caches are cold (typically during the first 5 minutes of operation in a new region), you may experience slightly higher latency as the caches warm up. This normalizes once the caches are populated.

Credit Balance Checks

To maintain accurate billing and prevent overages, Siraya AI performs additional database checks when:

  • A user's credit balance is low (in the single digits of dollars)

Siraya AI expires caches more aggressively under these conditions to ensure proper billing, which increases latency until additional credits are added.
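One way to avoid this penalty is to monitor your balance and top up before it drops into the low range. The sketch below is a hypothetical client-side check; the $10 threshold is an assumption derived from the "single digits of dollars" condition above, not a documented constant.

```python
# Assumed threshold: balances below this trigger the extra
# billing checks described above ("single digits of dollars").
LOW_BALANCE_THRESHOLD_USD = 10.0

def balance_is_healthy(balance_usd: float,
                       threshold: float = LOW_BALANCE_THRESHOLD_USD) -> bool:
    """Return True when the balance is comfortably above the
    low-balance range, i.e. when no extra latency is expected."""
    return balance_usd >= threshold
```

Wiring a check like this into your monitoring lets you alert and add credits before requests start paying the extra-check cost.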

Model Fallback

When using provider routing, if the primary model or provider fails, Siraya AI automatically tries the next option. A failed initial completion necessarily adds latency to that specific request. Siraya AI tracks provider failures and attempts to intelligently route around unavailable providers, so this latency is not incurred on every request.
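The fallback behavior described above can be sketched as a simple ordered retry loop. This is an illustrative client-side model of the pattern, not Siraya AI's actual routing implementation; the `(name, call)` pairs are hypothetical stand-ins for upstream providers.

```python
def complete_with_fallback(providers, prompt):
    """Try each provider in order, returning the first success.

    `providers` is a list of (name, call) pairs; each `call` stands in
    for a request to one upstream model/provider. A failure on the
    first option adds its latency to the request before the next
    option is tried -- the cost described in the section above.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

A production router would additionally remember recent failures so that a known-bad provider is skipped up front rather than retried on every request.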

Best Practices

To achieve optimal performance with Siraya AI:

  1. Maintain a healthy credit balance. A recommended minimum balance of $50-100 ensures smooth operation without the aggressive cache expiry that low balances trigger.
  2. Use provider preferences. If you have specific latency requirements (whether time to first token or time to last), provider routing features can help you achieve your performance and cost goals.
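As a concrete illustration of the second practice, a request body with routing preferences might be assembled as below. The field names (`provider`, `sort`, `models`) are assumptions for the sketch; consult the Siraya AI API reference for the actual schema.

```python
def build_request(prompt, sort_by="latency", fallback_models=None):
    """Assemble a hypothetical completion request with routing hints.

    `sort_by` expresses a preference (e.g. lowest latency) and
    `fallback_models` an ordered fallback chain -- both illustrative
    field names, not a documented schema.
    """
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "provider": {"sort": sort_by},
    }
    if fallback_models:
        body["models"] = list(fallback_models)
    return body
```

Expressing preferences in the request keeps the routing decision server-side, so you benefit from the gateway's provider-failure tracking rather than re-implementing it in your client.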