LiteLLM
Integration with Siraya AI via LiteLLM's OpenAI-Compatible Endpoints
Account & API Keys Setup
The first step to start using Siraya AI is to create an account and get your API key.
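Rather than hard-coding the key in your scripts, you can keep it in an environment variable (a sketch; `SIRAYA_API_KEY` is an arbitrary name chosen here, not one LiteLLM reads automatically — pass it explicitly via `api_key=os.environ["SIRAYA_API_KEY"]`):

```shell
# Replace the placeholder with the key from your Siraya AI account
export SIRAYA_API_KEY="sk-..."
```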
Usage - completion
```python
import litellm

response = litellm.completion(
    model="openai/<<Model Name>>",  # add `openai/` prefix so litellm routes to the OpenAI-compatible handler
    api_key="<<API key>>",  # api key for your OpenAI-compatible endpoint
    api_base="https://llm.siraya.pro/v1",  # API base of your custom OpenAI endpoint
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
)
print(response.json())
```
Usage - embedding
```python
import litellm

response = litellm.embedding(
    model="openai/qwen/qwen3-embedding-0.6b",  # add `openai/` prefix so litellm routes to the OpenAI-compatible handler
    api_key="<<API key>>",  # api key for your OpenAI-compatible endpoint
    api_base="https://llm.siraya.pro/v1",  # API base of your custom OpenAI endpoint
    input=["good morning from litellm"],
)
print(response.json())
```
Usage with LiteLLM Proxy Server
1. Modify the `config.yaml`
2. Start the proxy
3. Send a request to the LiteLLM Proxy Server
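Steps 1 and 2 might look like the following (a sketch: the entry uses LiteLLM's standard `model_list` config format, and the model name `qwen/qwen3-next-80b-a3b-instruct` is taken from the request example in this section — substitute your own model and key):

```yaml
model_list:
  - model_name: qwen/qwen3-next-80b-a3b-instruct   # name clients will request
    litellm_params:
      model: openai/qwen/qwen3-next-80b-a3b-instruct  # `openai/` prefix, as in the SDK examples above
      api_key: "<<API key>>"
      api_base: https://llm.siraya.pro/v1
```

Then start the proxy with `litellm --config config.yaml`; by default it listens on port 4000.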
```python
import openai

client = openai.OpenAI(
    api_key="sk-1234",  # pass litellm proxy key, if you're using virtual keys
    base_url="http://0.0.0.0:4000",  # litellm-proxy base url
)

response = client.chat.completions.create(
    model="qwen/qwen3-next-80b-a3b-instruct",
    messages=[
        {
            "role": "user",
            "content": "what llm are you",
        }
    ],
)
print(response.json())
```