Quickstart
The Modelrules API provides a simple way to override any API parameter for OpenAI-compatible LLM providers. It's ideal for environments where LLM clients are constrained to specific parameters or don't allow flexible customization.
All configuration rules are applied server-side; if you prefer to manage them client-side, you'll need to run the project locally by cloning the repository.
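A minimal local setup might look like the following sketch. The repository URL and run commands are assumptions, since this guide doesn't specify them; substitute the actual values from the project's README.

git clone https://github.com/<owner>/modelrules.git   # replace <owner> with the actual repository owner
cd modelrules
npm install    # assumes a Node.js project; use the project's documented install step
npm run dev    # assumes a dev-server script; check the README for the real command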
Creating your first virtual API key
To get started with Modelrules, you'll need to create a virtual API key.
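Once you have a key, export it as an environment variable so the examples below can reference it. The variable name matches the curl example later in this guide; the value is a placeholder:

export RULES_API_KEY="<your-virtual-api-key>"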
Creating a ruleset for a given LLM provider
Use the new ruleset page to create a ruleset for a specific LLM provider or model. There, you can override API parameters and supply the required provider credentials.
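Conceptually, a ruleset pairs provider credentials with the parameter overrides you want applied to every request. The sketch below is hypothetical — the field names are illustrative, not the actual schema, which is defined by the new ruleset form:

{
  "name": "my-ruleset",
  "provider": "openai",
  "api_key": "<provider-api-key>",
  "overrides": {
    "temperature": 0.2,
    "max_tokens": 512
  }
}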
Making your first request
After creating a ruleset, send a request to the Modelrules API just as you would to any OpenAI-compatible endpoint; the API automatically applies your ruleset's parameters. To specify which ruleset to use, prepend its name followed by two colons to the model name. For example, with a ruleset named "my-ruleset" and the "gpt-3.5-turbo" model, set the model to "my-ruleset::gpt-3.5-turbo":
curl -X POST https://rules.exectx.run/api/chat/completions \
  -H "Authorization: Bearer $RULES_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-ruleset::gpt-3.5-turbo",
    "messages": [{
      "role": "user",
      "content": "What is the capital of France?"
    }]
  }'
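Since the endpoint is OpenAI-compatible, the response should follow the standard chat-completion schema. Assuming that, you can extract just the assistant's reply with jq:

curl -s -X POST https://rules.exectx.run/api/chat/completions \
  -H "Authorization: Bearer $RULES_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "my-ruleset::gpt-3.5-turbo", "messages": [{"role": "user", "content": "What is the capital of France?"}]}' \
  | jq -r '.choices[0].message.content'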