# Quickstart
Go from zero to a fine-tuned model in three API calls. This guide walks through a real example: building a customer support bot.
## Prerequisites
- A Tuned Tensor account
- An API key (create one in Dashboard → Settings → API Keys)
All examples use curl. Replace tt_your_api_key with your actual API key. For CLI usage, see the CLI Tool docs.
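The examples in this guide paste the key inline; a common convenience (our suggestion, not something the API requires) is to export it once and reference `$TT_API_KEY` instead:

```bash
# Store the key in an environment variable so it is not repeated in
# every command (replace with your real key).
export TT_API_KEY="tt_your_api_key"

# Each call can then use: -H "Authorization: Bearer $TT_API_KEY"
```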
## Step 1: Create a Behaviour Spec
A behaviour spec defines what your model should do. Here we're creating a customer support bot with guidelines, constraints, and example interactions:
```bash
curl -X POST https://api.tunedtensor.com/v1/behavior-specs \
  -H "Authorization: Bearer tt_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Customer Support Bot",
    "description": "Handles billing, account, and technical support questions",
    "system_prompt": "You are a helpful customer support agent for Acme SaaS...",
    "guidelines": [
      "Keep responses under 150 words",
      "Always acknowledge the user concern before providing a solution",
      "Use bullet points for multi-step instructions"
    ],
    "constraints": [
      "Never promise refunds without directing to the refund policy",
      "Do not make up pricing — refer to the pricing page"
    ],
    "examples": [
      {
        "input": "How do I cancel my subscription?",
        "output": "I understand you would like to cancel. Go to Settings > Billing > Cancel Plan."
      },
      {
        "input": "I was charged twice this month",
        "output": "I am sorry about the double charge. Please contact billing@acme.com."
      },
      {
        "input": "How much does the Pro plan cost?",
        "output": "For up-to-date pricing, please check acme.com/pricing."
      },
      {
        "input": "My dashboard is loading slowly",
        "output": "Try clearing your cache, using a different browser, or incognito mode."
      },
      {
        "input": "Can I get a refund?",
        "output": "Please review our refund policy at acme.com/refund-policy."
      }
    ],
    "base_model": "Qwen/Qwen2.5-7B-Instruct-Turbo"
  }'
```

The response includes the spec `id`; save it for the next step:
```json
{
  "data": {
    "id": "cafd8799-9180-482e-b0b2-c46d08e4b045",
    "name": "Customer Support Bot",
    "base_model": "Qwen/Qwen2.5-7B-Instruct-Turbo",
    "examples": [ ... ],
    "guidelines": [ ... ],
    "constraints": [ ... ],
    "created_at": "2026-03-06T08:27:33.492Z"
  }
}
```

## Step 2: Start a Run
A run compiles your spec into training data, augments your 5 examples into ~36 diverse training rows using AI, fine-tunes the model, and auto-evaluates the result:
```bash
curl -X POST https://api.tunedtensor.com/v1/behavior-specs/cafd8799-.../runs \
  -H "Authorization: Bearer tt_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "augment": true,
    "hyperparameters": {
      "n_epochs": 4,
      "lora_rank": 8,
      "lora_alpha": 16
    }
  }'
```

The run returns immediately with status `preparing`:
```json
{
  "data": {
    "id": "e0b7694b-2c65-4199-89a1-fc54a6a6010c",
    "run_number": 1,
    "status": "preparing",
    "spec_snapshot": { ... },
    "hyperparameters": { "augment": true, "n_epochs": 4, "lora_rank": 8 }
  }
}
```

Behind the scenes, the platform:
- Compiles your spec into JSONL training format
- Augments your examples into ~36 diverse training rows using Claude
- Uploads the dataset to Together AI
- Starts a LoRA fine-tuning job
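For intuition, each row in the compiled JSONL is a chat-formatted training example. One plausible shape, built from the spec's system prompt and one example pair (illustrative only; the exact compiled format is internal to the platform):

```json
{"messages": [{"role": "system", "content": "You are a helpful customer support agent for Acme SaaS..."}, {"role": "user", "content": "How do I cancel my subscription?"}, {"role": "assistant", "content": "I understand you would like to cancel. Go to Settings > Billing > Cancel Plan."}]}
```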
## Step 3: Check Run Status
Poll the run to track progress:
```bash
curl https://api.tunedtensor.com/v1/runs/e0b7694b-... \
  -H "Authorization: Bearer tt_your_api_key"
```

The run moves through these statuses:
| Status | What's happening |
|---|---|
| `preparing` | Compiling spec, augmenting examples, uploading dataset |
| `training` | Fine-tuning in progress on Together AI |
| `evaluating` | Auto-evaluating the fine-tuned model against your spec |
| `completed` | Done — eval results are available |
| `failed` | Something went wrong — check the error field |
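Rather than polling by hand, the loop can be scripted. A minimal sketch, assuming `jq` is installed and your key is exported as `$TT_API_KEY` (the helper names `is_terminal` and `wait_for_run` are our own, not part of the API):

```bash
# Echoes "yes" for terminal statuses, "no" otherwise.
is_terminal() {
  case "$1" in
    completed|failed) echo yes ;;
    *) echo no ;;
  esac
}

# Poll a run every 30 seconds until it completes or fails.
# Usage: wait_for_run e0b7694b-...
wait_for_run() {
  run_id="$1"
  while true; do
    status=$(curl -s "https://api.tunedtensor.com/v1/runs/$run_id" \
      -H "Authorization: Bearer $TT_API_KEY" | jq -r '.data.status')
    echo "run $run_id: $status"
    if [ "$(is_terminal "$status")" = "yes" ]; then break; fi
    sleep 30
  done
}
```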
When completed, the response includes eval results:
```json
{
  "data": {
    "status": "completed",
    "eval_summary": {
      "total": 5,
      "avg_score": 0.82,
      "pass_rate": 0.8,
      "scoring_method": "llm_judge",
      "regressions": 0
    },
    "_evals": [
      {
        "prompt": "Can I get a refund?",
        "expected": "Please review our refund policy...",
        "actual": "I understand you're looking for a refund...",
        "score": 0.9,
        "passed": true,
        "reasoning": "Correctly directs to the refund policy..."
      }
    ]
  }
}
```

## Step 4: Test in the Playground
Before iterating on your spec, test the fine-tuned model interactively in the Playground. You can compare it against the base model to see the effect of fine-tuning:
```bash
# List available models (base + your fine-tuned models)
curl https://api.tunedtensor.com/v1/playground/models \
  -H "Authorization: Bearer tt_your_api_key"

# Run inference against your fine-tuned model
curl -X POST https://api.tunedtensor.com/v1/playground/completions \
  -H "Authorization: Bearer tt_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-fine-tuned-model-id",
    "messages": [
      { "role": "system", "content": "You are a helpful customer support agent for Acme SaaS..." },
      { "role": "user", "content": "How do I cancel my subscription?" }
    ],
    "temperature": 0.7,
    "max_tokens": 512
  }'
```

The response includes the generated text, latency, and token usage:
```json
{
  "data": {
    "content": "I understand you would like to cancel your subscription. You can do so by going to Settings > Billing > Cancel Plan...",
    "latency_ms": 645,
    "usage": {
      "prompt_tokens": 42,
      "completion_tokens": 38
    }
  }
}
```

Or use the dashboard: go to Playground, enable Compare mode, select the base model and your fine-tuned model, and run the same prompt side by side.
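The side-by-side comparison can also be scripted against the completions endpoint. A sketch, assuming your key is in `$TT_API_KEY`; the helpers `build_payload` and `ask` are our own names, the model ids are placeholders, and prompts must not contain characters that need JSON escaping:

```bash
# Build the request body for /v1/playground/completions.
build_payload() {
  printf '{"model": "%s", "messages": [{"role": "user", "content": "%s"}], "temperature": 0.7, "max_tokens": 512}' "$1" "$2"
}

# Send one prompt to one model and print the raw JSON response.
ask() {
  curl -s -X POST https://api.tunedtensor.com/v1/playground/completions \
    -H "Authorization: Bearer $TT_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(build_payload "$1" "$2")"
}

# Usage (not run here): same prompt, base model vs fine-tuned model.
# ask "Qwen/Qwen2.5-7B-Instruct-Turbo" "How do I cancel my subscription?"
# ask "your-fine-tuned-model-id" "How do I cancel my subscription?"
```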
## Step 5: Iterate
Review the eval results. If some examples failed, update your spec with better examples or clearer guidelines, then start another run:
```bash
# Update the spec with a new example
curl -X PUT https://api.tunedtensor.com/v1/behavior-specs/cafd8799-... \
  -H "Authorization: Bearer tt_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "examples": [
      ...existing examples...,
      {
        "input": "How do I change my email address?",
        "output": "Go to Settings > Profile > Email and verify the new address."
      }
    ]
  }'

# Start another run
curl -X POST .../v1/behavior-specs/cafd8799-.../runs \
  -H "Authorization: Bearer tt_your_api_key" \
  -d '{"augment": true}'
```

Run #2 will be automatically compared to Run #1. The eval summary shows regressions (examples that got worse) and improvements.
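To eyeball two runs side by side, you can pull just the headline eval numbers for each. A minimal sketch, assuming `jq`, `$TT_API_KEY`, and the `eval_summary` fields shown in Step 3 (`summary` and `improved` are our own helper names):

```bash
# Print the headline eval numbers for one run.
summary() {
  curl -s "https://api.tunedtensor.com/v1/runs/$1" \
    -H "Authorization: Bearer $TT_API_KEY" \
    | jq '.data.eval_summary | {avg_score, pass_rate, regressions}'
}

# Echoes "yes" if the new avg_score beats the old one, else "no".
improved() {
  awk -v old="$1" -v new="$2" 'BEGIN { print ((new > old) ? "yes" : "no") }'
}

# Usage (not run here):
# summary <run-1-id>
# summary <run-2-id>
```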
## What's Next
- CLI Tool — full command reference
- Behaviour Specs API reference — full schema and endpoints
- Runs — cancellation, eval results, regression detection
- Playground — test and compare models interactively
- Authentication — API keys and response format