POST /analyze-response
Checks an LLM response for bias, profanity, or semantic system-prompt leaks.
| Method | Auth | Time-out |
|---|---|---|
| POST | API Key | 5 s |
/analyze-prompt with a few additions as mentioned below -
Added field in request_body for detecting system prompt leakage :
this value is needed in the request body to check for system prompt leaks in the response
Cosine similarity (0 → 1) between your system prompt and the model’s reply.

