Skip to main content

POST /analyze-response

Checks an LLM response for bias, profanity, or semantic system-prompt leaks.
MethodAuthTime-out
POSTAPI Key5 s
Request and Response bodies are same as in /analyze-prompt with a few additions as mentioned below - Added field in request_body for detecting system prompt leakage :
system_prompt
string
this value is needed in the request body to check for system prompt leaks in the response
Field for system prompt leakage score in response body :
system_prompt_leakage_score
string
Cosine similarity (0 → 1) between your system prompt and the model’s reply.