POST /analyze-prompt
Scans a raw user prompt for prompt injection, toxicity, banned topics, and other policy violations.
| Method | Auth | Time-out |
|---|---|---|
| POST | API Key | 5 s |
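A minimal client sketch for calling the endpoint with an API key and a client-side timeout matching the server's 5 s limit. The base URL and the `X-API-Key` header name are assumptions for illustration; substitute the values for your deployment.

```python
import json
import urllib.request

# Assumed base URL — replace with your deployment's endpoint.
API_URL = "https://api.example.com/analyze-prompt"


def build_request(body: dict, api_key: str) -> urllib.request.Request:
    """Construct the POST request; the X-API-Key header name is an assumption."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )


def analyze_prompt(body: dict, api_key: str) -> dict:
    # Client-side timeout mirrors the server's 5 s limit.
    with urllib.request.urlopen(build_request(body, api_key), timeout=5) as resp:
        return json.loads(resp.read())
```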
Request JSON
- The text of the prompt sent by the user.
- To block mentions of competitors, list their names under the corresponding field in the request body.
- To block specific topics in the incoming prompt, list them under the corresponding field.
- To enforce custom policies on prompts, supply them as a string; incoming prompts are checked against these policies and flagged if they violate them.
- Setting `check_safety` to `true` activates safety guardrails that check for toxicity, bias, slurs, violence, and more, in addition to the default checks for prompt-injection/jailbreak attempts.

Success — 200 OK
- Final verdict from the guardrails: whether the prompt can be allowed or not.
- Array of the violations detected in the prompt.
- Risk levels determined for each recognised violation.
- Nullable; contains information on the language detected for the prompt.
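The request and response fields above can be sketched as follows. The JSON key names (`prompt`, `competitors`, `banned_topics`, `policies`, `verdict`, `violations`, `risk_levels`, `language`) are hypothetical — this doc does not specify the actual keys — and the sample response is illustrative only.

```python
# Hypothetical field names — the actual JSON keys are not specified here.
def build_payload(prompt, competitors=None, banned_topics=None,
                  policies=None, check_safety=True):
    """Assemble an /analyze-prompt request body with illustrative key names."""
    payload = {"prompt": prompt, "check_safety": check_safety}
    if competitors:
        payload["competitors"] = competitors   # competitor names to block
    if banned_topics:
        payload["banned_topics"] = banned_topics  # topics to block
    if policies:
        payload["policies"] = policies         # custom policies as a string
    return payload


# Illustrative 200 OK body, mirroring the response fields described above.
sample_response = {
    "verdict": "blocked",
    "violations": ["prompt_injection"],
    "risk_levels": {"prompt_injection": "high"},
    "language": None,  # nullable language-detection info
}

if sample_response["verdict"] != "allowed":
    print("prompt rejected:", sample_response["violations"])
```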
Common Errors
| Code | Reason |
|---|---|
| 401 | Missing / invalid API Key |
| 422 | Body failed validation |
| 429 | Rate limit exceeded |
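On a 429 the usual remedy is to back off and retry. A minimal sketch; the `send` callable and the backoff parameters are illustrative conventions, not part of this API:

```python
import time


def call_with_backoff(send, max_retries=3, base_delay=1.0):
    """Retry a request on 429 with exponential backoff.

    `send` is any zero-argument callable returning an object with a
    `status_code` attribute (e.g. a wrapped HTTP call).
    """
    for attempt in range(max_retries + 1):
        resp = send()
        if resp.status_code != 429:
            return resp  # success or a non-rate-limit error: stop retrying
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 1 s, 2 s, 4 s, ...
    return resp  # still rate-limited after all retries
```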

