Skip to main content

POST /analyze-prompt

Scans a raw user prompt for injections, toxicity, banned topics, etc.
MethodAuthTime-out
POSTAPI Key5 s

Request JSON

{
  "prompt": "string",                  // REQUIRED
  "competitor_list": ["string"] | null,
  "banned_topics":  ["string"] | null,
  "check_safety":   true | false | null,
  "policy":         "string" | null
}
prompt
string
required
the text of the prompt sent by the user
competitor_list
array of string
if you want to block mentions of any competitors add their names in the request body under this field
banned_topics
array of string
if you want to block any topics in the incoming prompt, add them here under this field
policy
string
to enforce custom policies on the prompts add them to this field as a string, this would ensure the incoming prompts are abiding by the set policies and will be flagged if they don’t
check_safety
boolean
required
having check_safety as True activates the safety guardrails that check for things like toxicity, bias, slurs, violence and more in addition to the existing checks for prompt injection/jailbreak attempts

Success — 200 OK

{
  "is_valid": false,
  "recognized_violations": ["TOXIC_PROMPT"],
  "risk_levels": { "TOXIC_PROMPT": "medium" },
  "language_info": { "detected_language": "en", "confidence_score": 0.99 }
}
is_valid
boolean
default:"false"
final verdict of the guardrails to convey if the prompt can be allowed or not
recognised_violations
array
array of different violations detected in the prompt
risk_levels
json
determined risk levels for all the recognised violations. nullable
language_info
json
contains info on the detected language for the prompt

Common Errors

CodeReason
401Missing / invalid API Key
422Body failed validation
429Rate limit exceeded