Initialization
ArgusClient.create()
Initializes a new client instance. This is the required method for creating a client.
api_key(str): Required. Your Argus API key. The key prefix (rsk_orsk_) determines the client tier.url(Optional[str]): An optional URL to connect to a self-hosted or alternative Argus environment.asset_id(Optional[str]): Platform Tier Only. The default Asset ID for all scans made with this client.session_id(Optional[str]): Platform Tier Only. The default Session ID for all scans.policy(Optional[Policy]): APolicydictionary defining the default security rules.save(bool): Platform Tier Only. IfTrue, all scan data is persisted in the Argus platform. Defaults toFalse.strict(bool): IfTrue(default), client-side configuration errors will raise an exception. IfFalse, a warning is issued instead.
ArgusClient instance.
Example:
General Scan Methods
These are the primary methods for performing ad-hoc scans on text.check_content()
The most flexible method for scanning a string of text. This is the primary method used by the Guardrail system.
content(str): The text to be scanned.policies(Optional[Policy]): If provided, this policy will be used for this scan instead of the client’s default policy.- Platform Overrides:
name,node_subtype,session_id,node_metadata,save. These provide granular control for observability.
ApiResult object.
check_prompt() / check_response()
Specialized methods for scanning a user prompt or an LLM response against the active policy.
prompt/response(str): The text to be scanned.policy(Optional[Policy]): Overrides the client’s default policy for this scan.asset_id/session_id/save: Platform Tier Only. Overrides the client’s defaults for this single call.
ApiResult object.
Guardrail Decorators (Platform Tier Only)
Guardrails are Python decorators used to automatically trace and protect functions within an agentic workflow. They log execution spans and can apply policies to function inputs and outputs.guard_entrypoint() / guard_agent() / guard_tool()
These decorators wrap different components of your workflow for observability and protection.
@client.guard_entrypoint(): Use on the main function that starts your workflow.@client.guard_agent(): Use on functions that represent a distinct agent or logical step.@client.guard_tool(): Use on functions that act as tools (e.g., calling an API, querying a database). This is the most common decorator and can apply policies to the tool’s inputs and outputs.
name(Optional[str]): A custom name for the node in the trace. Defaults to the function name.check_input_args(Optional[List[str]]): A list of argument names whose string values should be scanned on function entry.check_output(bool): IfTrue, the function’s return value will be scanned.policies(Optional[Policy]): A specific policy to apply for the decorator’s scans, overriding the client default.
Fine-Grained Scan Methods
These methods are convenient shortcuts for checking a single, specific risk. They all accept optional platform**kwargs (asset_id, session_id, save).
1. check_policy_violation()
Scans text against a custom list of keywords or rules.
rules(List[str]): A list of forbidden words or phrases.
2. check_secrets_keys()
Scans text for hardcoded secrets and keys. Primarily used for responses.
patterns(Optional[List[Tuple[str, str]]]): A list of tuples, where each tuple contains(name, regex_pattern), to whitelist specific patterns that might otherwise be flagged as secrets.
3. check_pii()
Scans text for Personally Identifiable Information (PII) like emails, phone numbers, etc.
4. check_toxicity()
Scans text for toxic content, including insults, threats, and profanity.
5. check_competitor_mention()
Scans text for mentions of specific competitor names.
competitors(List[str]): A list of competitor names to detect.
6. check_banned_topics()
Scans a prompt to see if it pertains to forbidden topics.
topics(List[str]): A list of forbidden topics (e.g., “weapons manufacturing”, “illegal activities”).
7. check_prompt_injection()
Scans a prompt for common prompt injection attack patterns.
8. check_unsafe_prompt()
Scans a prompt for requests that ask the LLM to generate harmful, unethical, or illegal content.
9. check_unsafe_response()
Scans an LLM response to ensure it does not contain harmful, unethical, or illegal content.
10. check_system_prompt_leak()
Scans an LLM response to check if it contains text from its own system prompt.
system_prompt(str): The exact system prompt string to check against.
Client State Management Methods
These methods allow you to inspect and modify the client’s default configuration after it has been created.Policy Management
set_policies(policies_to_set: Policy): Updates the client’s default policy.get_enabled_policies() -> Policy: Returns a dictionary of the currently active policies on the client.clear_policies(): Removes all default policies from the client.
Asset Management (Platform Only)
set_asset_id(asset_id: str): Sets or changes the defaultasset_idfor the client.get_asset_id() -> Optional[str]: Retrieves the current defaultasset_id.clear_asset_id(): Removes the defaultasset_idfrom the client.
Session Management (Platform Only)
set_session_id(session_id: str): Sets or changes the defaultsession_id.get_session_id() -> Optional[str]: Retrieves the current defaultsession_id.clear_session_id(): Removes the defaultsession_id.
Resource Management
close()
Closes the client and its underlying network session. This is a crucial step to release resources gracefully.
Always call close() when you are done with a client instance, either directly or by using atry...finallyblock. The client can also be used as a context manager (with ArgusClient.create(...) as client:), which will automatically callclose()on exit.

