Connecting Your System to Asenion Assurance - Language Model
This guide is for customers who want to run Asenion compliance and security tests (Asenion Assurance) against their AI system or LLM. It explains what you need to provide and how to allow Asenion Assurance to connect to your system.
→ Quick start: See Client Checklist for a step-by-step checklist covering Custom Provider and Model testing.
Implementing your own API? → See How to Implement Endpoints for Asenion Assurance for step-by-step instructions and request/response shapes so the test provider can connect and run tests.
What Asenion Assurance Does
Asenion Assurance sends prompts to your system or model (the “target”) and evaluates the responses for compliance and security (e.g. OWASP, ISO 42001, EU AI Act). To do that, Asenion Assurance must be able to:
- Call your system — send one or more user (and optionally system) messages.
- Receive responses — get back the model’s or API’s reply for each call.
Everything below is about how you expose that interface so Asenion Assurance can connect.
What You Need to Provide
What you provide depends on how your system is exposed: hosted LLM (OpenAI, Azure, etc.) vs your own API.
Option 1: Hosted LLM (OpenAI, Azure OpenAI, AWS Bedrock, GCP VertexAI)
If your “system” is a model deployed on OpenAI, GCP Vertex AI, Azure OpenAI, or AWS Bedrock, you provide credentials and endpoint so Asenion Assurance can call that deployment as the target.
| You provide | Purpose |
|---|---|
| API key | Authenticate Asenion Assurance to your LLM provider. |
| Endpoint / base URL | Where to send requests (e.g. Azure resource URL, or leave default for OpenAI). |
| Model or deployment name | Which model/deployment to use (e.g. gpt-4o, GPT-4o in Azure). |
| System purpose (optional) | Short description of your system’s role; used to tailor red-team and compliance tests. |
Typical setup:
- OpenAI: API key in .env (e.g. OPENAI_API_KEY) and, if needed, OPENAI_API_BASE.
- Azure OpenAI: Azure credentials (Azure Version ID, Azure Version Type ID, Azure Version Key ID).
- AWS Bedrock: AWS credentials (AWS access key, secret access key, region).
- GCP VertexAI: GCP credentials (GCP project ID, GCP region, GCP service account JSON, service account key file path).
Asenion Assurance runs on the machine that has these credentials; it does not need to be installed on your own servers, only network access to the LLM API (and to any evaluation service you use, if applicable).
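As an illustration, a hosted-LLM provider config for this option might look like the sketch below. The field names (apiHost, apiKeyEnv, deployment) follow the provider YAML conventions described later in this guide; the exact schema is an assumption here, so adjust to your installation.

```yaml
# providers/target.yaml — hypothetical Azure OpenAI target (schema is illustrative)
id: azure-target
label: "My Azure GPT-4o deployment"
config:
  apiHost: https://my-resource.openai.azure.com   # your Azure resource URL
  apiKeyEnv: AZURE_API_KEY                        # key read from env, never hardcoded
  deployment: gpt-4o                              # your deployment name
  temperature: 0.0
```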
Option 2: Your Own API (Custom Backend)
If the system under test is your own API (your backend that wraps an LLM or implements a chat flow), you have two main options.
A. OpenAI-compatible chat API
If your API exposes an OpenAI-compatible chat completions endpoint (same request/response shape as OpenAI’s /v1/chat/completions), you only need to give Asenion Assurance:
| You provide | Purpose |
|---|---|
| Base URL | Root URL of your API (e.g. https://api.yourcompany.com/v1). |
| API key (if you use auth) | Header or query auth so Asenion Assurance can call your API. |
| System purpose (optional) | Short description of your system; used in tests. |
You configure a provider in Asenion Assurance that uses this base URL and key (and model name if your API expects it). Asenion Assurance then sends prompts to your API and reads the assistant message from the response.
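A provider entry for an OpenAI-compatible backend could then be sketched as follows. The apiHost/apiKeyEnv field names mirror the convention used elsewhere in this guide; your actual config schema may differ.

```yaml
# providers/my_backend.yaml — hypothetical OpenAI-compatible target
id: my-backend
label: "Company chat API"
config:
  apiHost: https://api.yourcompany.com/v1   # root URL of your API
  apiKeyEnv: MY_API_KEY                     # auth key read from env
  model: chat-prod                          # only if your API expects a model name
```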
B. Custom API (different shape)
If your API is not OpenAI-compatible (different paths, request/response format, or multi-step flows), you need an adapter so Asenion Assurance still sees “prompt in → response out”:
| You provide | Purpose |
|---|---|
| API endpoint(s) | URL(s) Asenion Assurance (or the adapter) will call. |
| Authentication | How Asenion Assurance (or the adapter) authenticates (e.g. API key, OAuth). |
| Adapter | A small script or plugin that Asenion Assurance runs: it receives “prompt” (and optional context), calls your API, and returns the “response” text. |
The adapter can be a custom provider (e.g. a script or Node plugin) that Asenion Assurance invokes; see Custom providers below. You can host the API anywhere (on-prem or cloud) as long as the machine running Asenion Assurance (or the adapter) can reach it over HTTPS.
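A minimal adapter sketch in Python, assuming a hypothetical backend at https://api.example.internal/ask that accepts `{"question": ...}` and returns `{"answer": ...}`; the URL, payload shape, and MY_API_KEY env var are all placeholders to replace with your API's real contract.

```python
import json
import os
import urllib.request
from typing import Optional

API_URL = "https://api.example.internal/ask"  # hypothetical endpoint


def build_payload(prompt: str, system: Optional[str] = None) -> dict:
    """Map Asenion's prompt (and optional system message) to your API's shape."""
    payload = {"question": prompt}
    if system:
        payload["instructions"] = system
    return payload


def call_api(prompt: str, system: Optional[str] = None) -> str:
    """Send the prompt to your backend and return the plain reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, system)).encode(),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": os.environ["MY_API_KEY"],  # keep secrets in env vars
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["answer"]  # adapter output: reply text only
```

The only contract that matters is the outer one: prompt (and optional system message) in, reply text out; everything in between is your API's business.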
Network and Access
- Who runs Asenion Assurance?
- If you run Asenion Assurance (e.g. in your own environment): you only need to allow Asenion Assurance to reach your API or your LLM provider (outbound). No inbound access to your network is required.
- If Asenion runs Asenion Assurance for you: you must either (1) give Asenion credentials for your hosted LLM (Option 1), or (2) expose your API to the internet (or to a VPN/private link) and provide URL + auth so Asenion Assurance can call it (Option 2).
- Firewall / VPN:
  The machine that runs Asenion Assurance must be able to open HTTPS connections to:
  - Your LLM provider (OpenAI, Azure, Bedrock, GCP), or
  - Your API (if you use Option 2).
- Secrets:
  Store API keys and credentials in environment variables or a secrets manager, not in code or in docs. Use a single provider config (YAML) that references env vars (e.g. apiKeyEnv: MY_API_KEY).
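If your adapter or wrapper reads secrets itself, one simple pattern is to fail fast with a clear error when a required variable is missing, rather than sending an empty key. A small sketch, using the MY_API_KEY name from the example above:

```python
import os


def require_env(name: str) -> str:
    """Read a secret from the environment; fail fast with a clear error if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Usage: api_key = require_env("MY_API_KEY")
```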
Minimal Setup (Quick Start)
- Choose your scenario
- Hosted LLM → use a provider YAML for OpenAI, Azure, or Bedrock and set the right env vars.
- Your own API (OpenAI-compatible) → add a provider YAML that points to your base URL and auth.
- Your own API (custom) → implement a small adapter (custom provider) that calls your API and returns the reply text.
- Configure the target provider
  - Copy or create a YAML in providers/ (e.g. providers/target.yaml) with:
    - id and label
    - config (e.g. apiHost, apiKeyEnv, temperature, model/deployment)
  - Set the corresponding env vars (e.g. OPENAI_API_KEY, AZURE_API_KEY).
- Run tests
  - Start the server (e.g. python server.py), open the Web UI (e.g. http://localhost:8000), select your target provider and framework, then start a run.
  - Or run from the command line (e.g. run_redteam_workflow.py with the same provider/config).
- Optional: system purpose
- In the UI or in the run request, set a short “system purpose” so red-team and compliance tests are tailored to your use case.
Standard Custom Provider API (recommended)
If you expose your own API (not OpenAI-compatible), you can implement the Asenion Custom Provider API so Asenion Assurance can call it without a one-off adapter.
Endpoints:
| Method | Path | Description |
|---|---|---|
| POST | /chat | Send prompt, return assistant reply (required) |
| POST | /sessions | Create a new session, return session_id (optional) |
| POST | /sessions/{session_id}/restart | Restart a session, clearing its state (optional) |
| DELETE | /sessions/{session_id} | End and invalidate a session (optional) |
POST /chat — request body (JSON):
```json
{
  "message": "User prompt (required for single-turn)",
  "system": "Optional system prompt",
  "messages": [ { "role": "user|assistant|system", "content": "..." } ],
  "session_id": "Optional session id"
}
```
POST /chat — response body (200, JSON):
```json
{
  "content": "Assistant reply (required)",
  "session_id": "Optional, for next request",
  "usage": { "prompt_tokens": 0, "completion_tokens": 0 }
}
```
Auth: Authorization: Bearer <token> and/or X-API-Key: <key>.
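As a framework-agnostic sketch, the /chat contract above can be reduced to a pure handler that you wire into whatever web framework you use. Here generate_reply is a hypothetical stand-in for your actual model call, and the key check stands in for whichever auth scheme you choose.

```python
from typing import Optional

EXPECTED_KEY = "secret-token"  # in practice, load from env or a secrets manager


def generate_reply(message: str, system: Optional[str] = None) -> str:
    # Hypothetical stand-in: call your LLM or backend here.
    return f"Echo: {message}"


def handle_chat(body: dict, api_key: Optional[str]) -> tuple:
    """Validate a POST /chat body and return (status_code, response_json)."""
    if api_key != EXPECTED_KEY:
        return 401, {"error": "invalid API key"}
    message = body.get("message")
    # Multi-turn: fall back to the last user message in "messages".
    if not message and body.get("messages"):
        users = [m["content"] for m in body["messages"] if m.get("role") == "user"]
        message = users[-1] if users else None
    if not message:
        return 400, {"error": "message (or messages) is required"}
    resp = {"content": generate_reply(message, body.get("system"))}
    if body.get("session_id"):
        resp["session_id"] = body["session_id"]  # echo back for the next request
    return 200, resp
```

Only the 200-response shape (a JSON object with a required "content" field) is load-bearing for Asenion Assurance; the rest is your server's plumbing.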
- Full guide: How to Implement Endpoints — request/response schemas, cURL examples, and checklist.
- OpenAPI: asenion-custom-provider-api.yaml

After your endpoints match the contract, configure Asenion Assurance with your base URL and auth so the provider can connect and run tests.
Custom Providers and Plugins
If your API does not match the standard above, you can connect it by implementing a custom provider that:
- Accepts the prompt (and optional system message or context) from Asenion Assurance.
- Calls your API (with your auth and payload shape).
- Returns the assistant reply as plain text (or the structure Asenion Assurance expects).
Examples in this repo:
- plugins/example-custom-provider.js — template that calls the standard Asenion Custom Provider API (POST /chat, optional session lifecycle). Copy it and set apiBaseUrl and apiKeyEnv to your target API; see providers/example_custom.yaml for config.
- Exec-style provider — Asenion Assurance can invoke a script (e.g. exec:python path/to/your_wrapper.py) that reads the prompt from stdin or env, calls your API, and prints the response. Your script is the adapter.
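A minimal exec-style wrapper might look like the sketch below: the prompt arrives on stdin and the reply goes to stdout. The call_my_api helper is a hypothetical placeholder for the HTTP call to your backend.

```python
import sys


def call_my_api(prompt: str) -> str:
    # Hypothetical placeholder: replace with an HTTP call to your backend.
    return f"(reply to: {prompt})"


def main() -> None:
    prompt = sys.stdin.read().strip()  # Asenion Assurance pipes the prompt in
    print(call_my_api(prompt))         # the printed text is the reply


if __name__ == "__main__":
    main()
```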
Once the custom provider is in place, you register it in Asenion Assurance (e.g. as the target in a provider YAML or in the UI) and run tests as usual.
Summary Checklist
- Target is reachable: Asenion Assurance (or your adapter) can send HTTPS requests to your LLM API or your backend.
- Auth is set: API key or other credentials are provided via env (or secure config), not hardcoded.
- Provider config: A provider YAML (or custom provider) points to your endpoint and model/deployment (if applicable).
- Optional: System purpose is set so tests are aligned with your product.