Connecting Your System to Asenion Assurance - Language Model
This guide is for customers who want to run Asenion compliance and security tests (Asenion Assurance) against their AI system or LLM. It explains what you need to provide and how to allow Asenion Assurance to connect to your system.
→ Quick start: See Client Checklist for a step-by-step checklist covering Custom Provider and Model testing.
Implementing your own API? → See How to Implement Endpoints for Asenion Assurance for step-by-step instructions and request/response shapes so the test provider can connect and run tests.
What Asenion Assurance Does
Asenion Assurance sends prompts to your system or model (the “target”) and evaluates the responses for compliance and security (e.g. OWASP, ISO 42001, EU AI Act). To do that, Asenion Assurance must be able to:
- Call your system — send one or more user (and optionally system) messages.
- Receive responses — get back the model’s or API’s reply for each call.
Everything below is about how you expose that interface so Asenion Assurance can connect.
What You Need to Provide
What you provide depends on how your system is exposed: hosted LLM (OpenAI, Azure, etc.) vs your own API.
Option 1: Hosted LLM (OpenAI, Azure OpenAI, AWS Bedrock, GCP Vertex AI, Gemini)
If your “system” is a model on a public cloud, you provide credentials and endpoint/region so Asenion Assurance can call that deployment as the target.
| You provide | Purpose |
|---|---|
| API key (where applicable) | Authenticate to the LLM provider (OpenAI, Azure, some Gemini flows). |
| Endpoint / base URL (where applicable) | Where to send requests (e.g. Azure resource URL, or default for OpenAI). |
| Cloud project + region (GCP) | For Vertex AI discovery and deployed models (use a region that supports your model type, often us-central1 for Vertex publisher models). |
| Service account (GCP Vertex) | JSON key or ADC so the server can obtain tokens for Vertex AI. |
| Gemini API key (optional) | Google AI Studio (AIza…) or Vertex AI Express (AQ.…) key for publisher Gemini via the dedicated provider; can be entered in the UI when creating a target. |
| Model or deployment name | Which model or deployment to use (e.g. gpt-4o, Bedrock model ID, Vertex endpoint ID). |
| System purpose (optional) | Short description of your system’s role; used to tailor red-team and compliance tests. |
Typical setup:
- OpenAI: API key in `.env` (e.g. `OPENAI_API_KEY`) and, if needed, `OPENAI_API_BASE`.
- Azure OpenAI: Azure API key, endpoint, deployment name, and API version as required by your resource.
- AWS Bedrock: AWS credentials (access key, secret, region) or instance role; model ID in provider config.
- GCP Vertex AI (deployed / custom endpoints): GCP project ID, region, service account JSON or key file path; provider uses `plugins/vertex_provider.js` (service account / ADC).
- Gemini (API key path): In the Web UI, Connect AI System includes a Gemini section (same card layout as other vendors: header badge, optional tag, scrollable model list). Select one model with the radio control, optionally paste your API key, then Create target provider. The generated YAML uses `plugins/gemini_provider.js` and may store `config.apiKey` in that file so each target can use a different key. `GOOGLE_API_KEY` in Settings remains a fallback when the YAML has no key.
Asenion Assurance runs on the machine that holds these credentials; it does not need to be installed on your own servers. It only needs network access to the LLM API (and to any evaluation service you use, if applicable).
Option 2: Your Own API (Custom Backend)
If the system under test is your own API (your backend that wraps an LLM or implements a chat flow), you have two main options.
A. OpenAI-compatible chat API
If your API exposes an OpenAI-compatible chat completions endpoint (same request/response shape as OpenAI’s /v1/chat/completions), you only need to give Asenion Assurance:
| You provide | Purpose |
|---|---|
| Base URL | Root URL of your API (e.g. https://api.yourcompany.com/v1). |
| API key (if you use auth) | Header or query auth so Asenion Assurance can call your API. |
| System purpose (optional) | Short description of your system; used in tests. |
You configure a provider in Asenion Assurance that uses this base URL and key (and model name if your API expects it). Asenion Assurance then sends prompts to your API and reads the assistant message from the response.
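As a sketch, a provider YAML for an OpenAI-compatible backend might look like the following. The keys mirror the `id` / `label` / `config` fields (`apiHost`, `apiKeyEnv`, `temperature`) mentioned elsewhere in this guide, but the exact schema may differ in your Asenion version, so verify against `providers/example_custom.yaml`:

```yaml
# Hypothetical provider config for an OpenAI-compatible backend.
id: my_backend
label: "My Company Chat API"
config:
  apiHost: https://api.yourcompany.com/v1   # base URL of your API
  apiKeyEnv: MY_API_KEY                     # env var holding the key; not the key itself
  model: my-chat-model                      # if your API expects a model name
  temperature: 0.0
```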
B. Custom API (different shape)
If your API is not OpenAI-compatible (different paths, request/response format, or multi-step flows), you need an adapter so Asenion Assurance still sees “prompt in → response out”:
| You provide | Purpose |
|---|---|
| API endpoint(s) | URL(s) Asenion Assurance (or the adapter) will call. |
| Authentication | How Asenion Assurance (or the adapter) authenticates (e.g. API key, OAuth). |
| Adapter | A small script or plugin that Asenion Assurance runs: it receives “prompt” (and optional context), calls your API, and returns the “response” text. |
The adapter can be a custom provider (e.g. a script or Node plugin) that Asenion Assurance invokes; see Custom providers below. You can host the API anywhere (on-prem or cloud) as long as the machine running Asenion Assurance (or the adapter) can reach it over HTTPS.
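To make the adapter idea concrete, here is a minimal sketch in Python. The endpoint path (`/ask`), the field names (`query`, `instructions`, `answer`), and the `X-API-Key` header are illustrative assumptions, not Asenion's contract or your API's real shape; the point is only the mapping from "prompt in" to "response text out":

```python
# Hedged sketch of an adapter for a non-OpenAI-compatible API.
# Endpoint path and field names below are assumptions for illustration.
import json
import urllib.request


def build_payload(prompt, system=None):
    """Translate Asenion's prompt (and optional system message) into your API's request shape."""
    payload = {"query": prompt}              # assumed request field
    if system:
        payload["instructions"] = system     # assumed optional field
    return payload


def extract_reply(api_response):
    """Pull the assistant reply text out of your API's response shape."""
    return api_response["answer"]            # assumed response field


def ask_target(prompt, base_url, api_key, system=None):
    """Call the target API over HTTPS and return the reply as plain text."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/ask",       # hypothetical path
        data=json.dumps(build_payload(prompt, system)).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))
```

Whatever the payload shape, the adapter's job is exactly these three steps: build the request, call your API, and return the reply text.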
Network and Access
- Who runs Asenion Assurance?
- If you run Asenion Assurance (e.g. in your own environment): you only need to allow Asenion Assurance to reach your API or your LLM provider (outbound). No inbound access to your network is required.
- If Asenion runs Asenion Assurance for you: you must either (1) give Asenion credentials for your hosted LLM (Option 1), or (2) expose your API to the internet (or to a VPN/private link) and provide URL + auth so Asenion Assurance can call it (Option 2).
- Firewall / VPN:
The machine that runs Asenion Assurance must be able to open HTTPS connections to:
  - your LLM provider (OpenAI, Azure, Bedrock, GCP), or
  - your API (if you use Option 2).
- Secrets:
Prefer environment variables or a secrets manager for keys. Provider YAML can reference env vars (e.g. `apiKeyEnv: MY_API_KEY`). Gemini targets created from the UI may embed `config.apiKey` in the generated file under `providers/`; treat those YAML files like credential stores (restrict permissions, add them to `.gitignore`, never commit them publicly).
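The `apiKeyEnv` indirection can be sketched in a few lines of Python: the YAML names an environment variable, and the secret itself is read at runtime, so it never appears in the file. The precedence shown (inline `apiKey` first, as in UI-generated Gemini YAML, then `apiKeyEnv`) is an assumption; check your Asenion version's behavior:

```python
# Sketch of apiKeyEnv resolution; precedence order is an assumption.
import os


def resolve_api_key(config):
    """Resolve a provider's API key from its YAML config dict."""
    if config.get("apiKey"):                 # inline key embedded in the YAML file
        return config["apiKey"]
    env_name = config.get("apiKeyEnv")       # name of the env var, not the secret
    if env_name:
        return os.environ.get(env_name)
    return None
```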
Connect AI System (Web UI)
From Settings, Target Providers, or New Test Run, Connect AI System opens a drawer that discovers models from configured vendors (Azure, Bedrock, Google Cloud / Vertex, Gemini).
- Vendors appear as aligned cards: header row (badge, title, small tag such as Vertex AI or AI Studio), then the model list.
- Models use radio buttons (one selection across the whole drawer). The footer shows None selected or 1 selected; Create target provider stays disabled until you pick a model.
- Gemini uses the same card pattern with a light accent; it includes an API key field when you need a key-based publisher model. Help text explains that the key is stored in the generated provider YAML (or falls back to GCP when left blank).
- Each successful Create writes a `discovered_*.yaml` (or similar) under `providers/` and refreshes the target dropdown.
Minimal Setup (Quick Start)
- Choose your scenario
  - Hosted LLM → set the right env vars (and Settings fields), then use Connect AI System or hand-edit YAML under `providers/`.
  - Your own API (OpenAI-compatible) → add a provider YAML that points to your base URL and auth.
  - Your own API (custom) → implement a small adapter (custom provider) that calls your API and returns the reply text.
- Configure the target provider
  - Copy or create a YAML in `providers/` (e.g. `providers/target.yaml`) with:
    - `id` and `label`
    - `config` (e.g. `apiHost`, `apiKeyEnv`, `temperature`, model/deployment)
  - Set the corresponding env vars (e.g. `OPENAI_API_KEY`, `AZURE_API_KEY`).
- Run tests
  - Start the server (e.g. `python server.py`), open the Web UI (e.g. http://localhost:8000), select your target provider and framework, then start a run.
  - Or run from the command line (e.g. `run_redteam_workflow.py` with the same provider/config).
- Optional: system purpose
  - In the UI or in the run request, set a short “system purpose” so red-team and compliance tests are tailored to your use case.
Standard Custom Provider API (recommended)
If you expose your own API (not OpenAI-compatible), you can implement the Asenion Custom Provider API so Asenion Assurance can call it without a one-off adapter.
Endpoints:
| Method | Path | Description |
|---|---|---|
| POST | /chat | Send prompt, return assistant reply (required) |
| POST | /sessions | Create a new session, return session_id (optional) |
| POST | /sessions/{session_id}/restart | Restart a session, clear its state (optional) |
| DELETE | /sessions/{session_id} | End a session, invalidate its id (optional) |
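If you implement the optional session endpoints, their semantics can be sketched with an in-memory store. This is only an illustration of create / restart / end behavior; a real service would persist per-session conversation state and enforce auth:

```python
# Sketch of the optional session lifecycle as an in-memory store.
import uuid

SESSIONS = {}                              # session_id -> message history


def create_session():
    """POST /sessions: mint an id with empty state."""
    sid = uuid.uuid4().hex
    SESSIONS[sid] = []
    return {"session_id": sid}


def restart_session(sid):
    """POST /sessions/{id}/restart: keep the id, clear the state."""
    SESSIONS[sid] = []


def end_session(sid):
    """DELETE /sessions/{id}: invalidate the id."""
    SESSIONS.pop(sid, None)
```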
POST /chat — request body (JSON):
```json
{
  "message": "User prompt (required for single-turn)",
  "system": "Optional system prompt",
  "messages": [ { "role": "user|assistant|system", "content": "..." } ],
  "session_id": "Optional session id"
}
```
POST /chat — response body (200, JSON):
```json
{
  "content": "Assistant reply (required)",
  "session_id": "Optional, for next request",
  "usage": { "prompt_tokens": 0, "completion_tokens": 0 }
}
```
Auth: `Authorization: Bearer <token>` and/or `X-API-Key: <key>`.
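The request-to-response mapping above can be sketched as a single handler function. `generate_reply` is a hypothetical placeholder for your own model or backend call; the request/response field names follow the contract shown above:

```python
# Sketch of a POST /chat handler body; generate_reply is a placeholder.


def generate_reply(messages):
    """Hypothetical placeholder: call your own model or backend here."""
    last_user = next(m["content"] for m in reversed(messages) if m["role"] == "user")
    return "Echo: " + last_user


def handle_chat(body):
    """Map a POST /chat request body (dict) to the response body shape."""
    if body.get("messages"):
        messages = body["messages"]          # multi-turn form
    else:                                    # single-turn form
        messages = []
        if body.get("system"):
            messages.append({"role": "system", "content": body["system"]})
        messages.append({"role": "user", "content": body["message"]})
    response = {"content": generate_reply(messages)}   # "content" is required
    if body.get("session_id"):
        response["session_id"] = body["session_id"]    # echo for the next request
    return response
```

Wrap this in any HTTP framework you like; only the JSON shapes on the wire need to match the contract.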
- Full guide: How to Implement Endpoints — request/response schemas, cURL examples, and checklist.
- OpenAPI: asenion-custom-provider-api.html
After your endpoints match the contract, configure Asenion Assurance with your base URL and auth so the provider can connect and perform tests.
Custom Providers and Plugins
If your API does not match the standard above, you can connect it by implementing a custom provider that:
- Accepts the prompt (and optional system message or context) from Asenion Assurance.
- Calls your API (with your auth and payload shape).
- Returns the assistant reply as plain text (or the structure Asenion Assurance expects).
Examples in this repo:
- `plugins/example-custom-provider.js` — template that calls the standard Asenion Custom Provider API (POST /chat, optional session lifecycle). Copy it and set `apiBaseUrl` and `apiKeyEnv` to your target API; see `providers/example_custom.yaml` for config.
- Exec-style provider — Asenion Assurance can invoke a script (e.g. `exec:python path/to/your_wrapper.py`) that reads the prompt from stdin or env, calls your API, and prints the response. Your script is the adapter.
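An exec-style wrapper can be very small: read the prompt from stdin, call your backend, print the reply on stdout. In this sketch `call_my_api` is a hypothetical placeholder for the real HTTPS call into your API:

```python
#!/usr/bin/env python3
# Sketch of an exec-style wrapper script; call_my_api is a placeholder.
import sys


def call_my_api(prompt):
    """Placeholder: replace with a request to your API, returning its reply text."""
    return "(reply to: " + prompt + ")"


def main():
    prompt = sys.stdin.read().strip()        # prompt arrives on stdin
    sys.stdout.write(call_my_api(prompt) + "\n")  # reply leaves on stdout


if __name__ == "__main__":                   # run as: echo "prompt" | python wrapper.py
    main()
```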
Once the custom provider is in place, you register it in Asenion Assurance (e.g. as the target in a provider YAML or in the UI) and run tests as usual.
Summary Checklist
- Target is reachable: Asenion Assurance (or your adapter) can send HTTPS requests to your LLM API or your backend.
- Auth is set: API keys or service accounts via env or secure config; avoid committing secrets; if Gemini YAML stores `apiKey`, protect `providers/`.
- Provider config: A provider YAML (or custom provider) points to your endpoint and model/deployment (if applicable).
- Optional: System purpose is set so tests are aligned with your product.