Asenion Assurance LLM Testing Checklist

Use this checklist to prepare your environment and system for running Asenion compliance and security tests. Complete the sections that apply to your setup.


Phase 1: Choose Your Integration Path

| Path | Your System | What You Need |
| --- | --- | --- |
| A. Hosted LLM (Models) | OpenAI, Azure OpenAI, AWS Bedrock, GCP Vertex AI (deployed / publisher models), or Gemini (AI Studio or Vertex AI Express API key) | API key and/or cloud credentials, endpoint or region, model or deployment name |
| B. Your Own API (OpenAI-Compatible) | Backend that exposes OpenAI-style chat completions | Base URL, API key (if auth required) |
| C. Custom API (Custom Provider) | Non–OpenAI-compatible API | Implement the Asenion Custom Provider API contract |

Phase 2: Environment & Access

Network & Reachability

  • Asenion Assurance (or its runner) can send HTTPS requests to your LLM API or backend
  • Firewall/VPN allows outbound HTTPS to your provider (OpenAI, Azure, Bedrock, GCP) or to your API base URL
  • No inbound access to your network is required if you run Asenion Assurance

Credentials

  • API keys and secrets are stored in environment variables or a secrets manager (not in code or docs)
  • Provider config (YAML) references env vars where possible (e.g. apiKeyEnv: MY_API_KEY)
  • Gemini (optional): If you use Connect AI System and enter an API key for a Gemini model, the generated provider YAML may store config.apiKey for that target so that different models can use different keys. Treat those files as secrets: restrict file permissions and do not commit them to public repos
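As an illustration of the apiKeyEnv pattern, here is a minimal provider YAML sketch. Only apiKeyEnv is named in this checklist; the other field names and the overall layout are assumptions, so check the Asenion Assurance provider documentation for the exact schema:

```yaml
# Hypothetical target definition: id, config.model, and the layout below
# are illustrative placeholders; only apiKeyEnv appears in this checklist.
id: openai-gpt-4o
config:
  model: gpt-4o
  apiKeyEnv: MY_API_KEY   # key is resolved from the environment at run time
```

Keeping the key name (not the key itself) in YAML lets the file be committed safely while the secret stays in your environment or secrets manager.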

Phase 3: Path-Specific Setup

Path A: Testing Models (Hosted LLM)

Use this when you are testing a direct LLM deployment (e.g. GPT-4, Claude, Llama) rather than an application that wraps it.

OpenAI

- [ ] API key obtained and set (e.g. OPENAI_API_KEY in .env)
- [ ] Base URL configured if non-default (OPENAI_API_BASE)
- [ ] Model name specified (e.g. gpt-4o, gpt-4o-mini)
- [ ] Provider YAML created pointing to your model

Azure OpenAI

- [ ] API key obtained (AZURE_API_KEY)
- [ ] Endpoint/base URL configured (e.g. https://your-resource.openai.azure.com)
- [ ] Deployment name specified (e.g. GPT-4o)
- [ ] Provider YAML created with Azure config

AWS Bedrock

- [ ] AWS credentials configured (env or ~/.aws/credentials)
- [ ] Region and model ID specified (e.g. us-east-1, anthropic.claude-3-sonnet-v1)
- [ ] Provider YAML created for Bedrock

GCP Vertex AI

- [ ] GCP project ID and region set (publisher models such as Gemini on Vertex often need a supported region, e.g. us-central1)
- [ ] Vertex AI API enabled; service account has roles to call your endpoints
- [ ] Service account JSON configured (formatted for .env) or key file path set
- [ ] Provider YAML created (typically plugins/vertex_provider.js for Vertex targets)

Gemini (API key)

- [ ] API key from Google AI Studio or Vertex AI Express key (format may be AIza… or AQ.…)
- [ ] In Connect AI System, open the Gemini section, select one model (radio), enter the key if you want it embedded in YAML, then Create target provider
- [ ] Generated YAML uses plugins/gemini_provider.js and calls Google’s publisher generateContent API with your key
- [ ] If you leave the key blank, the server may fall back to GCP service account / Vertex-style setup for that model (when supported)
  • System purpose (optional) — short description of how the model is used; used to tailor red-team and compliance tests

Path B: Your Own API (OpenAI-Compatible)

If your backend exposes an OpenAI-compatible /v1/chat/completions endpoint:

  • Base URL provided (e.g. https://api.yourcompany.com/v1)
  • API key or auth header configured if your API requires it
  • Provider YAML created that points to your base URL and model
  • System purpose (optional) defined for tailored tests
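Before a full test run, it can help to sanity-check that your endpoint is reachable and speaks the expected shape. The sketch below builds an OpenAI-style chat completions request with the Python standard library; the base URL and key are placeholders, and sending the request (urllib.request.urlopen) is left to you:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Build an OpenAI-style chat completions request for a compatible
    backend. Path and payload follow the OpenAI convention; your API's
    auth header may differ."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if api_key:  # only send auth when the API requires it
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers=headers,
        method="POST",
    )

# Hypothetical base URL and key, matching the example above.
req = build_chat_request(
    "https://api.yourcompany.com/v1",
    "sk-example",
    "gpt-4o",
    [{"role": "user", "content": "ping"}],
)
```

If urllib.request.urlopen(req) returns 200 with a JSON body containing a choices array, the backend is compatible enough for Path B.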

Path C: Custom Provider (Custom API)

If your API is not OpenAI-compatible, implement the Asenion Custom Provider API. Before registering your API in Asenion:

Required: POST /chat

  • POST /chat returns 200 with { "content": "..." }
  • Request body accepts message (required when messages absent), system, session_id (optional)
  • Request body accepts messages array for multi-turn (optional)
  • Response includes content (or output / text if your API uses those)
  • Returns session_id in response when using sessions
  • Returns 400 when both message and messages are missing
  • Returns 401 when auth is missing or invalid
  • Returns JSON error body for 4xx/5xx: { "error": "...", "message": "..." }
  • Content-Type: application/json for both request and response
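The status-code rules above can be sketched as pure logic, independent of any web framework. In this sketch, reply_fn is a hypothetical stand-in for your model backend; the field names follow the contract bullets:

```python
def handle_chat(body, authorized, reply_fn):
    """Pure-logic sketch of the /chat contract: returns (status, response).
    `reply_fn(system, turns)` is a hypothetical stand-in for the model call."""
    if not authorized:
        return 401, {"error": "unauthorized",
                     "message": "Missing or invalid credentials"}
    if not body.get("message") and not body.get("messages"):
        return 400, {"error": "bad_request",
                     "message": "Provide 'message' or 'messages'"}
    # A messages array (multi-turn) takes precedence over a single message.
    turns = body.get("messages") or [{"role": "user", "content": body["message"]}]
    response = {"content": reply_fn(body.get("system"), turns)}
    if body.get("session_id"):
        # Echo the session id back when the caller is using sessions.
        response["session_id"] = body["session_id"]
    return 200, response
```

Wiring this into your framework of choice and serializing the tuples as JSON responses satisfies the required checklist items above.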

Optional: Session Lifecycle

If your system is session-based:

  • POST /sessions — returns 200 with { "session_id": "..." }
  • POST /sessions/{session_id}/restart — returns 200; 404 if session not found
  • DELETE /sessions/{session_id} — returns 200 or 204; 404 if not found
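A minimal in-memory sketch of those three endpoints' semantics, with the status codes from the bullets above (a real service would persist sessions and route HTTP requests to these methods):

```python
import uuid

class SessionStore:
    """In-memory sketch of the session lifecycle status codes above."""

    def __init__(self):
        self._sessions = {}

    def create(self):
        """POST /sessions"""
        sid = str(uuid.uuid4())
        self._sessions[sid] = []          # fresh conversation history
        return 200, {"session_id": sid}

    def restart(self, sid):
        """POST /sessions/{session_id}/restart"""
        if sid not in self._sessions:
            return 404, {"error": "not_found", "message": "Unknown session"}
        self._sessions[sid] = []          # keep the id, clear the history
        return 200, {"session_id": sid}

    def delete(self, sid):
        """DELETE /sessions/{session_id}"""
        if self._sessions.pop(sid, None) is None:
            return 404, {"error": "not_found", "message": "Unknown session"}
        return 204, None
```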

Auth & Transport

  • Validates Authorization: Bearer <token> and/or X-API-Key: <key>
  • API reachable over HTTPS
  • Base URL and chat path documented for Asenion operator
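A sketch of the header check, assuming a single shared key (real deployments may validate tokens against an identity provider instead):

```python
def is_authorized(headers, api_key):
    """Accept either auth scheme from the checklist above.
    Header lookup is case-insensitive, as in most HTTP stacks."""
    lowered = {k.lower(): v for k, v in headers.items()}
    if lowered.get("authorization") == f"Bearer {api_key}":
        return True
    return lowered.get("x-api-key") == api_key
```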

Phase 4: Final Preparation

  • System purpose set — short description of your system’s role for tailored red-team and compliance tests
  • Sensitive deployments — if you must ensure no prompts or test data leave your environment, coordinate with Asenion to run tests in your own environment or use dedicated credentials

Phase 5: Pre-Run Verification

  • Asenion server started (or Asenion Assurance is running in your environment)
  • Web UI or command-line flow available
  • Target provider selected (model or custom API); Connect AI System registers one model per action (radio selection across all listed vendors)
  • Framework chosen (OWASP, ISO 42001, EU AI Act, etc.)
  • Test run started and completes without connection/auth errors

Quick Reference

| Document | Purpose |
| --- | --- |
| Asenion Assurance Endpoints | Full API contract, request/response schemas, cURL examples for Custom Provider |
| Connecting Your System | Overview of integration options, network, and minimal setup |

Summary Checklist (Print-Friendly)

Before your first test run:

  • Integration path chosen (A: Models, B: OpenAI-compatible API, or C: Custom Provider)
  • Network access verified (HTTPS to API/LLM)
  • Credentials stored securely (env vars; if Gemini YAML contains apiKey, lock down providers/ like any secret)
  • Provider configured in Asenion (UI: Connect AI System → pick one model with the radio control → create target)
  • (Path C only) Custom Provider API implemented and verified
  • System purpose set (optional)
  • Run successful with no connection/auth errors
