Asenion Assurance — Client Checklist

Use this checklist to prepare your environment and system for running Asenion compliance and security tests. Complete the sections that apply to your setup.


Phase 1: Choose Your Integration Path

Path A: Hosted LLM (Models)
  Your system: OpenAI, Azure OpenAI, AWS Bedrock, or GCP Vertex AI
  What you need: API key, endpoint URL, model/deployment name

Path B: Your Own API (OpenAI-Compatible)
  Your system: Backend that exposes OpenAI-style chat completions
  What you need: Base URL, API key (if auth required)

Path C: Custom API (Custom Provider)
  Your system: Non–OpenAI-compatible API
  What you need: Implement the Asenion Custom Provider API contract

Phase 2: Environment & Access

Network & Reachability

  • Asenion Assurance (or its runner) can send HTTPS requests to your LLM API or backend
  • Firewall/VPN allows outbound HTTPS to your provider (OpenAI, Azure, Bedrock, GCP) or to your API base URL
  • No inbound access to your network is required if you run Asenion Assurance

Credentials

  • API keys and secrets are stored in environment variables or a secrets manager (not in code or docs)
  • Provider config (YAML) references env vars (e.g. apiKeyEnv: MY_API_KEY)
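
The env-var reference can be sketched as a small config fragment. Only the apiKeyEnv field comes from this checklist; the surrounding structure and other field names are assumptions about the provider YAML schema:

```yaml
# Illustrative provider config fragment.
# apiKeyEnv names the environment variable holding the key;
# the key itself never appears in the file.
provider:
  apiKeyEnv: MY_API_KEY
```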

Phase 3: Path-Specific Setup

Path A: Testing Models (Hosted LLM)

Use this when you are testing a direct LLM deployment (e.g. GPT-4, Claude, Llama) rather than an application that wraps it.

OpenAI
  - [ ] API key obtained and set (e.g. OPENAI_API_KEY in .env)
  - [ ] Base URL configured if non-default (OPENAI_API_BASE)
  - [ ] Model name specified (e.g. gpt-4o, gpt-4o-mini)
  - [ ] Provider YAML created pointing to your model

Azure OpenAI
  - [ ] API key obtained (AZURE_API_KEY)
  - [ ] Endpoint/base URL configured (e.g. https://your-resource.openai.azure.com)
  - [ ] Deployment name specified (e.g. GPT-4o)
  - [ ] Provider YAML created with Azure config

AWS Bedrock
  - [ ] AWS credentials configured (env or ~/.aws/credentials)
  - [ ] Region and model ID specified (e.g. us-east-1, anthropic.claude-3-sonnet-v1)
  - [ ] Provider YAML created for Bedrock

GCP Vertex AI
  - [ ] GCP project ID and region set
  - [ ] Service account JSON configured (formatted for .env)
  - [ ] Provider YAML created for Vertex
  • System purpose (optional) — a short description of how the model is used, so red-team and compliance tests can be tailored to it
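
As a sketch, a Path A provider YAML for OpenAI might look like the following. The exact schema is Asenion-specific; apart from the model name and the apiKeyEnv convention, every field name here is an assumption:

```yaml
# Hypothetical Path A provider config for OpenAI (field names illustrative).
id: openai-gpt4o
type: openai
model: gpt-4o
apiKeyEnv: OPENAI_API_KEY
# baseUrlEnv: OPENAI_API_BASE   # uncomment only if your base URL is non-default
```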

Path B: Your Own API (OpenAI-Compatible)

If your backend exposes an OpenAI-compatible /v1/chat/completions endpoint:

  • Base URL provided (e.g. https://api.yourcompany.com/v1)
  • API key or auth header configured if your API requires it
  • Provider YAML created that points to your base URL and model
  • System purpose (optional) defined for tailored tests
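
A quick way to confirm compatibility is to check that your endpoint's responses carry the standard OpenAI chat-completions shape (choices[0].message.content). The helper below is an illustrative sketch, not part of Asenion:

```python
def looks_openai_compatible(response_body: dict) -> bool:
    """Return True if a parsed JSON response matches the OpenAI chat completions shape."""
    try:
        content = response_body["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return False
    return isinstance(content, str)

# Example body in the OpenAI chat completions format
sample = {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
```

Run it against a real response from your base URL before pointing Asenion at it.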

Path C: Custom Provider (Custom API)

If your API is not OpenAI-compatible, implement the Asenion Custom Provider API. Before registering your API in Asenion:

Required: POST /chat

  • POST /chat returns 200 with { "content": "..." }
  • Request body accepts message (required when messages absent), system, session_id (optional)
  • Request body accepts messages array for multi-turn (optional)
  • Response includes content (or output / text if your API uses those)
  • Returns session_id in response when using sessions
  • Returns 400 when both message and messages are missing
  • Returns 401 when auth is missing or invalid
  • Returns JSON error body for 4xx/5xx: { "error": "...", "message": "..." }
  • Content-Type: application/json for both request and response
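
The contract above can be sketched as a framework-agnostic handler. Status codes and field names follow the checklist; the function itself is a stand-in for your real backend, and the reply content is a placeholder:

```python
def handle_chat(body: dict, authorized: bool) -> tuple[int, dict]:
    """Validate a POST /chat request per the contract; return (status, JSON body)."""
    if not authorized:
        # 401 when auth is missing or invalid
        return 401, {"error": "unauthorized", "message": "Missing or invalid credentials"}
    if not body.get("message") and not body.get("messages"):
        # 400 when both message and messages are missing
        return 400, {"error": "bad_request", "message": "Provide 'message' or 'messages'"}
    reply = {"content": "..."}  # stand-in for your model/application call
    if body.get("session_id"):
        reply["session_id"] = body["session_id"]  # echo the session when sessions are used
    return 200, reply
```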

Optional: Session Lifecycle

If your system is session-based:

  • POST /sessions — returns 200 with { "session_id": "..." }
  • POST /sessions/{session_id}/restart — returns 200; 404 if session not found
  • DELETE /sessions/{session_id} — returns 200 or 204; 404 if not found
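
The lifecycle above can be modeled with a small in-memory store. The class below is an illustrative sketch of the status codes, not Asenion code:

```python
import uuid

class SessionStore:
    """In-memory sessions mirroring the create/restart/delete lifecycle above."""

    def __init__(self):
        self._sessions: dict[str, list] = {}

    def create(self) -> tuple[int, dict]:
        # POST /sessions -> 200 with {"session_id": "..."}
        sid = str(uuid.uuid4())
        self._sessions[sid] = []
        return 200, {"session_id": sid}

    def restart(self, sid: str) -> int:
        # POST /sessions/{session_id}/restart -> 200; 404 if not found
        if sid not in self._sessions:
            return 404
        self._sessions[sid] = []  # clear history, keep the id
        return 200

    def delete(self, sid: str) -> int:
        # DELETE /sessions/{session_id} -> 204 (or 200); 404 if not found
        return 204 if self._sessions.pop(sid, None) is not None else 404
```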

Auth & Transport

  • Validates Authorization: Bearer <token> and/or X-API-Key: <key>
  • API reachable over HTTPS
  • Base URL and chat path documented for Asenion operator
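
Header validation accepting either scheme can be sketched as follows; the function name and the single-key model are ours, not part of the contract:

```python
def is_authorized(headers: dict, api_key: str) -> bool:
    """Accept either Authorization: Bearer <token> or X-API-Key: <key>."""
    bearer = headers.get("Authorization", "")
    if bearer.startswith("Bearer ") and bearer[len("Bearer "):] == api_key:
        return True
    return headers.get("X-API-Key") == api_key
```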

Phase 4: Test Configuration

  • System purpose set — short description of your system’s role for tailored red-team and compliance tests
  • Sensitive deployments — if no prompts or test data may leave your environment, coordinate with Asenion to run tests inside your own environment or with dedicated credentials

Phase 5: Pre-Run Verification

  • Asenion server started (or Asenion Assurance is running in your environment)
  • Web UI or command-line flow available
  • Target provider selected (model or custom API)
  • Framework chosen (OWASP, ISO 42001, EU AI Act, etc.)
  • Test run started and completes without connection/auth errors
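
Before starting a run, a quick script can confirm that credentials are actually present in the environment. The variable names depend on your path; OPENAI_API_KEY below is just an example:

```python
import os

def missing_env_vars(required: list[str]) -> list[str]:
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Example for Path A with OpenAI; substitute your own variable names.
# missing_env_vars(["OPENAI_API_KEY"]) -> [] means you are ready to run.
```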

Quick Reference

  • Asenion Assurance Endpoints — full API contract, request/response schemas, and cURL examples for the Custom Provider
  • Connecting Your System — overview of integration options, network requirements, and minimal setup

Summary Checklist (Print-Friendly)

Before your first test run:

  • Integration path chosen (A: Models, B: OpenAI-compatible API, or C: Custom Provider)
  • Network access verified (HTTPS to API/LLM)
  • Credentials stored securely (env vars)
  • Provider configured in Asenion
  • (Path C only) Custom Provider API implemented and verified
  • System purpose set (optional)
  • Run successful with no connection/auth errors